Dataset columns: entry_id (string, 33 chars), published (string, 14 chars), title (string, 10–200 chars), authors (list), primary_category (string, 5–18 chars), categories (list), text (string, 2–817k chars).
http://arxiv.org/abs/2306.02513v1
20230605002631
Zero-field composite Fermi liquid in twisted semiconductor bilayers
[ "Hart Goldman", "Aidan P. Reddy", "Nisarga Paul", "Liang Fu" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.str-el" ]
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139. *These authors contributed equally to the development of this work.
Recent experiments have produced evidence for fractional quantum anomalous Hall (FQAH) states at zero magnetic field in the semiconductor moiré superlattice system tMoTe_2. Here we argue that a composite fermion description, already a unifying framework for the phenomenology of 2d electron gases at high magnetic fields, provides a similarly powerful perspective in this new context, despite the absence of a magnetic field. To this end, we present exact diagonalization evidence for composite Fermi liquid states at zero magnetic field in tMoTe_2, at fillings n=1/2 and n=3/4. We dub these non-Fermi liquid metals anomalous composite Fermi liquids (ACFLs), and we argue that they play a central organizing role in the FQAH phase diagram. We proceed to develop a long wavelength theory for this ACFL state, which offers concrete experimental predictions upon doping the composite Fermi sea, including a Jain sequence of FQAH states and a new type of commensurability oscillations originating from the superlattice potential intrinsic to the system.
Zero-field composite Fermi liquid in twisted semiconductor bilayers. Hart Goldman^*, Aidan P. Reddy^*, Nisarga Paul^*, and Liang Fu. July 31, 2023.
Introduction. Recently, signatures of fractional quantum anomalous Hall (FQAH) states at zero magnetic field have been observed by optical measurements on twisted bilayer MoTe_2 (tMoTe_2) at fractional fillings of the moiré unit cell n=2/3 and 3/5 <cit.>. In a separate work, the charge gap of the putative FQAH state at n=2/3 was measured <cit.>. The existence of FQAH states in twisted homobilayers of transition metal dichalcogenides (TMD) was theoretically predicted as a result of nontrivial band topology <cit.>, spontaneous ferromagnetism, and strong correlations <cit.>. This sighting of FQAH states has provided new motivation to explore the phenomenology and phase diagram of partially filled Chern bands in tMoTe_2 and beyond <cit.>. Unlike Landau levels, Chern band systems can exhibit competition between incompressible FQAH states <cit.> and more conventional broken symmetry phases enabled by the presence of a periodic lattice structure, such as charge ordered phases <cit.> and generalized Wigner crystals <cit.>, or conducting phases <cit.> like Fermi liquids and even superconductors. Exotic quantum critical phases have also been shown to appear in half-filled flat Chern bands <cit.>. As a result, the global phase diagram of partially filled Chern bands is potentially much richer than that of Landau levels, calling for systematic study. In this work, we focus on the physics of twisted TMD bilayers at even-denominator filling factors, which have not yet received attention. We present numerical evidence from continuum model exact diagonalization (ED) calculations of gapless metallic states at filling factors n=1/2 and 3/4. Remarkably, depending on the twist angle, two types of ferromagnetic metals with full spin/valley polarization are found. At larger twist angles, the ground state is a Fermi liquid.
In contrast, at smaller twist angles where strong interaction effects induce odd-denominator FQAH states, we find non-Fermi liquid metals of composite fermions at n=1/2 and 3/4, which share features with the composite Fermi liquid at high magnetic fields <cit.>, but are “enriched” by the underlying moiré superlattice. We dub these zero-field non-Fermi liquid states “anomalous composite Fermi liquids” (ACFLs). In analogy with the composite Fermi liquid phases in Landau level systems, we propose the ACFL as the parent state of the FQAH phase diagram at B=0 <cit.>. Indeed, based on our ED study, we argue that the prominent FQAH states at n=2/3 and 3/5 in twisted TMD homobilayers are descendants of the ACFL state at n=1/2. These states fall along a Jain sequence of FQAH states, which we show emerges by doping the ACFL. We further reveal the unique phenomenology of the ACFL state itself. Perhaps most strikingly, the ACFL resistivity and thermodynamic properties experience intrinsic commensurability oscillations as a function of density, ρ_e, at B=0. This behavior contrasts both with an ordinary Fermi liquid and a composite Fermi liquid in a Landau level <cit.>. Close to the ACFL state at n=1/2, we find the oscillations are periodic in 1/Δρ_e and occur at large integers j satisfying 1/Δρ_e ∝ j/(Q k_F), where Δρ_e≡ρ_e-ρ is the doping density from half filling, Q is the moiré superlattice wave vector, and k_F=√(4πρ) is the composite Fermi wave vector. This result is a consequence of the flux attachment to charge in the ACFL – which causes the composite fermions to feel an effective magnetic field as the system is doped from half filling – and the system's intrinsic moiré potential. In contrast, commensurability oscillations in composite Fermi liquids in Landau level systems require an externally supplied periodic potential. Composite Fermi liquid phases are expected to exhibit non-Fermi liquid observable features, some of which may be accessible as new platforms are developed realizing the ACFL. For example, the thermodynamic entropy of the ACFL state can be measured from the change in chemical potential with temperature through a Maxwell relation <cit.>. In the clean limit, because gauge fluctuations lead to a logarithmic mass enhancement of composite fermions, the entropy of the ACFL state should also be enhanced <cit.>, s(T)/T∼ m_*(T)∼log T <cit.>, compared to the linear temperature dependence of an ordinary Fermi liquid, s(T)/T∼constant. In systems where the electronic Coulomb interaction is screened by a nearby metallic gate, this enhancement becomes a power law, s(T)∼ T^2/3.
Motivation. In ordinary Landau level quantum Hall systems, the existence of a composite Fermi liquid phase can be understood through flux attachment. At even-denominator filling ν=2πρ_e/B=1/2q, where q is an integer, ρ_e is the electron density, and B is the external magnetic field, attaching 2q flux quanta to each electron completely screens the magnetic field, leading to an effective system of composite fermions in effective magnetic field b_*=B-2q(2πρ_e)=0. As a result, the composite fermions form a Fermi surface strongly coupled to an emergent gauge field that implements the flux attachment <cit.>. By doping away from ν=1/2q, the composite fermions feel a nonvanishing magnetic field, and fill Landau levels, leading to the Jain sequence of observed incompressible fractional quantum Hall phases, ν_Jain = p/(2qp-1), where the integer, p, is the number of filled composite fermion Landau levels <cit.>.
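As a quick numerical illustration of this flux attachment bookkeeping (a minimal sketch, not taken from the paper), the following Python snippet tabulates the q=1 Jain fractions p/(2qp-1) quoted above and checks that the effective field on composite fermions vanishes at ν=1/(2q):

```python
# Minimal numeric check of the q = 1 Jain sequence nu = p/(2qp - 1) quoted above,
# and of the flux-attachment cancellation b* = B - 2q*(2*pi*rho_e) at nu = 1/(2q).
from fractions import Fraction

q = 1  # two flux quanta attached per electron
for p in range(1, 8):
    nu = Fraction(p, 2 * q * p - 1)
    print(f"p = {p}:  nu = {nu}  (~ {float(nu):.4f})")
# -> 1, 2/3, 3/5, 4/7, 5/9, ... approaching 1/2 as p grows

# At nu = 1/(2q), the attached flux exactly screens the external field.
two_pi_rho_e = 1.0            # arbitrary units for 2*pi*rho_e
B = 2 * q * two_pi_rho_e      # external field at nu = 1/(2q)
b_star = B - 2 * q * two_pi_rho_e
print("effective field b* at nu = 1/(2q):", b_star)   # 0.0
```

The fractions produced for p=2,3 are exactly the 2/3 and 3/5 fillings discussed above, and the sequence approaches 1/2, the composite Fermi liquid point.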
From a complementary point of view, the composite Fermi liquid is simply the p→∞ limit of the Jain sequence. A similar picture may in principle be applicable to twisted TMD bilayers in the absence of a physical magnetic field, when (1) interactions spontaneously drive all carriers into the Chern band of one valley; (2) the Coulomb Hamiltonian projected to the Chern band sufficiently resembles that of the LLL; and (3) the band dispersion is negligible relative to the system's characteristic interaction energy scale ∼ e^2/(ϵ a_M). When these conditions are satisfied, the problem can be approximately mapped to that of a partially filled Landau level through the band-projected Hamiltonian, H_projected =1/2∫ d^2q U(q) ρ_-q ρ_q . Here ρ_q is the density operator projected to the Chern band and U(q) is the Fourier transformed interaction potential. While the projected density operators, ρ_q, generically differ from their lowest-Landau-level (LLL) counterparts, one might expect that any of the quantum Hall phases in a Landau level at filling ν should be possible in a flat C=1 band at the same filling, as has been understood recently in a broad range of contexts. The challenge is to find situations in which such physics succeeds over other phases that are not possible in Landau levels. Hence, once a material is known to exhibit the FQAH effect, it is natural to anticipate that other essential features of the fractional quantum Hall phase diagram can also occur in the same material, such as the convergence of Jain sequence FQAH phases into a metallic composite Fermi liquid at half-filling. Numerical evidence for ACFL. We now provide numerical evidence for the existence of the ACFL in tMoTe_2. The nontrivial layer pseudospin structure of this system's Bloch wavefunctions endows its moiré bands with topological character <cit.>. In particular, the first moiré valence band in each valley has |C|=1, with opposite signs in opposite valleys due to time-reversal symmetry. We study the continuum model of tMoTe_2 with Coulomb interaction U(r)=e^2/ϵ r projected to the lowest moiré band, using finite size ED for torus geometry. Further details of the model and methodology are provided in the Supplemental Material. To establish a benchmark comparison for the properties of a composite Fermi liquid on a finite size torus, we show the low-lying many-body spectrum of the half-filled LLL on a torus with 16 flux quanta in Fig. <ref>(a), assuming full spin polarization. Given our system geometry, the spectrum features 12 exactly degenerate ground states with 2 in each of 6 momentum sectors. The momentum quantum numbers of the degenerate ground states reflect the most compact possible composite Fermi sea configurations <cit.>. There are 6 such configurations – one composite fermion is accounted for by occupying the state at the center of the Brillouin zone, 6 more by occupying the set of closest points , and the final one by occupying any one of the 6 next closest points. The additional factor of 2 in the overall ground state degeneracy is enforced by the non-commuting center-of-mass magnetic translations of the guiding center coordinates. We also show the momentum space occupation numbers of electron Bloch states averaged over the ground state manifold. The Bloch state occupation is uniform despite the presence of a composite Fermi sea. Next, in Fig. <ref>(b), we show the many-body spectrum of the θ=2^∘ lowest tMoTe_2 band at filling n=1/2 on the corresponding 16-unit-cell torus across all possible S_z ≥ 0 sectors. 
First, we observe that the lowest-lying states have S_z=S_z,max=4, indicating spontaneous, full spin/valley polarization. Moreover, these states have the same momentum quantum numbers as their partners in the LLL, providing evidence for a composite Fermi sea. The momentum space occupation numbers are nearly uniform as in the LLL, demonstrating that the system is not a ferromagnetic Fermi liquid. In Fig. <ref>(c), we show an exact diagonalization spectrum at n=3/4 that exhibits similar features to those of the LLL, which is shown in the Supplemental Material. In Fig. <ref>(d), we contrast these findings with n=1/2 at a larger twist angle θ=4.5^∘. Here, the lowest-energy states are still fully spin/valley polarized, but their many-body momenta are those expected from occupying the moiré band Bloch states with lowest energy. The Bloch state occupation numbers in Fig. <ref>(c) exhibit a sharp drop across the Fermi surface expected for non-interacting, spin-polarized holes (note that the hole band minima are at the Brillouin zone's corners rather than its center). At twist angles showing ACFL phases at n=1/2, 3/4, we expect the system at generic filling factors or sufficiently large inter-layer voltage drop to exhibit a ferromagnetic Fermi liquid phase. Therefore, it should be possible to tune between ACFL and ferromagnetic Fermi liquid phases via electrostatic gating or displacement field. Second-order transitions between composite Fermi liquids and Fermi liquids have been proposed in the past <cit.>. We leave the study of the ACFL–ferromagnetic metal phase transition, as well as a more comprehensive analysis of ACFL stability, to future work.
Effective theory of the ACFL. With this motivation, we propose a long wavelength effective theory of the ACFL in a Chern band at half-filling, n=1/2, ℒ_ACFL = ψ^†[i∂_t+a_t+A_t+𝒱(x)]ψ - (1/2m_*)|(i∂_i+a_i+A_i)ψ|^2 - V(ρ_e) - (1/2)(1/4π) ε_μνλ a_μ∂_ν a_λ - a_t ρ. Here ψ is the composite fermion field, m_* is an effective mass; V(ρ_e) is the density-density interaction potential, which we generally take to be 1/r Coulomb interactions unless otherwise specified; a_μ=(a_t,a_x,a_y) is a fluctuating Chern-Simons statistical gauge field; and A_μ=(A_t,A_x,A_y) is the background electromagnetic gauge field. We denote the value of the charge density at half filling by ρ, such that the charge per unit cell is n≡ρ×(unit cell area)=1/2. Importantly, although we focus on n=1/2 here, the theory for the ACFL at n=3/4 is easily obtained by attaching 4 flux quanta and acting with a particle-hole transformation (subtracting a filled C=1 band). We expect its universal properties to be essentially the same [While a lack of manifest particle-hole <cit.> and so-called reflection <cit.> symmetries are a significant problem for the HLR description of even-denominator composite Fermi liquids, the Chern band systems we study lack any microscopic particle-hole symmetry.] as the ACFL at n=1/2. In the Supplemental Material, we discuss how Eq. (<ref>) can be obtained from a parton mean field construction along similar lines to Ref. <cit.>, which also considered ACFL type phases in flat Chern bands. The theory in Eq. (<ref>) closely resembles the Halperin-Lee-Read (HLR) theory of half-filled Landau levels <cit.>. This is not surprising: the |C|=1 Chern band in twisted TMD bilayers can be thought of as carrying an emergent magnetic flux of one flux quantum per unit cell, which arises from the skyrmion lattice configuration of the “Zeeman” field acting on the layer pseudospin <cit.>.
Nevertheless, there are two major differences from the standard HLR theory for a half-filled Landau level. The first is the final term, which alters the usual flux attachment constraint. Using the equation of motion for a_t, ρ_e = ρ + (1/2)(1/2π)(∇×a), where we have used the fact that the physical electron density coincides with that of the composite fermions, ρ_e=δℒ_ACFL/δ A_t=ψ^†ψ, and boldface denotes spatial vectors. At half-filling of the Chern band, n=1/2, meaning that by Eq. (<ref>) the gauge flux per unit cell must vanish, and the composite fermions form a Fermi surface (as above, we use n to denote charge per unit cell). The second difference is that Eq. (<ref>) includes the effect of the moiré superlattice, in the form of a periodic scalar potential, 𝒱(x)=𝒱_0∑_n=1^3 cos(Q_n·x), where Q_n are the moiré superlattice wave vectors (see Supplemental Material). The full scalar potential felt by the composite fermions is therefore 𝒱(x)+A_t, where A_t includes any additional probe fields. We will see that the presence of this term leads to commensurability oscillations which are unique to the ACFL. Although the theory in Eq. (<ref>) should correctly reproduce long wavelength, universal observable properties, we emphasize that this theory is not meant to completely incorporate microscopic details. For example, it does not give the correct algebra of density operators, nor does it incorporate the composite fermion dipole moment. Rather, we expect that should a complete, band-projected theory be constructed, then Eq. (<ref>) could be understood as its long wavelength limit. For recent efforts to develop band-projected composite fermion theories in the context of the LLL, see Refs. <cit.>.
FQAH sequence. Doping away from half-filling by tuning charge density or applied magnetic field causes the composite fermions to feel a net magnetic field and fill Landau levels. As a result, we can immediately predict a Jain sequence of FQAH states in tMoTe_2 corresponding to integer quantum Hall states of composite fermions. Like fractional Chern insulator states developed earlier <cit.>, these FQAH phases are topological orders enriched by (super)lattice symmetry. Say that the composite fermions fill p Landau levels, ν_ψ = 2π⟨ψ^†ψ⟩/b_* = p, with b_* = ∇×(a+A), where b_* is the total magnetic field felt by the composite fermions. Combining the flux attachment constraint, Eq. (<ref>), with Eq. (<ref>), we can relate the electron density to the applied magnetic field, B=∇×A, ρ_e(B) = [p/(2p-1)](2ρ - B/(2π)). The Streda formula then implies that one will measure Landau fans extending to B=0, with slopes that fall on the Jain sequence (see Fig. <ref>), dρ_e/dB = σ_xy = -p/(2p-1). For the FQAH sequence proximate to n=3/4, one obtains FQAH states on the sequence n = 1 - p/(4p-1). It is instructive to multiply Eq. (<ref>) on both sides by the superlattice unit cell area to obtain the simple expression, n = [p/(2p-1)](1-n_Φ), where n_Φ is the flux per unit cell. The FQAH Jain sequence includes the observed state at filling 2/3 in tMoTe_2, which has also been studied numerically <cit.>. Additionally, we present ED evidence for additional Jain FQAH states in tMoTe_2 at n=2/5 and its particle-hole conjugate at n=3/5 in Fig. <ref>.
Intrinsic commensurability oscillations at zero field. We now explore a host of phenomena arising from an interplay of flux attachment with the presence of the moiré superlattice potential.
Indeed, due to the periodic modulation intrinsic to moiré materials, we find that both commensurability oscillations <cit.> and Hofstadter subgaps can be accessed by tuning density alone. Commensurability oscillations occur when the cyclotron radius and the modulation period are commensurate. More precisely, magnetoresistance minima and compressibility maxima are expected when a system in a spatially modulated potential with wave vector Q satisfies the electronic flat band condition 2k_F Q ℓ^2 = 2π(j-1/4), for j a positive integer and ℓ the effective magnetic length felt by the electric charges. This condition holds robustly from perturbative and semiclassical approaches <cit.>, and we derive it in the Supplemental Material. Again focusing on the n=1/2 ACFL, if the system is doped from half-filling with density Δρ_e, the composite fermions feel a magnetic field b_* = 4πΔρ_e by the flux attachment constraint in Eq. (<ref>). As in the HLR approach, the composite Fermi wave vector of the ACFL described by Eq. (<ref>) is set by the electric charge density, k_F = √(4πρ). We then find that commensurability oscillations occur at densities Δρ_e = [k_F Q/(2π)^2](1/j) at large j. We emphasize that in the ACFL these oscillations occur in the absence of any external magnetic field, and they coexist with SdH oscillations coming from filling integer Landau levels of composite fermions, which realize Jain states in a clean system. Moreover, tuning density and external field together allows full access to the magnetic spectrum of composite fermions near the half-filled Chern band. For instance, simultaneous control could allow the observation of Weiss oscillations without SdH oscillations by tuning within a CF Landau level. Finally, we note that the 2d moiré potential will give rise to zero-field Hofstadter subgaps within the CF Landau levels, which may also be observable.
Discussion. Starting from a band-projected continuum model for tMoTe_2, we have presented exact diagonalization evidence for compressible, non-Fermi liquid states at zero magnetic field, which we dub the anomalous composite Fermi liquid. Much as in conventional fractional quantum Hall systems, we argue that the ACFL picture offers a powerful organizing perspective for understanding fractional quantum anomalous Hall states. Indeed, all of the states for which there currently exists theoretical or experimental evidence fall on the celebrated Jain sequence, which also explains a large majority of fractional quantum Hall phases observed under strong magnetic fields. We furthermore have developed an effective theory capturing the universal properties of the ACFL, offering concrete, observable signatures of the anomalous composite Fermi liquid, including commensurability oscillations and Jain sequence FQAH states themselves. Our numerical and theoretical analysis paves the way for further investigation. Interestingly, in the recent experiment on tMoTe_2 <cit.>, the coercive field is found to be enhanced at n=3/4, in addition to n=2/3. Unlike the n=2/3 state, which is incompressible, the n=3/4 state appears to be compressible. Our theory offers a natural explanation of the observed n=3/4 state as the anomalous composite Fermi liquid.
Acknowledgements. We are grateful to Xiaodong Xu and Ady Stern for interesting discussions. HG also thanks Mike Mulligan, Sri Raghu, T. Senthil, Raman Sohal, and Alex Thomson for conversations on related topics.
This work was supported by the Air Force Office of Scientific Research (AFOSR) under award FA9550-22-1-0432 and the David and Lucile Packard Foundation. HG is supported by the Gordon and Betty Moore Foundation EPiQS Initiative through Grant No. GBMF8684 at the Massachusetts Institute of Technology.
§ SUPPLEMENTAL MATERIAL
§.§ Exact diagonalization methodology and extended data Our starting point is the single-particle continuum model for AA-stacked, K-valley TMD moiré homobilayers, H_0 = ∑_σ=↑,↓ ∫ d r ψ_σ^†(r) ℋ_σ ψ_σ(r). Here ℋ_σ is a 2×2 matrix in layer pseudospin, ℋ_↑ = [ ħ^2(-i∇ - κ_+)^2/2m + V_1(r), t(r); t^†(r), ħ^2(-i∇ - κ_-)^2/2m + V_2(r) ], where V_l(r) = -2V∑_i=1,3,5 cos(g_i·r+(-1)^lϕ), t(r) = w(1+e^ig_2·r+e^ig_3·r), and g_i = [4π/(√(3)a_M)](cos π(i-1)/3, sin π(i-1)/3), κ_- = (g_1 + g_6)/3, κ_+ = (g_1 + g_2)/3. ℋ_↓ is the time reversal conjugate of ℋ_↑. Due to large spin-orbit splitting at the monolayer valence band maxima, the spin and valley are locked at low energies into a single “spin” quantum number indexed by σ. This model has been derived and its rich single-particle properties discussed elsewhere <cit.>. We work under a particle-hole transformation c_k↑→ c^†_k↑ so that the spectrum is bounded from below. We study the model's many-body physics by adding a Coulomb interaction V = e^2/ϵ r and projecting to the lowest moiré band, yielding the projected Hamiltonian H̃ = ∑_k,σ ε_kσ c^†_kσ c_kσ + 1/2∑_k'p'kp,σσ' V_k'p'kp;σσ' c^†_k'σ c^†_p'σ' c_pσ' c_kσ, where c_kσ creates a hole with moiré crystal momentum k and spin/valley quantum number σ, with band energy ε_kσ. V_k'p'kp;σσ'≡⟨k'σ;p'σ'|V̂|kσ;pσ'⟩ are the Bloch state interaction matrix elements. This method can be viewed as variationally approximating the ground and low-lying excited states within the Fock space of the lowest moiré band. Diagonalizing <ref> in a plane-wave basis yields single-particle Bloch states as a linear combination of basis states labeled by spin σ, total momentum (measured relative to the moiré Brillouin zone center) as a sum of moiré crystal and reciprocal lattice momenta |k+g⟩, and layer l. From these Bloch states the Coulomb matrix elements can be calculated given the Fourier transform in two dimensions of the Coulomb potential V(q)=2π e^2/ϵ q. We neglect finite interlayer separation since it is much smaller than a typical moiré period, and note that the choice between a long range or screened Coulomb interaction should be relatively inconsequential in a finite size system when the screening length is larger than several moiré periods a_M. We use the continuum model parameters reported in Ref. <cit.> for tMoTe_2. For the exact diagonalization calculations of the LLL on a torus, we follow the LLL Bloch basis prescription of Ref. <cit.>. In Fig. <ref>, we provide evidence that the same qualitative results for the composite Fermi liquid and ferromagnetic metal at n=1/2 persist in the presence of a weaker Coulomb interaction (ε=20). In Fig. <ref>, we show the ED spectrum of the LLL at ν=3/4 with which Fig. <ref>(c) can be compared. Finally, in Fig. <ref>, we show that the spectrum of the moiré band ACFL at n=3/4, θ=1.9^∘ resembles its LL cousin even more closely than at θ=2^∘ shown in the main text.
§.§ Parton construction of the ACFL effective theory Despite the absence of external magnetic flux in the Chern band context, much of the physics of flux attachment persists if we adopt a perspective where the physical electrons, denoted c, are fractionalized into emergent parton degrees of freedom, c_x=ϕ_x ψ_x .
Here ϕ is taken to be a bosonic “holon” that carries the physical electric charge, while ψ is a neutral fermionic “spinon.” Using this parton description allows for a simple extension of the intuition of flux attachment to the Chern band setting <cit.>. However, we note recent efforts to develop lattice flux attachment procedures for constructing FQAH and fractional Chern insulator phases <cit.>. The above parton decomposition has a local gauge redundancy, under which ψ_x→ e^iθ_xψ_x and ϕ_x→ e^-iθ_xϕ_x, meaning that both must also couple to an emergent U(1) gauge field which we denote a_μ=(a_t,a_x,a_y). In summary, ϕ couples to the combination A_μ-a_μ, where A_μ=(A_t,A_x,A_y) is the physical (background) electromagnetic gauge field, while ψ couples to a_μ alone. We will assume that the system is in a regime where gauge fluctuations do not confine the partons. Previously, Refs. <cit.> applied the parton decomposition in Eq. (<ref>) to study ACFL phases starting from lattice tight binding models. Here, with an eye toward moiré systems, we approach the emergence of the ACFL from a more long wavelength perspective. We treat the long-wavelength moiré superlattice as a periodic electric scalar potential, A_t=𝒱(x). Then, neglecting gauge fluctuations, ⟨ a_μ⟩=0, the holons, ϕ, will see the same band structure as the electrons in the physical system of interest. Considering the example of tMoTe_2, at an appropriate twist angle, time-reversal symmetry is broken due to spontaneous Ising ferromagnetism, and a narrow C=-1 flat band occurs. If the physical electrons half fill this band, then so too do the holons (at mean field level), since they see the same superlattice potential. Because the holons are bosons, they then form an incompressible, bosonic FQAH state with the same topological order as the ν=-1/2 bosonic Laughlin state and Hall conductivity, σ^ϕ_xy=-e^2/2h. They may be integrated out to yield the FQAH topological quantum field theory (TQFT) <cit.>, ℒ_ϕ = (2/4π) b db + (1/2π) b d(A-a) - (A_t-a_t)ρ, where we have introduced an auxiliary gauge field, b_μ. We define ρ to be the value of the physical charge density, such that the filling is ρ×(moiré unit cell area) ≡ 𝔰 = -1/2. Indeed, the main difference from the usual continuum bosonic Laughlin state is the presence of the final term, which glues the charge to the superlattice and guarantees that the theory makes sense in the absence of an external magnetic field. Notably, in a FQAH or fractional Chern insulator state at finite field, 𝔰 should be quantized such that 𝔰n∈ℤ, where n is the topological degeneracy on the torus <cit.>. Assuming that at long wavelengths the spinons can be described with a quadratic dispersion and effective mass, m_*, one then obtains a final effective theory for the Chern band system at half-filling, ℒ_ACFL = ψ^†(i∂_t+a_t)ψ - (1/2m_*)|(i∂_i+a_i)ψ|^2 + (2/4π) b db + (1/2π) b d(A-a) - (A_t-a_t)ρ. Integrating out b_μ gives the effective Lagrangian in Eq. (<ref>). However, only the Lagrangian with b_μ is gauge invariant on arbitrary manifolds.
§.§ Commensurability oscillations A distinguishing feature of the anomalous composite Fermi liquid (ACFL) is that it will display commensurability oscillations as a function of density alone due to its built-in moiré periodic potential, while an ordinary Fermi liquid or finite-field CFL will not. In contrast, GaAs displays commensurability oscillations due to an externally applied periodic scalar or vector potential <cit.>, and only at finite field.
Upon doping away from half-filling, ρ̅ = (1/2)A_u.c.^-1, the composite fermions feel an effective magnetic field b_* = 4π(ρ_e - ρ̅). Let us assume that the added density (and thus b_*) is spatially uniform. We can treat this system as a parabolic band of fermions with effective mass m^* in a magnetic field b_* and a periodic scalar potential. The potential has triangular lattice symmetry with wavevector Q and strength V_0. The composite fermions half-fill the lowest band of the moiré Brillouin zone. In this section we discuss the perturbative and semiclassical approaches to commensurability oscillations. While the former approach is well known, discussions of the latter approach are lacking. The advantage of the latter is that it shows that commensurability oscillations naturally arise even in a nonperturbative regime V_0 ≫ω_c; indeed, the ACFL may be in this regime at small b_*.
Perturbative approach. As noted in Ref. <cit.>, magnetoresistance minima occur at the electron flat band condition. To first order in perturbation theory, the bandwidth of the n-th Landau level in a scalar potential of wavevector Q is controlled by the factor e^(-Q^2ℓ^2/4) L_n(Q^2ℓ^2/2) = cos(√(2n)Qℓ - π/4)/√(π Q ℓ√(n/2)) + O(n^-3/4), so at large n, using the fact that n = ρ_e 2πℓ^2 and k_F = √(4πρ_e), we have the flat band condition 2k_FQℓ^2 = 2π(j-1/4). This implies 2R_c/a = j-1/4 when a = 2π/Q, while for the triangular lattice this implies 4R_c/(√(3)a) = j-1/4 <cit.>. For composite fermions, this condition becomes 2√(4π(ρ̅+Δρ_e)) Q/(4πΔρ_e) = 2π(j-1/4), which yields Eq. (<ref>) at large j.
Semiclassical approach. We follow the semiclassical network model approach to the magnetic density of states as in <cit.>. Consider the network model shown in Fig. <ref>. For simplicity we are considering a 1D modulation instead of a 2D modulation, since the nature of commensurability oscillations does not change qualitatively between the two. We are assuming some phenomenological tunneling T between Fermi surfaces in neighboring zones. The tunneling is best thought of as wavepacket tunneling over a barrier (in k-space). The barrier is set by the dispersion, and hence T depends on V_0. Although it can occur at other points on the Fermi surface, the tunneling is peaked when the velocity is normal to the barrier; hence as a simplified model we have taken a single hopping between Fermi surface tops and bottoms. To describe scattering at the junctions, we adopt a basis where (0,1)^T and (1,0)^T are the dashed and straight incoming edges, respectively. After a suitable gauge transformation, the scattering unitary at the junction can be written as U = [ √(1-T^2) e^(2πiϕ), -T; T, √(1-T^2) e^(-2πiϕ) ]. Together with the requirement that electrons pick up phases along links so that the total phase around any closed orbit is equal to the corresponding Aharonov-Bohm flux, this specifies the network model completely. We solve for the spectrum at a given magnetic field B by solving for eigenmodes. Valid eigenmodes must be periodic (up to phase) in the repeated Brillouin zone. Let S_0 = π k_F^2 and S_□ = 2k_F Q. Note S_□ is not the area of any valid closed orbit, but S_1 = S_0 + S_□ is. Next, note that the network could be considered as a 1D array of area-S_1 closed orbits, whose mutual overlaps are the area-S_0 orbits. These correspond to the “original” and “lens” orbits of Ref. <cit.>. It follows from the analysis of Ref. <cit.> that there is a flat band whenever S_0 and S_1 are both quantized, or, equivalently, whenever S_0 and S_□ are both quantized.
(A flat band in the network model corresponds to an extensive set of localized eigenmodes.) In short, flat bands occur when ℓ^2 S_0 = 2π(n+γ) and ℓ^2 S_□ = 2π(m-ϕ), where m, n are suitable integers and γ = 1/2 is a Maslov correction. Using k_F^2 = 2m^*E, the first condition becomes E = ω_c(n+1/2), while the second is precisely the commensurability condition 2k_F Q ℓ^2 = 2π(j-ϕ) for ϕ = 1/4. The flat band condition coincides with a compressibility maximum and resistance minimum. Note that the dependence to all orders in V_0/ω_c is encoded in T, which does not affect the final result. The agreement of the two approaches supports the robustness of the commensurability oscillations across the relevant range of b_*.
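As a numerical sanity check of the flat-band condition derived above, the sketch below (not from the paper; the moiré period in the last block is a placeholder value) compares the zeros of the Landau-level bandwidth factor e^(-Q^2ℓ^2/4) L_n(Q^2ℓ^2/2) against the asymptotic positions Qℓ = π(j-1/4)/√(2n), and then evaluates the ACFL commensurability dopings Δρ_e = [k_F Q/(2π)^2](1/j):

```python
# Check that the zeros of exp(-Q^2 l^2/4) * L_n(Q^2 l^2/2) approach
# Q*l = pi*(j - 1/4)/sqrt(2n), i.e. 2 k_F Q l^2 = 2*pi*(j - 1/4) with k_F ~ sqrt(2n)/l.
import numpy as np
from scipy.special import eval_laguerre

n = 30                                  # Landau-level index (large-n regime)
ql = np.linspace(0.05, 3.0, 20000)      # dimensionless Q*l
f = np.exp(-ql**2 / 4) * eval_laguerre(n, ql**2 / 2)

# locate sign changes of f -> numerical flat-band points
idx = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
numerical_zeros = 0.5 * (ql[idx] + ql[idx + 1])
j = np.arange(1, len(numerical_zeros) + 1)
predicted = np.pi * (j - 0.25) / np.sqrt(2 * n)
for z, p in zip(numerical_zeros[:6], predicted[:6]):
    print(f"numerical Q*l = {z:.4f}   asymptotic pi*(j-1/4)/sqrt(2n) = {p:.4f}")

# ACFL commensurability dopings (illustrative numbers only: a_M is a placeholder
# moire period, not a value taken from the paper).
a_M = 10e-9                             # placeholder moire lattice constant [m]
A_uc = np.sqrt(3) / 2 * a_M**2          # triangular-lattice unit-cell area
rho = 0.5 / A_uc                        # density at half filling, n = 1/2
k_F = np.sqrt(4 * np.pi * rho)
Q = 4 * np.pi / (np.sqrt(3) * a_M)      # moire reciprocal-lattice wave vector magnitude
for jj in range(3, 7):
    print(f"j = {jj}:  Delta rho_e = {k_F * Q / (2 * np.pi)**2 / jj:.3e}  m^-2")
```

The first few numerical zeros agree with the asymptotic π(j-1/4)/√(2n) positions to within a few percent, illustrating why the same 2k_F Q ℓ^2 = 2π(j-1/4) condition emerges from both the perturbative and semiclassical treatments.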
http://arxiv.org/abs/2306.08243v2
20230614050411
MMASD: A Multimodal Dataset for Autism Intervention Analysis
[ "Jicheng Li", "Vuthea Chheang", "Pinar Kullu", "Eli Brignac", "Zhang Guo", "Kenneth E. Barner", "Anjana Bhat", "Roghayeh Leila Barmaki" ]
cs.CV
[ "cs.CV", "cs.LG" ]
University of Delaware, Newark, DE, United States. Correspondence: Roghayeh Barmaki ([email protected]). Autism spectrum disorder (ASD) is a developmental disorder characterized by significant social communication impairments and difficulties perceiving and presenting communication cues. Machine learning techniques have been broadly adopted to facilitate autism studies and assessments. However, computational models in the autism community are primarily concentrated on specific analyses and validated on private datasets, which limits comparisons across models due to privacy-related data sharing complications. This work presents a novel privacy-preserving open-source dataset, MMASD as a MultiModal ASD benchmark dataset, collected from play therapy interventions of children with Autism. MMASD includes data from 32 children with ASD, and 1,315 data samples segmented from over 100 hours of intervention recordings. To promote public access, each data sample consists of four privacy-preserving modalities of data, some of which are derived from original videos: (1) optical flow, (2) 2D skeleton, (3) 3D skeleton, and (4) clinician ASD evaluation scores of children, e.g., ADOS scores. MMASD aims to assist researchers and therapists in understanding children's cognitive status, monitoring their progress during therapy, and customizing the treatment plan accordingly. It also offers inspiration for downstream tasks such as action quality assessment and interpersonal synchrony estimation. The MMASD dataset can be easily accessed at <https://github.com/Li-Jicheng/MMASD-A-Multimodal-Dataset-for-Autism-Intervention-Analysis>. CCS concepts: [500] Social and professional topics - People with disabilities; [500] Human-centered computing; [500] Computing methodologies - Activity recognition and understanding; [500] Computing methodologies - Machine learning. (Teaser figure) MMASD provides multiple multimodal privacy-preserving features derived from original videos via ROMP <cit.>, OpenPose <cit.> and Lucas-Kanade <cit.>, including optical flow, 2D/3D skeleton and clinician evaluation results.
The dataset can be accessed publicly and used to investigate research questions centered on the social and behavioral interactions of children with ASD in playful group activities.
MMASD: A Multimodal Dataset for Autism Intervention Analysis. Roghayeh Leila Barmaki.
§ INTRODUCTION Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by significant impairments in social communication, as well as difficulties in perceiving and expressing communication cues. Approximately 1 in 54 children are on the spectrum in the United States, resulting in over 1 million affected individuals nationwide <cit.>. Primary treatment of ASD includes behavioral and psycho-social interventions accompanied by prescribed medication. Behavioral and psycho-social interventions facilitate social and communication development, while medication helps control associated symptoms and comorbid problems <cit.>. Specifically, psycho-social interventions are diverse in content and can vary in curriculum, structure, discipline, and theme. Prevailing therapeutic interventions include applied behavior analysis and robot-assisted therapy, both of which can provide valuable data for analyzing children's mental development and developing individualized treatment plans. Various studies have widely adopted machine learning techniques to facilitate autism research <cit.>. Compared to traditional methods that rely heavily on human expertise and experience, machine learning approaches can help reduce the need for human labor and associated costs while still achieving decent performance. The autism community has benefited from machine learning techniques in many areas, including but not limited to autism diagnosis <cit.>, emotion recognition <cit.>, and movement pattern assessment <cit.>. In machine learning research, it is widely accepted to follow a research pipeline that involves developing, applying, and comparing models across multiple benchmark datasets to ensure fair comparisons in performance. However, in the autism community, commonly recognized benchmarks, especially for behavior analysis and activity understanding, are limited due to privacy concerns. Typically, models are validated on a private dataset, and data-sharing roadblocks can restrict the comparison between models. In this sense, the availability of publicly accessible datasets is a crucial first step for the autism community since it allows cutting-edge machine-learning techniques to be trained and validated on ASD datasets. Although some studies have already been conducted, there is still a significant lack of publicly available, multimodal datasets that can be used to analyze the full-body movements of children during therapeutic interventions. To overcome the data-sharing roadblock, in this paper, we propose a publicly available multimodal dataset, MMASD[MMASD can be accessed at <https://github.com/Li-Jicheng/MMASD-A-Multimodal-Dataset-for-Autism-Intervention-Analysis>.], collected from physical therapy interventions of children with autism. MMASD maintains privacy while retaining essential movement features by providing optical flow, 2D and 3D skeletons that are derived from the original videos, thereby avoiding the exposure of sensitive and identifiable raw video footage. Additionally, it includes clinician evaluation results for each child, such as ADOS-2 and motor function scores. We also provide the intervention activity class labels for overall scene understanding.
Overall, MMASD can be used to help therapists and researchers to understand children's cognitive status, track development progress in therapy, and guide the treatment plan accordingly. It also provides inspiration for downstream tasks, such as activity recognition <cit.>, action quality assessment <cit.>, and interpersonal synchrony estimation <cit.>. In contrast to current datasets, e.g., listed in Table  <ref>, our dataset stands out for the following reasons: * It is a publicly accessible benchmark dataset for movement and behavior analysis during therapeutic interventions featured by diverse scenes, group activities, and participants. * It includes multimodal features such as optical flow, 2D/3D skeletons, demographic data, and ADOS score rating. These features provide privacy-preserving approaches to maintain critical and full-body motion information. * Each scene depicts the same activity performed by a child and one or more therapists, providing an in-place template for comparing typically developing individuals with children with ASD. The rest of the paper is organized as follows. Further background on relevant autism datasets is presented in Section <ref>, followed by a presentation of data collection approach in Section <ref>. Details of the dataset, including statistics, data processing and annotation, are provided in Section <ref>. Finally, discussion and current limitations of MMASD are described in Section <ref> and conclusion in Section <ref>. § RELATED WORK ASD is characterized by atypical movement patterns, such as repetitive movements, clumsiness, and difficulties with coordination. A deeper understanding of these movement patterns and their association with ASD can aid therapists, clinicians, and researchers in developing more effective interventions and therapies. To this end, several datasets have been developed for movement analysis of children with ASD. These datasets typically involve collecting motion capture or sensor data from children while they perform various activities or specific tasks. The collected data is then analyzed to identify differences and patterns in movement between children with ASD and typically developing children. In this section, we present an overview of existing datasets that focus on movement and behavior analysis of children with ASD. Movement analysis with humanoid robot interactions datasets Marinoiu et al. <cit.> presented a dataset and system that use a humanoid robot to interact with children with ASD and monitor their body movements, facial expressions, and emotional states. The results show that children significantly improved their ability to recognize emotions, maintain eye contact, and respond appropriately to social cues. They identified several effective techniques for detecting and analyzing the children's emotional and behavioral responses, such as analyzing the frequency and duration of specific behaviors. Billing et al. <cit.> proposed a dataset of behavioral data recorded from 61 children with ASD during a large-scale evaluation of robot-enhanced therapy. The dataset comprises sessions where children interacted with a robot under the guidance of a therapist and sessions where children interacted one-on-one with a therapist. For each session, they used three RGB and two RGBD (Kinect) cameras to provide detailed information, including body motion, head position and orientation, and eye gaze of children's behavior during therapy. 
Another dataset related to humanoid robot interactions with autistic children is DE-ENIGMA <cit.>, which includes using a multimodal human-robot interaction system to teach and expand social imagination in children with ASD. The DE-ENIGMA dataset comprises behavioral features such as facial mapping coordinates, visual and auditory, and facilitates communication and social interaction between the children and the robot. The authors indicated that the DE-ENIGMA could be used as an effective tool for teaching and expanding social imagination in children with ASD. They also suggest that the usage of a multimodal human-robot interaction could be a promising approach for developing interventions for children with ASD that aim to improve their social skills and promote better social integration. Eye movement and vocalization datasets Duan et al. <cit.> introduced a dataset of eye movement collected from children with ASD. The dataset includes 300 natural scene images and eye movement data from 14 children with ASD and 14 healthy individuals. It was created to facilitate research on the relationship between eye movements and ASD, with the goal of designing specialized visual attention models. Baird et al. <cit.> introduced a dataset of vocalization recordings from children with ASD. They also evaluated classification approaches from the spectrogram of autistic speech instances. Their results suggest that automatic classification systems could be used as a tool for aiding in the diagnosis and monitoring of ASD in children. Behavior analysis datasets For action recognition dataset of children with ASD, Pandey et al. <cit.> proposed a dataset of video recording actions and a technique to automate the response of video recording scenes for human action recognition. They evaluated their technique on two skill assessments with autism datasets and a real-world dataset of 37 children with ASD. Rehg et al. <cit.> introduced a publicly available dataset including over 160 sessions of child-adult interactions. They discussed the use of computer vision and machine learning techniques to analyze and understand children's social behavior in different contexts. They also identified technical challenges in analyzing various social behaviors, such as eye contact, smiling, and discrete behaviors. Rajagopalan et al. <cit.> explored the use of computer vision techniques to identify self-stimulatory behaviors in children with ASD. They also presented a self-stimulatory behavior dataset (SSBD) to assess the behaviors from video records of children with ASD in uncontrolled natural settings. The dataset comprises 75 videos grouped into three categories: arm flapping, head banging, and spinning behaviors. Comparison to MMASD In <ref>, we compare our proposed dataset with related benchmarks. Overall, MMASD features diverse themes and scenes, capturing full-body movements with multimodal features. In contrast, some works focused specifically on upper-body movements <cit.>. MMASD also provides critical privacy-preserving features to represent body movements making it publicly accessible, while some works were conducted on raw videos that are either private or accessible only upon request <cit.>. Additionally, it is collected from therapeutic interventions, reflecting participants' motor ability and providing valuable insights for treatment guidance. § METHOD In the following sections, we describe the participants, procedure, and experimental settings of our proposed dataset. 
§.§ Participants We recruited 32 children (27 males and 5 females) with ASD from different races (Caucasian, African American, Asian, and Hispanic) and backgrounds through fliers posted online and onsite in local schools, services, and self-advocacy groups. Prior to enrollment, children were screened using the Social Communication Questionnaire <cit.>, and their eligibility was determined by the Autism Diagnostic Observation Schedule-2 (ADOS-2) <cit.> as well as clinical judgment. All the children were between 5 and 12 years old. Written parental consent was obtained before enrollment, and the study was approved by the Institutional Review Board of the university. The Vineland Adaptive Behavior Scales <cit.> were used to assess the children's adaptive functioning levels. In general, 82% of the participating children had delays in the Adaptive Behavior Composite. Specifically, 70% of them experienced communication delays, 80% had difficulties with daily living skills, and 82% had delays in socialization.
§.§ Procedure The study was conducted over ten weeks, with the pre-test and post-test being conducted during the first and last weeks of the study, respectively. Each training session was scheduled four times per week and lasted approximately 45 minutes. During the intervention, the trainer and adult model interacted with the child within a triadic context, with the adult model acting as the child's confederate and participating in all activities with the child. This triadic setting (child, trainer, and model) provided numerous opportunities for promoting social and fine motor skills such as eye contact, body gesturing and balancing, coordination, and interpersonal synchrony during joint action games. All expert trainers and models involved were either physical therapists or physical therapy/kinesiology graduate students who had received significant pediatric training prior to their participation. The trainers and models were unknown to the children before the study. In addition to the expert training sessions, we also encouraged parents to provide two additional weekly sessions involving similar activities to promote practice. Parents were provided with essential instruction manuals, supplies, and in-person training beforehand. All training sessions were videotaped with the parents' consent and notification to the children, and the training diary was compiled by parents in collaboration with expert trainers. The general pipeline of training sessions had a standard procedure despite some unique activities across different themes. A welcoming and debriefing phase was present at the beginning and end of the data collection to help children warm up and get ready for the intervention, as well as to facilitate the subsequent data processing stage by providing time labels that indicate the segments to investigate.
§.§ Experiment Settings All videos were recorded in a home environment with the camera pointed toward the children. Models (trainers/therapists and trained adults) interacted with the child within a triadic context, and the adult model was the child's confederate and practiced all activities with the child. Different tools were introduced to facilitate the training process depending on the theme of the intervention, for example, instruments and robots. Selected scenes in different themes of our proposed MMASD dataset are shown in <ref>.
§ MMASD MMASD includes 32 children diagnosed with different levels of autism. It covers three unique themes: * Robot: children followed a robot and engaged in body movements.
* Rhythm: children and therapists played musical instruments or sang together as a form of therapy. * Yoga: children participated in yoga exercises led by experts. These exercises included body stretching, twisting, balancing, and other activities. Overall, MMASD comprises 1,315 video clips that have been meticulously gathered from intervention video recordings spanning more than 108 hours. It consists of 244,679 frames with an average duration of 7.15 seconds. The average data length in MMASD is 7.0 ± 3.4 seconds (186.1 ± 92.9 frames), with dimensions ranging from 320×240 to 1920×1080. <ref> presents statistical information on MMASD. Depending on the activity conducted during the intervention, we further categorized all data into eleven activity classes as described in <ref>. Each activity class falls into a unique theme, as shown in <ref>. MMASD also reports demographic and autism evaluation scores of all participating children, including date of birth, ADOS-2 score (social affect & restricted and repetitive behavior), motor functioning score, and severity of autism.
§.§ Data processing From the original video recordings, we manually identified the start and end timestamps of each specific activity, then segmented the videos into clips and categorized them by activity class. Clips shorter than three seconds were discarded. We also discarded noisy data due to video quality, lighting conditions, and body occlusion. Besides the eleven activities in MMASD, there were some other activities with too few examples; to ensure a balanced data distribution, we excluded these inadequately represented classes.
§.§ Data Annotation Four annotators, all well trained in intervention understanding, performed the labeling; they had a comprehensive understanding of the interventions and cross-disciplinary backgrounds in computer science and physical therapy. We assigned exactly one activity class label to each video. Each annotator completed data annotation independently, and the final class label was determined by majority voting. However, original videos cannot be publicly shared due to privacy concerns. Therefore, we created data samples from every video clip by extracting selected features from the original scenes, including (1) optical flow, (2) 2D skeleton, and (3) 3D skeleton. All these features can maintain critical body movements while preserving privacy. Section <ref> explains all selected features in detail. In addition to the motion-related features mentioned above, we also reported clinician evaluation results such as ADOS-2 score, motor functioning score, and severity of autism for each participating child. ADOS-2 is a standardized assessment tool used to evaluate individuals suspected of having ASD. It is used in conjunction with other diagnostic information to help clinicians determine whether an individual meets the criteria for an ASD diagnosis. ADOS-2 includes several modules, each designed for individuals of different ages and language abilities, and includes a series of activities and tasks that are used to observe key features of ASD, such as social communication skills and repetitive behaviors. The ADOS-2 scores are based on an individual's performance during activities and tasks specific to their module and can range from 0 to 10 or higher depending on the algorithm used, with higher scores indicating more severe ASD symptoms.
Moreover, we reported the ADOS comparison score, a continuous metric ranging from 1 to 10 that describes the severity of a child's autism symptoms compared to children with ASD of similar age and language levels <cit.>. Low comparison scores are indicative of minimal evidence of autism symptoms, whereas high scores are indicative of severe autism symptoms. The contrast between the ADOS-2 score and the ADOS comparison score is worth mentioning. The ADOS-2 score reflects an individual's raw score on the ADOS-2 assessment tool, while the ADOS comparison score is a statistical measure that compares an individual's performance to others of the same age and language level. The motor functioning score refers to an assessment of an individual's motor skills and abilities and is evaluated based on the children's level of independence in daily living skills <cit.>. It is on a scale of 1 to 3, while 1, 2, 3 represent low functioning (needing significant support), medium functioning (needing moderate support), and high functioning (needing less support), respectively. Finally, the severity of autism is determined by a comprehensive assessment that includes both the ADOS and motor function evaluation. §.§ Feature Extraction In order to preserve critical details of movement while avoiding any infringement of privacy, we derived the subsequent features from the initial footage. (1) Optical flow An optical flow is commonly referred to as the apparent motion of individual pixels between two consecutive frames on the image plane. Optical flow derived from raw videos (see Figure <ref>) can provide a concise description of both the region and velocity of a motion without exposing an individual's identity <cit.>. (2) 2D Skeleton Skeleton data has an edge over RGB representations because it solely comprises the 2D positions of the human joints, which offer highly conceptual and context-independent data. This allows models to concentrate on the resilient aspects of body movements. 2D skeleton data has been widely applied to tasks relating to human behavior understanding, such as action recognition <cit.>, action quality assessment <cit.> and beyond. An optimal way to acquire skeleton data is through the use of wearable devices and sensors that are affixed to the human body. However, in the context of autism research, it poses a substantial challenge as children may feel overwhelmed wearing these devices and experience anxiety. As a result, the skeleton data extraction process is carried out by pre-trained pose detectors based on deep neural networks. (3) 3D Skeleton Similar to 2D skeleton data, 3D skeletons instead represent each key joint with a 3D coordination, introducing an additional depth dimension. Since all the data was collected using a single RGB camera, we also completed this process with the help of deep neural networks. The technical details and tools utilized for feature extraction can be found in Section <ref>. §.§ Data Format Suppose the original video clip includes N participants (child, trainer, and assistant) is composed of L frames, and the height and width of each frame are H and W, respectively. As discussed above, each data sample consists of four distinct components, with data dimension demonstrated in braces: * Optical flow (L-1, H, W): saved as npy files <cit.>. * 2D skeleton (L, N , 17, 2): 2D coordinates of 17 key joints, following COCO <cit.> format, saved as JSON files. * 3D skeleton (L, N, 24, 3): 3D coordinates of 24 key joints, following ROMP <cit.> format, saved as npz files. 
* Demographic and clinical evaluation for ASD (9,): including nine attributes such as participant ID, date of birth, chronological age, ADOS-2 module, social affect score, restricted and repetitive behavior score, ADOS comparison score, motor functioning score, and severity of autism, saved as CSV files. §.§ Implementation Details §.§.§ Optical flow: The Lucas-Kanade <cit.> method is used for our study. It is a popular technique used in computer vision to estimate the motion of objects between consecutive frames. The method assumes that the displacement of the image contents between two nearby instants is small and approximately constant within a neighborhood of the point under consideration. By solving the optical flow equation for all pixels within a window centered at the point, the method can estimate the motion of objects in the image sequence. Overall, the Lucas-Kanade optical flow method is an effective and popular technique for estimating motion in various computer vision applications. §.§.§ 2D skeleton: The OpenPose method <cit.> is used to extract 2D skeletons from our dataset of human action videos. OpenPose is a powerful tool for body, face, and hand analysis, developed by Carnegie Mellon University, and is based on deep learning techniques. It is a real-time multi-person key-point detection library that can accurately detect the key points of a human body, including joints and body parts, from an image or video feed. Initially, it predicts confidence maps for every body part and subsequently associates them with distinct individuals via Part Affinity Fields. The library is open-source and written in C++ with a Python API, which makes it easy to use and integrate into various computer vision applications. §.§.§ 3D skeleton: We utilized the Regression of Multiple 3D People (ROMP) proposed by Sun et al. <cit.>, a state-of-the-art technique to estimate the depth and pose of an individual from a single 2D image. The authors proposed a deep learning-based approach that is based on a fully convolutional architecture, which takes an input image and directly predicts the 3D locations of the body joints of the person(s) present in the image. This is achieved by directly estimating multiple differentiable maps from the entire image, which includes a Body Center heatmap and a Mesh Parameter map. 3D body mesh parameter vectors of all individuals can be extracted from these maps using a simple parameter sampling process. These vectors are then fed into the SMPL body model to generate multi-person 3D meshes. In our study, we employed the code and pre-trained model shared by the authors and used it on our dataset to suit our specific needs. By utilizing this method and applying it to our own data, we obtained 2D and 3D coordinates of key joints of the person(s). § DISCUSSION This section delves into the limitations of MMASD and the challenges we encountered during the dataset preparation process. As the experiments were conducted in real-world settings, we faced common computer vision challenges, including varying video quality, illumination changes, cluttered backgrounds, and pose variations. Notably, in the feature extraction stage, we encountered pose detection failures in challenging scenarios, such as body occlusion and participants moving out of the scene. The intrinsic video quality limitation of MMASD also restricted us from capturing subtle and fine-grained features, such as facial expressions. 
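The pose detection failures mentioned above leave temporal gaps in the extracted skeletons. A simple mitigation, in the spirit of the pose-reliability handling revisited in the future-work discussion below, is to interpolate missing joints over time before any downstream modeling. The sketch below is illustrative only: it assumes that failed detections are stored as NaN entries in a per-person (L, 17, 2) array, which may not match the dataset's actual encoding of failures.

```python
import numpy as np

def interpolate_missing_joints(skel):
    """Fill temporal gaps in one person's 2D skeleton of shape (L, 17, 2).

    Frames where a joint was not detected are assumed to be NaN (an assumption
    for this sketch); each joint coordinate is linearly interpolated over time
    between the surrounding valid detections.
    """
    skel = skel.copy()
    L = skel.shape[0]
    t = np.arange(L)
    for j in range(skel.shape[1]):          # joints
        for c in range(skel.shape[2]):      # x / y coordinate
            series = skel[:, j, c]
            valid = ~np.isnan(series)
            if 0 < valid.sum() < L:
                series[~valid] = np.interp(t[~valid], t[valid], series[valid])
    return skel
```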
Moreover, it is imperative to conduct in-depth investigations into domain-specific challenges. For instance, in standard benchmarks for human activity recognition, typically developing individuals exhibit dominant and continuous actions with similar intensity. However, data on MMASD may not solely contain the target behavior throughout the entire duration, as impromptu actions or distractions may occur during therapy sessions for children. Furthermore, unlike prevailing benchmarks that collect ground truth skeleton data by attaching sensors to the human body, MMASD generates skeleton data by means of pre-trained deep neural networks. This is because children with autism have limited tolerance for external stimuli, and the presence of sensors on their bodies may cause them to become anxious, agitated, or exhibit challenging behaviors. Consequently, the skeleton data's reliability in MMASD depends on the performance of the underlying pose detectors. In addition, children with autism can exhibit varying motor functions, resulting in different intensity levels and completion rates for the same activity. Clinician evaluation results can offer additional guidance in determining and classifying activities for each individual. Nevertheless, there is a need to explore ways to incorporate tabulated clinician evaluation results with other movement-related features to generate comprehensive representations. § CONCLUSION Autism research has been greatly facilitated by machine learning techniques, which offer cost-effective and accurate ways to analyze various aspects of children's behavior and development. However, the lack of open-access datasets has posed challenges to conducting fair comparisons and promoting sound research practices in the field of autism research. To address this issue, we have proposed MMASD, a publicly accessible multimodal dataset that features diverse scenes collected from therapy interventions for children with autism. Our dataset includes multimodal features such as 2D & 3D skeleton data, optical flow, demographic data, and ADOS rating, offering a confidential data-sharing approach that can maintain critical motion information. MMASD distinguishes itself from existing works by utilizing privacy-preserving multimodal features to provide comprehensive representations of full-body movements across a range of diverse themes. Moreover, each scene in our dataset depicts the same activity performed by a child and one or more therapists, providing a valuable template for comparing typically developing individuals with children with ASD. There are several directions for future work that are worth exploring. Firstly, further research can be conducted to develop and compare machine learning models on the MMASD dataset for various tasks, such as action quality assessment <cit.>, interpersonal synchrony estimation <cit.>, and cognitive status tracking. This can help establish benchmark performance and identify state-of-the-art methods for analyzing the full-body movements of children during therapeutic interventions. In addition, new approaches can be investigated to overcome pose detection failures in MMASD. For example, by introducing pose uncertainty or attention mechanism to assign higher weights to more reliable body joints. Furthermore, the MMASD dataset can be expanded in several aspects. To elaborate, further intervention scenarios could be included along with more features such as mutual gaze <cit.> and additional annotations like a synchrony score. 
Finally, efforts can be made to augment the dataset via existing benchmarks, not limited to the autism domain, by matching samples with similar motion features <cit.>, which can significantly expand the scale of the autism dataset.
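To make the data layout described in the Data Format section concrete, the snippet below sketches how one sample might be loaded and sanity-checked against the reported shapes (L-1, H, W), (L, N, 17, 2), and (L, N, 24, 3). The file names, directory layout, and the assumption that the 2D-skeleton JSON parses directly into a nested list are hypothetical placeholders, not part of the official release.

```python
import json
import numpy as np

def load_sample(flow_path, skel2d_path, skel3d_path):
    """Load one MMASD-style sample; paths are hypothetical placeholders."""
    flow = np.load(flow_path)                 # (L-1, H, W) optical flow, npy
    with open(skel2d_path) as f:
        # Assumes the JSON parses into a nested list of shape (L, N, 17, 2);
        # adjust to the release's actual schema if it differs.
        skel2d = np.asarray(json.load(f))
    skel3d_npz = np.load(skel3d_path)
    skel3d = skel3d_npz[skel3d_npz.files[0]]  # (L, N, 24, 3) 3D joints, ROMP order
    return flow, skel2d, skel3d

def sanity_check(flow, skel2d, skel3d):
    """Basic consistency checks between the three modalities of a clip."""
    L = skel2d.shape[0]
    assert flow.shape[0] == L - 1, "optical flow has one frame fewer than the clip"
    assert skel2d.shape[2:] == (17, 2) and skel3d.shape[2:] == (24, 3)
    assert skel2d.shape[1] == skel3d.shape[1], "same number of participants in 2D and 3D"
    return L, skel2d.shape[1]

# Example usage (placeholder file names):
# flow, s2d, s3d = load_sample("clip_0001_flow.npy", "clip_0001_2d.json", "clip_0001_3d.npz")
# n_frames, n_people = sanity_check(flow, s2d, s3d)
```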
http://arxiv.org/abs/2306.01680v1
20230602165512
Bloch point nanospheres for the design of magnetic traps
[ "F. Tejo", "C. Zambrano-Rabanal", "V. Carvalho-Santos", "N. Vidal-Silva" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
]Bloch point nanospheres for the design of magnetic traps Escuela de Ingeniería, Universidad Central de Chile, Avda. Santa Isabel 1186, 8330601, Santiago, Chile Departamento de Ciencias Físicas, Universidad de La Frontera, Casilla 54-D, Temuco, Chile Universidade Federal de Viçosa, Departamento de Física, Avenida Peter Henry Rolfs s/n, 36570-000, Viçosa, MG, Brasil. [email protected] Departamento de Ciencias Físicas, Universidad de La Frontera, Casilla 54-D, Temuco, Chile Through micromagnetic simulations, this work analyzes the stability of Bloch points in magnetic nanospheres and the possibility of using an array of such particles to compose a system with the features of a magnetic trap. We show that a BP can be nucleated as a metastable configuration in a relatively wide range of the nanosphere radius compared to a quasi-uniform and vortex state. We also show that the stabilized Bloch point generates a quadrupolar magnetic field outside it, from which we analyze the field profile of different arrays of these nanospheres to show that the obtained magnetic field shares the features of magnetic traps. Some of the highlights of the proposed magnetic traps rely on the magnetic field gradients achieved, which are orders of magnitude higher than standard magnetic traps, and allow three-dimensional trapping. Our results could be useful in trapping particles through the intrinsic magnetization of ferromagnetic nanoparticles while avoiding the commonly used mechanisms associated with Joule heating. [ N. Vidal-Silva July 31, 2023 ================== Several propositions for applications of magnetic nanoparticles in spintronic-based devices demand the spin transport electronics of magnetic textures through magnetic fields or electric currents without moving the particle itself <cit.>. Nevertheless, manipulating and moving nanomagnets through external magnetic fields without changing the magnetic pattern of the system also generates exciting possibilities for a plethora of applications <cit.>. Within such propositions, an emergent possibility of applying magnetic nanoparticles is using their generated magnetostatic fields as magnetic traps (MTs) <cit.>, which consists of a system that uses a gradient of the magnetic field to confine charged or neutral particles with magnetic moments <cit.>, levitate magnetic nanoparticles <cit.>, and pinning neutral atoms in low temperatures for quantum storage <cit.>. MTs generally present a set of devices arranged to generate a quadrupolar magnetic field <cit.>. These field profiles can be obtained, for instance, by two ferromagnetic bars parallel to each other, with the north pole of one next to the south of the other. The same field profile can be generated by two spaced coils with currents in opposite directions or four pole tips, with two opposing magnetic north poles and two opposing magnetic south poles <cit.>. The magnetic field gradient of a quadrupole has the particularity of allowing atoms to leave from the MT due to the zero field strength located at its center <cit.>. Several solutions to avoid the particles escaping from the trap suggest adding a set of magnetic fields generated by an array of electric currents <cit.> to the quadrupolar field. The magnetic fields generated by these electric current distributions (I) scale as I/s, while their gradient and second derivatives scale as I/s^2 and I/s^3, respectively <cit.>. Here, s represents the characteristic length of the system. 
In this context, the smaller these MTs, the better the particle confinement, and several techniques to diminish their sizes were developed <cit.>. However, the miniaturization of MTs using an array of nanowires and coils for manipulating atoms faces the problem of energy dissipation by Joule heating <cit.>. In this context, the intrinsic dipolar fields of specific magnetic textures of ferromagnetic nanoparticles emerge as natural candidates to compose nanosized MTs <cit.>. A promising proposition to adopt nanosized magnetic textures as sources of magnetic field gradient is using the magnetostatic field generated by spin textures in chiral magnets <cit.>. Indeed, because the magnetostatic field generated by a skyrmion lattice is similar to that created by two helices carried by electric currents <cit.>, nanoscaled MTs can be engineered by stacking chiral ferromagnets hosting skyrmions <cit.>. Another exciting result regarding magnetostatic fields produced by topological spin textures is the generation of a quadrupolar field by just one magnetic nanoelement, as evidenced by Zambrano et. al. <cit.> for a magnetic nanosphere hosting a Bloch point (BP). Nevertheless, in that case, the nanosphere is located at the center of the quadrupolar field, reducing the feasibility of applying this only structure as a magnetic trap. Following these ideas and motivated by the proposition of stacking skyrmion lattices to compose MTs, we analyze, through micromagnetic simulations, the possibility of using a BP array as an MT. We start by exploring the stability of a BP on a nanosphere as a function of their geometrical and magnetic parameters. After determining the magnetostatic field of a BP, we show that an array of four BP nanospheres generate a magnetic field gradient with all properties to be applied as an MT. Our main focus is presenting a proposition to use BP nanospheres as sources of magnetic fields in MTs. Therefore, we obtain the stable and metastable states of a ferromagnetic nanosphere as a function of its radius, R, and magnetic parameters. The analysis is performed through micromagnetic simulations using the OOMMF code <cit.>, a well-known tool that agrees well with experimental results on describing the magnetization of nanoparticles. In the simulations, we consider three values to M_s and the exchange stiffness, A, characterizing iron (M_s ≈ 1700 kA/m and A = 21 pJ/m), Permalloy (Ms ≈ 850 kA/m and A = 13 pJ/m), and cobalt (Ms ≈ 1450 kA/m and A = 56 pJ/m). To simulate a smooth spherical geometry, we consider a cubic cell with the size of 0.5×0.5×0.5 nm^3. The local and global minima are obtained by comparing the total energy, E, of three magnetic profiles: quasi-uniform, where the magnetic moments slightly deviate from the purely parallel direction <cit.>; vortex, characterized by a curling magnetization field around an out-of-plane core <cit.>; and BP configuration, characterized by two magnetic bobbers <cit.> separated by a texture that, in a closed surface around its center, the magnetization field covers the solid angle an integer number of times <cit.>. These magnetic patterns are obtained by relaxing the system from three different configurations and determining the total energy, E=E_x+E_d, of the relaxed state. Here, E_x and E_d are the exchange and dipolar contributions to the total energy. The first initial state consists of a single domain, which, after relaxation, reaches a quasi-uniform configuration. 
The second and third initial configurations consist of a rigid vortex and a BP artificially imposed. Both states are subsequently allowed to relax, yielding a vortex and a BP, respectively, as metastable configurations of the system. The energies of the final states for nanospheres of Fe, Py, and Co are shown in Fig. <ref>. One notices that, due to the role that the exchange interaction plays in systems with small sizes, the quasi-uniform state appears as the groundstate when the nanosphere radius is smaller than a threshold value of R_c≈15 nm (Fe), R_c≈25 nm (Py), and R_c≈30 nm (Co). Nevertheless, the contribution of the dipolar energy increases with the system size, and at these threshold values both the BP and the vortex become energetically favorable. Indeed, one can notice that the vortex configuration corresponds to the groundstate, while the BP has a slightly higher energy. As a result, the BP configuration is a metastable state, whereas the vortex is the more stable state. Therefore, we claim that under certain conditions a BP can be stabilized, and conclude that in addition to its topological protection, the BP also has energetic metastability, compared to a quasi-uniform state, for radii greater than the material-dependent threshold value. To diminish computational effort, we focus our discussion on a Fe nanosphere with R=15 nm, which is the lower limit of the critical radius allowing BP metastability and exhibits the minimum energy difference with respect to the vortex configuration. Nevertheless, no qualitative changes to the results presented here should be observed if we consider Py or Co nanospheres hosting BPs. After showing that BPs can appear as metastable states compared to quasi-uniform and vortex configurations, we analyze the properties of the magnetostatic field of such a system. The vector field of a BP can be parameterized by the normalized magnetization written in spherical polar coordinates as M/M_s =(sinΘcosΦ,sinΘsinΦ,cosΘ), where M_s is the saturation magnetization. Under this framework, the magnetic profile of a BP configuration can be modeled with the ansatz <cit.> Θ(θ) = pθ+π(1-p)/2 and Φ(ϕ) = ϕ+γ. Here, θ and ϕ are the standard polar and azimuthal angles describing the spherical coordinates, and p=± 1 is the BP polarity, which determines the orientation of the magnetic moments at the nanosphere poles along the z-axis. In this case, the magnetic moments point outward or inward for p=+1 and p=-1, respectively, as depicted in Figs. <ref>a) and b). The parameter γ accounts for the BP helicity. For instance, γ=0 represents a hedgehog magnetization field pointing outward from the sphere center, while γ=π/2 depicts a tangent-to-surface configuration at the sphere equator. The ansatz (<ref>) has been previously used to determine the magnetostatic field outside a BP nanosphere <cit.>, given by H(r,θ)=M_sR^4/48r^4(1-cosγ) [2 P_2(cosθ) r̂ +sin 2θ θ̂] , where P_2(x) is the Legendre polynomial of degree 2, and r is the radial component of the position of a point outside the nanosphere. From the BP nanosphere property that γ adopts a constant quasi-tangential configuration at the nanosphere equator <cit.>, one observes that the magnetostatic field outside the considered system is that of a quadrupole, which is consistent with Eq. (<ref>) and is also obtained in our micromagnetic simulations, as shown in Fig. <ref>c).
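To get a feel for the magnitude of this stray field, the sketch below evaluates the analytical expression above for a single Fe BP nanosphere (M_s ≈ 1700 kA/m, R = 15 nm), assuming the quasi-tangential helicity γ = π/2 at the equator. It only illustrates the closed-form field of one sphere, reported as μ_0 H in tesla (a unit convention of our own); it does not reproduce the micromagnetic simulations or the array calculations discussed next.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7   # vacuum permeability (T*m/A)
MS = 1700e3              # saturation magnetization of Fe (A/m), value quoted in the text
R = 15e-9                # nanosphere radius (m)
GAMMA = np.pi / 2        # assumed quasi-tangential helicity at the equator

def legendre_p2(x):
    """Legendre polynomial of degree 2."""
    return 0.5 * (3 * x**2 - 1)

def bp_stray_field(r, theta, gamma=GAMMA):
    """Radial and polar components (A/m) of the stray field of Eq. (<ref>)
    at distance r (m) from the sphere center and polar angle theta (rad)."""
    prefactor = MS * R**4 / (48 * r**4) * (1 - np.cos(gamma))
    h_r = prefactor * 2 * legendre_p2(np.cos(theta))
    h_theta = prefactor * np.sin(2 * theta)
    return h_r, h_theta

# Field strength along the equator at a few distances outside the sphere
for r in (1.1 * R, 2 * R, 4 * R):
    h_r, h_theta = bp_stray_field(r, np.pi / 2)
    print(f"r = {r / R:.1f} R : mu0*|H| = {MU0 * np.hypot(h_r, h_theta):.3e} T")
```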
Although this field profile seems to be a good candidate for MTs, the nanosphere is located at the center of the quadrupolar field, which prevents using this single structure for such an application. Therefore, we discuss the possibility of using an array of such elements to generate a magnetic field gradient with the features of an MT. The proposed arrays consist of four Fe BP nanospheres with a radius of 15 nm. These nanospheres are symmetrically positioned at the vertices of a square inside a rectangular prism with dimensions 120×120×60 nm^3 (see Fig. <ref>-a)). The proposed arrays differ by the square side length and the BP polarities, as presented in Table <ref>, where p_i refers to the BP polarity at vertex i. It is important to point out that in all the simulated arrays, the chirality acquired by the BPs emerges as a consequence of the energy minimization <cit.>. Firstly, we analyze the profile of the magnetic field of Array I. The main results are summarized in Fig. <ref> and Fig. <ref>a). In the former, we present snapshots of the modulus of the magnetostatic field (H_d) in longitudinal sections of the xy and yz planes. The color map of the magnetic field allows us to notice that Array I gives rise to a range of magnetic fields going from H_d ≈ 0.5 T, in the regions surrounding the nanospheres, down to a minimum value of H_d ≈ 6.8 × 10^-4 T at the center of the array, as shown in Figs. <ref>a) and b). The detailed analysis presented in Fig. <ref>a) of the field profile in the longitudinal sections in the xy and yz planes reveals that the magnetic field generated by Array I presents local and global minima depending on the position. While the local minima occur at the center between two adjacent nanospheres, the global one is at the system center. Therefore, the presented results show the existence of a magnetic field gradient in space, which can be numerically determined. We obtain that the field gradients are on the order of ∼ 10^5-10^6 T/m, much higher than the field gradient of conventional MTs <cit.>. The existence of high magnetic field gradients yields narrower confinement, making systems with this property very interesting for applications in MTs <cit.>. Also, the similar behavior of the magnetic field in both longitudinal sections allows the symmetric confinement of particles at three different places (the local and global field minima) if they are loaded from the x or y axes. Finally, the local minima have the advantage of ensuring higher stability for the trapped particles. Because changing the distance among the nanoparticles affects the strength of the magnetostatic field <cit.>, we also propose changes in the structure of the array. Therefore, we analyze the field profile when the nanosphere polarity distribution is given by Array II. Fig. <ref>b) shows the field distribution and its strength as a function of the position along the longitudinal sections of the xy and yz planes. One notices that the generated magnetostatic field has exactly the same behavior in both longitudinal sections, reaching its maximum values in the space between two neighboring spheres (≈ 30 nm and ≈ 90 nm) and a unique global minimum at the array center. The appearance of just one minimum weakens the case for implementing Array II as an MT. Finally, we consider the magnetic field generated by Array III, whose results are given in Fig. <ref>c). In this case, we obtain that the field profiles of the longitudinal sections along the xy and yz planes are different.
Indeed, the magnetic field along the xy plane has two maxima between the BP nanospheres 1 and 3, and 2 and 4, and a nonzero minimum in the array center. On the other hand, the field profile along the yz plane presents a maximum value in the array center and two minima between BP nanospheres 1 and 2, and 3 and 4. Therefore, Array III generates a magnetic field with a triple saddle point, and this array does not work as a potential MT since the magnetic field does not have the features to stabilize atoms or particles with magnetic moments. The above-described results show that different distributions of magnetic fields are obtained depending on the BP nanosphere polarity distribution. Two of these fields present the features to be used as MTs. The main advantages of using an array of BP nanospheres to generate a gradient of magnetic fields are the lower cost of production when compared to lithographic processes that use materials such as Al_2O_3, AIN, Si, and GaAs to fabricate conductor nanowires in a chip <cit.>. In addition, the proposed setting also has the advantage of avoiding energy losses due to the heating of the nanospheres. We highlight that although the BPs are metastable states, the increase in the temperature of the MT due to the motion of the trapped particles is not big enough to denucleate the BP from the nanospheres. In summary, we have analyzed the magnetostatic properties of magnetic nanospheres hosting a BP as a metastable state. In addition to their topological protection, BPs have energetic metastability in nanospheres with a radius above a threshold value that depends on the material parameters. After discussing the energy of BP nanospheres, we determine the magnetostatic field generated outside it. The micromagnetic simulations reveal the appearance of a quadrupolar field, as previously reported from analytical calculations <cit.>. We then analyzed the magnetic field profile of different arrays of BP nanospheres to propose the production of a magnetic trap. We showed that the array with the better features to be used as magnetic traps consists of four nanospheres hosting BPs with positive polarities. Although we analyzed the proposal by projecting the magnetostatic field profiles into a given plane, they are essentially three-dimensional quadrupolar fields. This feature adds a new degree of freedom to potential MTs by allowing charging particles from different directions of 3D space. Acknowledgments: The work of F.T. was supported by ANID + Fondecyt de Postdoctorado, convocatoria 2022 + Folio 3220527. V.L.C.-S. acknowledges the support of the INCT of Spintronics and Advanced Magnetic Nanostructures (INCT-SpinNanoMag), CNPq 406836/2022-1. V.L.C.-S. also thanks the Brazilian agencies CNPq (Grant No. 305256/2022-0) and Fapemig (Grant No. APQ-00648-22) for financial support. N. V-S acknowledges funding from ANID Fondecyt Iniciacion No. 11220046. Data availability: The data that support the findings of this study are available from the corresponding author upon reasonable request. 99 Shinjo-Book T. Shinjo, Nanomagnetism and spintronics. Elsevier, First edition (2009). Hirohata-JMMM A. Hirohata, K. Yamada, Y. Nakatani, I.-L. Prejbeanu, B. Diény, P. Pirro, and B. Hillebrands: Review on spintronics: Principles and device applications. J. Magn. Mag. Mat. 509, 166711 (2020). Hrcak G. Hrkac, J. Dean, and D. A. Allwood: Nanowire spintronics for storage class memories and logic. Philos. Trans. R. Soc. A 369, 3214 (2011). Vander-JPD J. Vandermeulen, B. Van de Wiele, L. 
Dupré, and B Van Waeyenberge: Logic and memory concepts for all-magnetic computing based on transverse domain walls. J. Phys. D: Appl. Phys. 48, 275003 (2015). Goolap-SciRep S. Goolaup, M. Ramu, C. Murapaka, and W. S. Lew: Transverse Domain Wall Profile for Spin Logic Applications. Sci. Rep. 5, 9603 (2014). Torrejon-Nat J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. D. Stiles, and J. Grollier: Neuromorphic computing with nanoscale spintronic oscillators. Nature 547, 428 (2017). Grolier-Nat J. Grollier, D. Querlioz, K. Y. Camsari, K. Everschor-Sitte, S. Fukami, and M. D. Stiles: Neuromorphic spintronics. Nat Electron 3, 360 (2020). Parkin-Nat S. Parkin, and S.-H. Yang: Memory on the racetrack. Nat. Nano. 10, 195 (2015). Parkin-Nat2 K. Gu, Y. Guan, B. K. Hazra, H. Deniz, A. Migliorini, W. Zhang, and S. S. P. Parkin: Three-dimensional racetrack memory devices designed from freestanding magnetic heterostructures. Nat. Nano. 17, 1065 (2022). CellMark D. Högemann, V. Ntziachristos, L. Josephson, R. Weissleder, High throughput magnetic resonance imaging for evaluating targeted nanoparticle probes. Bioconjug. Chem. 13, 116 (2002). Drug1 J. P. Fortin, F. Gazeau, and C. Wilhelm, Intracellular heating of living cells through Néel relaxation of magnetic nanoparticles. Eur. Biophys. J. 37, 223 (2008). Drug2 F. Ye, A. Barrefelt, H. Asem, M. Abedi-Valugerdi, I. El-Serafi, M. Saghafian, K. Abu-Salah, S. Alrokayan, M. Muhammed, and M. Hassan, Biodegradable polymeric vesicles containing magnetic nanoparticles, quantum dots and anticancer drugs for drug delivery and imaging. Biomaterials 35, 3885 (2014). Drug3 B. Shen, Y. Ma, S. Yu, and C. Ji, Smart Multifunctional Magnetic Nanoparticle-Based Drug Delivery System for Cancer Thermo-Chemotherapy and Intracellular Imaging. ACS Appl. Mater. Interfaces 8, 24502 (2016). Contrast C. Billotey, C. Wilhelm, M. Devaud, J. C. Bacri, J. Bittoun, and F. Gazeau, Cell internalization of anionic maghemite nanoparticles: Quantitative effect on magnetic resonance imaging. Magn. Reson. Med. 49, 646 (2003). Hyp1 P. Moroz, S. K. Jones, and B. N. Gray, Magnetically mediated hyperthermia: Current status and future directions. Int. J. Hyperth. 18, 267 (2002). Hyp2 X. Liu, Y. Zhang, Y. Wang, et al., Comprehensive understanding of magnetic hyperthermia for improving antitumor therapeutic efficacy. Theranostics 10, 3793 (2020). Hyp3 H. Gavilán, T. Fernández-Cabada, N. Soni, M. Cassani, B. T. Mai, R. Chantrell, and T. Pellegrino, Magnetic nanoparticles and clusters for magnetic hyperthermia: optimizing their heat performance and developing combinatorial therapies to tackle cancer. Soc. Rev. 50, 11614 (2021). Hyp4 N. Hallali, P. Clerc, D. Fourmy, V. Gigoux, and J. Carrey, Influence on cell death of high frequency motion of magnetic nanoparticles during magnetic hyperthermia experiments. Appl. Phys. Lett. 109, 032402 (2016). Kim-Nat D.-H. Kim, E. Rozhkova, I. Ulasov, S. Bader, T. Rajh, M. Lesniak, and V. Novosad, Biofunctionalized magnetic-vortex microdiscs for targeted cancer-cell destruction. Nature Mater. 9, 165 (2009). Trap1 A. N. Ii, S.-C. Lin, B. Lepene, W. Zhou, K. Kehn-Hall, and M. L. van Hoek, Use of magnetic nanotrap particles in capturing Yersinia pestis virulence factors, nucleic acids and bacteria. J. Nanobiotechnology 19, 186 (2021). Golub R. Golub, and J. B. Pendlebury, Ultra-cold neutrons. Rep. on Prog. Phys. 42, 439 (1979). Kugler K. J. Kügler, K. Moritz, W. Paul, and U. 
Trinks, Nestor — A magnetic storage ring for slow neutrons. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 228, 240 (1983). Artetal L. A. Artsimovich, A. C. Kolb, K. S. Pease, and H. P. Furth. Controlled Thermonuclear Reactions. Phys. Today, 18, 75 (1965). Kral N. A. Krall, and A. W. Trivelpiece. Principles of Plasma Physics. International series in pure and applied physics. McGraw-Hill (1973). Anderson M. H. Anderson, J.R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell. Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor. Science 269, 5221 (1995). Pritchard D. E. Pritchard. Cooling Neutral Atoms in a Magnetic Trap for Precision Spectroscopy. Phys. Rev. Lett. 51, 1336 (1983). Bradley C. C. Bradley, and R. G. Hulet, Laser Cooling and Trapping of Neutral Atoms. Exp. Meth. Phys. Sci. 29, 129 (1996). Gorgier-NJP R. Corgier, S. Amri, W. Herr, H. Ahlers, J. Rudolph, D. Guéry-Odelin, E. M. Rasel, E. Charron, and N. Gaaloul. Fast manipulation of Bose–Einstein condensates with an atom chip. New J. Phys. 20, 055002 (2018). Kustura-PRB K. Kustura, V. Wachter, A. E. R. López, and C. C. Rusconi. Stability of a magnetically levitated nanomagnet in vacuum: Effects of gas and magnetization damping. Phys. Rev. B 105, 174439 (2022). Briegel H.-J. Briegel, T. Calarco, D. Jaksch, J. I. Cirac, and P. Zoller. Quantum computing with neutral atoms. Journal of Modern Optics, 47, 415 (2000). FORTAG-RMP J. Fortágh and C. Zimmermann. Magnetic microtraps for ultracold atoms. Rev. Mod. Phys. 79, 235 (2007). Henriet L. Henriet, L. Beguin, A. Signoles, T. Lahaye, A. Browaeys, G. O. Reymond, and C. Jurczak. Quantum computing with neutral atoms. Quantum 4, 327 (2020). Singh-Laser V. Singh, V. B. Tiwari. and S. R. Mishra. On the continuous loading of a U-magnetooptical trap on an atom-chip in an ultra high vacuum. Laser Phys. Lett. 17. 035501 (2020). Quad1 O. Morizot, C. L. Garrido Alzar, P.-E. Pottie, V. Lorent, and H. Perrin, Trapping and cooling of rf-dressed atoms in a quadrupole magnetic field. J. Phys. B At. Mol. Opt. Phys. 40, 4013 (2007). Quad2 Z. Zhang, K. Huang, and C.-H. Menq, Design, implementation, and force modeling of quadrupole magnetic tweezer. IEEE/ASME Trans. Mechatron. 15, 704 (2010). Quad3 S. A. Zonouzi, R. Khodabandeh, H. Safarzadeh, H. Aminfar, Y. Trushkina, M. Mohammadpourfard, M. Ghanbarpour, and G. S. Alvarez, Experimental investigation of the flow and heat transfer of magnetic nanofuid in a vertical tube in the presence of magnetic quadrupole field. Exp. Term. Fluid. Sci. 91, 155 (2018). Bergman T. H. Bergeman, P. McNicholl, J. Kycia, H. Metcalf, and N. L. Balazs. Quantized motion of atoms in a quadrupole magnetostatic trap. J. Opt. Soc. Am. B 6, 2249 (1989). Sukumar C. V. Sukumar, and D. M. Brink, Spin-flip transitions in a magnetic trap. Phys. Rev. A 56, 2451 (1997). Weinstein J. D. Weinstein and K. G. Libbrecht, Microscopic magnetic traps for neutral atoms. Phys. Rev. A 52, 4004 (1995). Fortagh J. Fortagh, A. Grossmann, C. Zimmermann, and T. W. Hänsch, Miniaturized Wire Trap for Neutral Atoms. Phys. Rev. Lett. 81, 5310 (1998). Hess H. F. Hess, G. P. Kochanski, J. M. Doyle, N. Masuhara, D. Kleppner, and T. J. Greytak, Magnetic trapping of spin-polarized atomic hydrogen. Phys. Rev. Lett. 59, 672 (1987). Raab E. L. Raab, M. Prentiss, A. Cable, S. Chu, and D. E. Pritchard, Trapping of Neutral Sodium Atoms with Radiation Pressure. Phys. Rev. Lett. 59, 2631 (1987). Folman R. Folman, P. Krúger, D. 
Cassettari, B. Hessmo, T. Maier, and J. Schmiedmayer, Controlling Cold Atoms using Nanofabricated Surfaces: Atom Chips. Phys. Rev. Lett. 84, 4749 (2000). Muller D. Müller, D. Z. Anderson, R. J. Grow, P. D. D. Schwindt, and E. A. Cornell, Guiding Neutral Atoms Around Curves with Lithographically Patterned Current-Carrying Wires. Phys. Rev. Lett. 83, 5194 (1999). Dekker N. H. Dekker, C. S. Lee, V. Lorent, V., J. H. Thywissen, S. P. Smith, M. Drndic, M., R. M. Westervelt, and M. Prentiss, Guiding Neutral Atoms on a Chip. Phys. Rev. Lett. 84, 1124 (2000). Hansel W. Hänsel, J. Reichel, P. Hommelhoff, and T. W. Hänsch, Magnetic Conveyor Belt for Transporting and Merging Trapped Atom Clouds. Phys. Rev. Lett. 86, 608 (2001). Ott H. Ott, J. Fortagh, G. Schlotterbeck, A. Grossmann, and C. Zimmermann, Bose-Einstein Condensation in a Surface Microtrap. Phys. Rev. Lett. 87, 230401 (2001). Potting C. Henkel, S. Pötting, and M. Wilkens, Loss and heating of particles in small and noisy traps. Appl. Phys. B 69, 379 (1999). Trap2 J.-W. Kim, H.-K. Jeong, K. M. Southard, Y.-W. Jun, and J. Cheon, Magnetic Nano-tweezers for Interrogating Biological Processes in Space and Time. Acc Chem Res. 51, 839 (2018). Vuletik-PRL V. Vuletic, T. Fischer, M. Praeger, T. W. Hänsch, and C. Zimmermann, Microscopic Magnetic Quadrupole Trap for Neutral Atoms with Extreme Adiabatic Compression. Phys. Rev. Lett. 80, 1634 (1998). Jian-JPB B. Jian and W. A. van Wijngaarden. A linear array of 11 double-loop microtraps for ultracold atoms. J. Phys. B: At. Mol. Opt. Phys. 47, 215301 (2014). Roy-SciRep R. Roy, P. C. Condylis, V. Prakash, D. Sahagun, and B. Hessmo. A minimalistic and optimized conveyor belt for neutral atoms. Sci. Rep. 7, 13660 (2017). Luo-NJP X. Luo, L. Wu, J. Chen, R. Lu, R. Wang, and L. You. Generating an effective magnetic lattice for ultracold atoms. New J. Phys. 17, 083048 (2015). Singh-JAP V. Singh, V. B. Tiwari, A. Chaudhary, R. Shukla, C. Mukherjee , and S. R. Mishra. Development and characterization of atom chip for magnetic trapping of atoms. J. Appl. Phys. 133, 084402 (2023). Amir-PS A. Mohammadi, S. Ghanbari, and A. Pariz. A two-dimensional permanent magnetic lattice for ultracold atoms. Phys. Scr. 88, 015601 (2013). West2012 A. D. West, K. J. Weatherill, T. J. Hayward, P. W. Fry, T. Schrefl, M. R. J. Gibbs, C. S. Adams, D. A. Allwood, and I. G. Hughes, Realization of the manipulation of ultracold atoms with a reconfigurable nanomagnetic system of domain walls. Nano letters 12, 4065-4069 (2012). Allwood-2006 D. A. Allwood, T. Schrefl, G. Hrkac, I. G. Hughes, and C. S. Adams. Mobile atom traps using magnetic nanowires. Applied physics letters 89, 014102 (2006). Skyrmion1 R. Qin and Y. Wang, Magnetostatics of magnetic skyrmion crystals. New J. Phys. 20, 063029 (2018). Skyrmion2 R. Qin and Y. Wang, Control of ultracold atoms with a chiral ferromagnetic film. Phys. Rev. A 99, 013401 (2019). Skyrmion3 R. Qin and Y. Wang, Skyrmion-based magnetic traps for ultracold atoms. Phys. Rev. A 101, 053428 (2020). Zambrano-SciRep C. Zambrano‑Rabanal, B. Valderrama, F. Tejo, R. G. Elías, A. S. Nunez, V. L. Carvalho‑Santos, and N. Vidal‑Silva, Magnetostatic interaction between Bloch point nanospheres. Sci. Rep. 13, 7171 (2023). oommf M. J. Donahue and D. G. Porter, National Institute of Standards and Technology Interagency Report NISTIR No. 6376, 1999. Landeros P. Landeros, J. Escrig, D. Altbir, M. Bahiana, and J. d'Albuquerque e Castro. Stability of magnetic configurations in nanorings. J. Appl. Phys. 100, 044311 (2006). 
Vagson V. L. Carvalho-Santos, W. A. Moura-Melo, and A. R. Pereira. Miniaturization of vortex-comprising system using ferromagnetic nanotori. J. Appl. Phys. 108, 094310 (2010). Riveros2016 A. Riveros, N. Vidal-Silva, P. Landeros, D. Altbir, E. E. Vogel, and J. Escrig. Magnetic vortex core in cylindrical nanostructures: Looking for its stability in terms of geometric and magnetic parameters. J. Magn. Magn, 401, 848-852 (2016). bobbers F. N. Rybakov, A. B. Borisov, S. Blügel, and N. S. Kiselev: New type of stable particlelike states in chiral magnets. Phys. Rev. Lett. 115, 117201 (2015). MaloSlo A. P. Malozemoff, and J. C. Slonczewski, Magnetic Domain Walls in Bubble Materials (Academic, New York, 1979). Moreno R. Moreno, V. L. Carvalho-Santos, D. Altbir, and O. Chubykalo-Fesenko, Detailed examination of domain wall types, their widths and critical diameters in cylindrical magnetic nanowires. J. Magn. Mag. Mat. 542 168495 (2022). Elias-EPL R.G. Elías, and A. Verga, Magnetization structure of a Bloch point singularity. Eur. Phys. J. B 82, 159 (2011). Pyly-PRB O. V. Pylypovskyi, D. D. Sheka, and Y. Gaididei, Bloch point structure in a magnetic nanosphere. Phys. Rev. B 85, 224401 (2012). Tejo-SciRep F. Tejo, R. H. Heredero, O. Chubykalo‑Fesenko, and K. Y. Guslienko, The Bloch point 3D topological charge induced by the magnetostatic interaction. Sci. Rep. 11, 21714 (2021). Reichel J. Reichel and V. Vuletic, Atom Chips 1st ed. Wiley-VCH (2011). Aldrich S. Aldrich, MilliporeSigma | Life Science Products & Service Solutions. https://www. sigmaaldrich.com/ (2022).
http://arxiv.org/abs/2306.09496v1
20230615204238
Streamlining Input/Output Logics with Sequent Calculi
[ "Agata Ciabattoni", "Dmitry Rozplokhas" ]
cs.LO
[ "cs.LO" ]
Streamlining Input/Output Logics with Sequent Calculi Agata Ciabattoni and Dmitry Rozplokhas July 31, 2023 ============================================================ Input/Output (I/O) logic is a general framework for reasoning about conditional norms and/or causal relations. We streamline Bochman's causal I/O logics via proof-search-oriented sequent calculi. Our calculi establish a natural syntactic link between the derivability in these logics and in the original I/O logics. As a consequence of our results, we obtain new, simple semantics for all these logics, complexity bounds, embeddings into normal modal logics, and efficient deduction methods. Our work encompasses many scattered results and provides uniform solutions to various unresolved problems. § INTRODUCTION Input/Output (I/O) logic is a general framework proposed by <cit.> to reason about conditional norms. I/O logic is not a single logic but rather a family of logics, each viewed as a “transformation engine”, which converts an input (a condition under which the obligation holds) into an output (what is obligatory under that condition). Many different I/O logics have been defined, e.g., <cit.>, and also used as building blocks for causal reasoning <cit.>, laying down the logical foundations for the causal calculus <cit.>, and for legal reasoning <cit.>. I/O logics manipulate Input-Output pairs[Production rules A B, in Bochman's terminology.] (A, B), which consist of boolean formulae representing either conditional obligations (in the case of the original I/O logics) or causal relations (A causes B, in the case of their causal counterparts). Different I/O logics are defined by varying the mechanisms for obtaining new input-output pairs from a given set of pairs (entailment problem). Each I/O logic is characterized by its own semantics. The original I/O logics use a procedural approach, while their causal counterparts adopt bimodels, which in general consist of pairs of arbitrary deductively-closed sets of formulae. Additionally, each I/O logic is equipped with a proof calculus, consisting of axioms and rules but not suitable for proof search. This paper deals with the four original I/O logics 1-4 in <cit.> and their causal counterparts 1-4 in <cit.>. We introduce proof-search-oriented sequent calculi and use them to bring together scattered results and to provide uniform solutions to various unresolved problems. Indeed, <cit.> characterized many I/O logics through an argumentative approach using sequent-style calculi; their calculi are, however, not proof-search-oriented. The first sequent calculi of this kind for some I/O logics, including 1 and 3, were proposed in <cit.>. Their implementation provides an alternative decidability proof, although not an optimal one (entailment is shown to be in Π^P_3). Moreover, the problem of finding proof-search-oriented calculi for 2 and 4 was left open there. A prover for these two logics was introduced in <cit.>. The prover encodes in classical Higher Order Logic their embeddings from <cit.> into the normal modal logics K and KT. Finding an embedding of 1 and 3 into normal modal logics was left as an open problem, which <cit.> indicates as difficult, if possible at all. An encoding of various I/O logics into more complicated logics (adaptive modal logics) is given in <cit.>. Using their procedural semantics, <cit.> defined goal-directed decision procedures for the original I/O logics, without mentioning the complexity of the task.
<cit.> showed that the entailment problem for 1, 2, and 4 is co-NP-complete, while for 3 the complexity was determined to lie within the first and second levels of the polynomial hierarchy, without exact resolution. In this paper we follow a new path that streamlines the considered logics. Inspired by the modal embedding of 2 and 4 in <cit.>, we design well-behaving sequent calculi for Bochman's causal I/O logics. The normal form of derivations in these calculi allows a simple syntactic link between derivability in the original I/O logics and in their causal versions to be established, making it possible to utilize our calculi for the original I/O logics as well. As a by-product: * We introduce a simple possible worlds semantics. * We prove co-NP-completeness and provide efficient automated procedures for the entailment problem; the latter are obtained via reduction to unsatisfiability of a classical logic formula of polynomial size. * We provide embeddings into the shallow fragment of the modal logics 𝐊, 𝐊𝐃 (i.e., standard deontic logic <cit.>), and their extension with axiom 𝐅. These results are uniformly obtained for all four original I/O logics and their causal versions. § PRELIMINARIES In the I/O logic framework, conditional norms (or causal relations) are expressed as pairs (B, Y) of propositional boolean formulae. The semantics is operational, rather than truth-functional. The meaning of the deontic/causal concepts in these logics is given in terms of a set of procedures yielding outputs for inputs. The basic mechanism underpinning these procedures is detachment (modus ponens). On the syntactic side, different I/O logics are obtained by varying the mechanisms of obtaining new input-output pairs from a given set of these pairs. The mechanisms introduced in the original paper <cit.> are based on the following (axioms and) rules ( denotes semantic entailment in classical propositional logic): TOP (⊤, ⊤) is derivable from no premises BOT (, ) is derivable from no premises WO (A,X) derives (A, Y) whenever X Y SI (A,X) derives (B, X) whenever B A AND (A,X_1) and (A,X_2) derive (A, X_1 ∧ X_2) OR (A_1,X) and (A_2,X) derive (A_1 ∨ A_2, X) CT (A,X) and (A ∧ X,Y) derive (A, Y) Different I/O logics are given by different subsets R of these rules, see  <ref>. The basic system, called simple-minded output 1, consists of the rules {TOP, WO, SI, AND}. Its extension with OR (for reasoning by cases) leads to basic output logic 2, with CT (for reusability of outputs as inputs in derivations) to simple-minded reusable output logic 3, and with both OR and CT to basic reusable output logic 4. Their causal counterpart <cit.>, that we denote by i for i= 1, … , 4, extends the corresponding logics with BOT. Given a set of pairs G and a set R of rules, a derivation in an I/O logic of a pair (B, Y) from G is a tree with (B, Y) at the root, each non-leaf node derivable from its immediate parents by one of the rules in R, and each leaf node is an element of G or an axiom from R. We indicate by G ⊢_OUT∗ (B,Y) that the pair (B, Y) is derivable in the I/O logic OUT∗ from the set of pairs in G (entailment problem). We will refer to (B, Y) as the goal pair, to the formulae B and Y as the goal input and goal output respectively, and to the pairs in G as deriving pairs. A derivation in I/O logic is a sort of natural deduction proof, acting on pairs, rather than formulae. This proof theory is however not helpful to decide whether G ⊢_OUT∗ (B,Y) holds, or to prove metalogical results (e.g., complexity bounds). 
The main reason is that derivations have no well-behaved normal forms, and in general are difficult to find. <cit.> introduced the first proof-search oriented calculi, which operate effectively only in the absence of OR (hence not for 2 and 4). The calculi use sequents that manipulate pairs expressed using the conditional logic connective >. § SEQUENT CALCULI FOR CAUSAL I/O LOGICS We present sequent-style calculi for the causal I/O logic 1-4. Their basic objects are (A_1, X_1), …, (A_n, X_n)BY dealing with pairs, as well as A_1, … A_n B_1, … , B_m dealing with boolean formulae (meaning that {A_1, …, A_n} (B_1 ∨…∨ B_m)). Our calculi are defined by extending the sequent calculus LK[We assume the readers to be familiar with LK (a brief overview is given in  <ref>) .] for classical logic with three rules manipulating I/O sequents: one elimination rule —different for each logic— that removes one of the deriving pair while modifying the goal pair, and two concluding rules that transform the derivation of the goal pair into an LK derivation of either the goal input or the goal output. The latter rules, which are the same for all the considered logics, are in  <ref>. B 1[]GBY Y 1[]GBY A derivation in our calculi is a finite labeled tree whose internal nodes are I/O or LK sequents s.t. the label of each node follows from the labels of its children using the calculus rules. We say that an I/O sequent (A_1, X_1), …, (A_n, X_n)BY is derivable if all the leaves of its derivation are LK axioms. A derivation of an I/O sequent consists of two phases. Looking at it bottom up, we first encounter rules dealing with pairs (pair elimination and concluding rules) followed by LK rules. The calculi, in a sense, uphold the ideological principles guiding I/O logics: pairs (i.e., conditional norms) are treated separately from the boolean statements. It is easy to see that using () and () we can derive TOP and BOT; their soundness in the weakest causal I/O logic 1 is proven below. () and () are derivable in 1. If B we have the following derivation in 1: from (,) and B (i.e., B) we get (B, ) by SI; the required pair (B,Y) follows by WO. Assume Y. From (⊤,⊤) and Y (i.e., ⊤ Y) by WO we get (⊤, Y), from which (B,Y) follows by SI. Henceforth, when presenting derivations in our calculi, we will omit the LK sub-derivations. §.§ Basic Production Inference 2 The calculus 2 for the causal[Called basic production inference in <cit.>.] basic output logic 2 is obtained by adding to the core calculus (consisting of LK with the rules () and ()) the pair elimination rule 2 in  <ref>. The rule 2 is inspired by the embedding in <cit.> of 2 into the modal logic 𝐊: (A_1, X_1) … (A_n, X_n) ⊢_2 (B, Y) iff (∗) (A_1 →□ X_1) … (A_n →□ X_n) B □ Y is derivable in 𝐊. To provide the rule's intuition we make use of the sequent calculus GK for 𝐊 in <cit.>. GK extends LK with the following rule for introducing boxes (or eliminating them, looking at the rule bottom up): 0.90A_1 … A_n B1[]□ A_1, …, □ A_n □ B To prove the sequent (∗) in GK we can apply the LK rule for → to one of the implications (A_i →□ X_i) on the left. This creates two premises: (a) G' B K□ Y A_i and (b) G' B □ X_i K□ Y (where G' is the set of all implications on the left-hand side but (A_i →□ X_i)). 
Now (a) G', B ⊢_K □Y, A_i is equivalent in 𝐊 to (i.e., derivable in GK if and only if so is) the sequent G', (B ∧ ¬A_i) ⊢_K □Y, which, using the embedding again, corresponds to the first premise of 2; and (b) G', B, □X_i ⊢_K □Y is equivalent (for suitable G') to G', B ⊢_K □(Y ∨ ¬X_i), which corresponds to the second premise of 2. This leads to the following rule for 2, eliminating an arbitrary deriving pair: from the premises G ⊢ (B ∧ ¬A, Y) and G ⊢ (B, Y ∨ ¬X), the rule 2 derives (A, X), G ⊢ (B, Y). Suppose now that all implications corresponding to deriving pairs have been eliminated this way in the sequent calculus for 𝐊. We are then left with a sequent B' ⊢_K □Y' for some modified formulae B' and Y', and there are essentially two ways to proceed: derive B' ⊢_K without using □Y', or delete B' by weakening, drop the □ modality with the modal rule, and derive ⊢_K Y'. In both cases all remaining formulae are propositional, so the rest of the derivation actually belongs to LK (we denote LK-sequents by Γ ⊢ Δ). These two alternatives correspond to the two concluding rules () and () of <ref>, which switch to the phase of usual classical derivability in LK: each of them focuses on a single component of the goal pair and moves from the setting of I/O pairs to the usual setting of classical propositions. Thus, 2 consists of the pair elimination rule 2 and the two concluding rules () and ().
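To make the proof-search reading of 2 concrete, the sketch below implements a naive decision procedure mirroring the calculus: it first tries the two concluding rules and otherwise eliminates a deriving pair with 2, recursing on both premises; blind elimination suffices because, as discussed below, the rule is invertible. The tuple encoding of formulae, the helper names, and the brute-force tautology test (exponential, adequate only for toy examples) are our own illustrative choices and are not part of the paper.

```python
from itertools import product

# Propositional formulas as nested tuples, e.g. ('and', ('var', 'a'), ('not', ('var', 'b'))).

def ev(f, v):
    op = f[0]
    if op == 'var': return v[f[1]]
    if op == 'not': return not ev(f[1], v)
    if op == 'and': return ev(f[1], v) and ev(f[2], v)
    if op == 'or':  return ev(f[1], v) or ev(f[2], v)
    raise ValueError(op)

def atoms(f):
    return {f[1]} if f[0] == 'var' else set().union(*(atoms(g) for g in f[1:]))

def tautology(f):
    vs = sorted(atoms(f))
    return all(ev(f, dict(zip(vs, bits))) for bits in product([False, True], repeat=len(vs)))

def unsat(f):
    return tautology(('not', f))

def derivable_C2(pairs, B, Y):
    """Naive proof search for the I/O sequent  pairs |- (B, Y)  in the calculus for causal 2."""
    # Concluding rules: the goal input is unsatisfiable, or the goal output is a tautology.
    if unsat(B) or tautology(Y):
        return True
    # Pair elimination: pick a deriving pair (A, X) and prove both premises.
    for k, (A, X) in enumerate(pairs):
        rest = pairs[:k] + pairs[k + 1:]
        if derivable_C2(rest, ('and', B, ('not', A)), Y) and \
           derivable_C2(rest, B, ('or', Y, ('not', X))):
            return True
    return False

a, b, x, y = (('var', s) for s in 'abxy')
print(derivable_C2([(a, x)], ('and', a, b), ('or', x, y)))  # True  (SI and WO are admissible)
print(derivable_C2([(a, x)], b, x))                         # False (b does not entail a)
```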
We can now move away from the modal intuition and study the calculus 2 on its own; in particular, we prove below its soundness and completeness with respect to 2. We start by describing a useful characterization of derivability in 2 of an I/O sequent (A_1, X_1), …, (A_n, X_n) ⊢ (B, Y) via derivability of certain sequents in LK. (X) will denote the set of all partitions of the set X, i.e., (X) = { (I,J) | I ∪ J = X, I ∩ J = ∅}. Notice that if a concluding rule () or () can be applied to the conclusion of 2, it can also be applied to its premises. This observation implies that if (A_1, X_1), …, (A_n, X_n) ⊢ (B, Y) is derivable in 2, there is a derivation in which the concluding rules are applied only when all deriving pairs are eliminated. We use this I/O normal form of derivations in the proof of the following lemma. (A_1, X_1), …, (A_n, X_n) ⊢ (B, Y) is derivable in 2 iff for all partitions (I, J) ∈ ({1, …, n}), either B ⊢ {A_i}_{i ∈ I} or {X_j}_{j ∈ J} ⊢ Y is derivable in LK. By induction on n. Base case: n = 0. The only partition is (∅, ∅). From the derivability of B ⊢ or ⊢ Y, the sequent ⊢ (B, Y) follows by either () or (); the converse also holds. Inductive case: from n to n+1. Let G = {(A_1, X_1), …, (A_{n+1}, X_{n+1})} and consider only derivations in 2 in I/O normal form (so the last applied rule can only be 2). We characterize the condition under which there exists a derivation of G ⊢ (B, Y) whose last applied rule is 2 eliminating a pair (A_k, X_k), for some k ∈ {1, …, n+1}. This application leads to two premises: G' ⊢ (B ∧ ¬A_k, Y) and G' ⊢ (B, Y ∨ ¬X_k), where G' = G ∖ {(A_k, X_k)}. By the inductive hypothesis, the derivability of these premises is equivalent to the derivability of the following sequents, for each (I', J') ∈ ({1, …, n+1} ∖ {k}): * (a1) B ∧ ¬A_k ⊢ {A_i}_{i ∈ I'} or (a2) {X_j}_{j ∈ J'} ⊢ Y, and * (b1) B ⊢ {A_i}_{i ∈ I'} or (b2) {X_j}_{j ∈ J'} ⊢ Y ∨ ¬X_k. (a1) is equivalent in LK to (a1)' B ⊢ {A_i}_{i ∈ I' ∪ {k}}, and (b2) to (b2)' {X_j}_{j ∈ J' ∪ {k}} ⊢ Y. Hence (a1)', (a2) give the required condition for the partition (I' ∪ {k}, J'), while (b1), (b2)' give it for the partition (I', J' ∪ {k}). With this characterization it is easy to see, first, that we can conclude derivations in the same manner as with the () and () rules even when the set of deriving pairs is not empty: the rules deriving G ⊢ (B, Y) from the LK-sequent B ⊢ and from the LK-sequent ⊢ Y are admissible in 2. Second, we can prove that the rule 2 is invertible (as in the corresponding modal sequent calculus): if (A, X), G ⊢ (B, Y) is derivable in 2, then G ⊢ (B ∧ ¬A, Y) and G ⊢ (B, Y ∨ ¬X) are also derivable in 2. The soundness and completeness proof of 2 makes use of the admissibility in the calculus of the structural rules for I/O sequents (weakening, contraction, and cut) in <ref>. Recall that a rule is admissible if its addition does not change the set of sequents that can be derived. The structural rules in <ref> are admissible in 2. By Lem. <ref> we can reduce the admissibility of these structural rules to the admissibility of weakening, contraction, and cut in LK. Consider the case of cut. Let G = {(D_1, W_1), …, (D_m, W_m)}, G' = {(A_1, X_1), …, (A_n, X_n)} and (A_{n+1}, X_{n+1}) = (C, Z). Now G, G' ⊢ (B, Y) is derivable in 2 iff for any (I_1, J_1) ∈ ({1, …, n}) and (I_2, J_2) ∈ ({1, …, m}), either B ⊢ {A_i}_{i ∈ I_1}, {D_i}_{i ∈ I_2} or {X_j}_{j ∈ J_1}, {W_j}_{j ∈ J_2} ⊢ Y is derivable in LK. It is tedious but easy to see that this holds by applying Lem. <ref> to the hypotheses (C, Z), G ⊢ (B, Y) and G' ⊢ (C, Z), and using the structural rules of LK. (Full proof in <ref>.) G ⊢ (B, Y) is derivable in 2 iff (B, Y) is derivable from the pairs in G in 2.
(Completeness) Assume that (B, Y) is derivable in 2. We prove by induction on the derivation tree that for each pair (A, X) occurring in it, the I/O sequent G ⊢ (A, X) is derivable in 2. For the case (A, X) ∈ G, the premise G' ⊢ (A ∧ ¬A, X) follows by () from the LK-sequent A ∧ ¬A ⊢, the premise G' ⊢ (A, X ∨ ¬X) follows by () from ⊢ X ∨ ¬X, and 2 then yields (A, X), G' ⊢ (A, X). We show the case of SI (recall that B ⊢ A iff B ∧ ¬A ⊢): by the inductive hypothesis G ⊢ (A, Y) is derivable; the sequents ⊢ (B ∧ ¬A, Y) and ⊢ (B, Y ∨ ¬Y) follow by () and (), 2 gives (A, Y) ⊢ (B, Y), and a cut with G ⊢ (A, Y) yields G ⊢ (B, Y). For AND: we derive (B, X_1), (B, X_2) ⊢ (B, X_1 ∧ X_2), apply cut twice with the sequents G ⊢ (B, X_1) and G ⊢ (B, X_2) obtained from the inductive hypothesis, and conclude G ⊢ (B, X_1 ∧ X_2) after a number of applications of contraction. The claim follows by Lem. <ref>. (Full proof in Appendix <ref>.) (Soundness) See Lem. <ref> and <ref> for the rule 2. §.§ Causal Production Inference 4 The calculus 4 for the causal version of reusable output logic 4 extends the core calculus (consisting of LK with the rules () and ()) with the pair elimination rule 4 in <ref>. As 4 has a similar embedding <cit.> into the modal logic T, we can repeat the same calculus construction for it as well; the only thing that actually needs to be modified is the pair elimination rule. The difference between the sequent calculi for K and T is that the latter allows box modalities in the antecedent to be removed. In our case, the boxed formula □X_i enters the antecedent when we eliminate the implication (A_i → □X_i) corresponding to a deriving pair. Afterwards, the formula X_i can be used in the antecedent in both its boxed and unboxed versions. We reflect this in the elimination rule by adding X to both the goal input and the goal output in the second premise (see rule 4 in <ref>). We can then establish all the same properties for the calculus with this modified elimination rule (which we denote 4). Due to the shape of the rule 4, inspired by the normal modal logic embedding of 4 in <cit.>, the statement of the characterization lemma needs to be amended; the proof of this lemma is similar to the one for 2 (see the full proof in Appendix <ref>). (A_1, X_1), …, (A_n, X_n) ⊢ (B, Y) is derivable in 4 iff for all (I, J) ∈ ({1, …, n}), either B, {X_j}_{j ∈ J} ⊢ {A_i}_{i ∈ I} or {X_j}_{j ∈ J} ⊢ Y is derivable in LK. G ⊢ (B, Y) is derivable in 4 iff the pair (B, Y) is derivable from the pairs in G in 4. (Completeness) To derive CT in 4 we first derive (A, X), (A ∧ X, Y) ⊢ (A, Y), and then apply cut and contraction (<ref>). The claim follows by the admissibility of these structural rules in 4, which can be reduced to the admissibility of the structural rules in LK as for 2. (Full proof in Appendix <ref>.) (Soundness) Replace in <ref> the subtree that derives (B ∧ A, Y ∨ ¬X) from the pair (B, Y ∨ ¬X) by the following derivation, which uses the rule CT: from (A, X), SI gives (B ∧ A, X); from (B ∧ X, Y ∨ ¬X), SI gives (B ∧ A ∧ X, Y ∨ ¬X); and CT then yields (B ∧ A, Y ∨ ¬X). §.§ Production Inference 1 and Regular Production Inference 3 The calculi 1 and 3 for the causal simple-minded output 1 and simple-minded reusable output 3 consist of LK with () and () extended with the pair elimination rules 1 and 3 in <ref>, respectively. Unlike for 2 and 4, there is no modal embedding in the literature to provide guidance for the development of the pair elimination rules for 1 and 3. These rules are instead designed by appropriately modifying 2 and 4. Indeed, 1 and 3 impose restrictions on 2 and 4, respectively, by prohibiting the combination of inputs, and the rules 1 and 3 are defined to reflect this limitation. Due to the LK premise in their peculiar rules, derivations in 1 and 3 have a simpler form compared to
derivations in 2 and 4; this form could be exploited for the soundness and completeness proof. We proceed instead as for the latter calculi by proving the characterization lemma. This lemma will be key to solve the open problems about computational bounds and modal embeddings for 1 and 3. The proof of the lemma for 1 and 3 is less straightforward than for the other logics. The intuition here is that the characterization considers all possible ways to apply the rule 3 (or 1), by partitioning the premises (A_1, X_1), …, (A_n, X_n) into two disjoint sets (I of remaining deriving pairs and J of eliminated pairs). We will focus on the lemma for 3, the one for 1 being a simplified case (with a very similar proof). Its proof relies on the following result: If (A,X), GBY is derivable in 3, then so is GB ∧ XY ∨ X. Easy induction on the length of the derivation. We proceed by case distinction on the last applied rule. (Full proof in Appendix <ref>). Therefore, if GBY is derivable in 3, by applying this semi-invertibility to a number of pairs we can conclude that all sequents of the form {(A_i, X_i)}_i ∈ IB ∧⋀_j ∈ J X_jY ∨⋁_j ∈ J X_j are derivable for any partition (I,J) ∈({1,…,n}) and, in particular, can not be “stuck” (here meaning that at least one application of the rules of 3 is possible such that LK-premise of this rule application is derivable). It is important to note, that we talk here not only about the sequents that can appear in some proof of the end-sequent GBY in 3, but about all possible sequents of such form. At the same time, if GBY is not derivable in 3, then at least one of these sequents is stuck (since if we run a proof search for the initial underivable sequent it will get stuck at some point). The property of “getting stuck” when applying the rules of 3 can be easily described via derivability of LK-sequents that occur in possible rules application, and using this, we get a characterization in terms of all partitions that is surprisingly similar to the characterization lemmas before. (A_1, X_1), …, (A_n, X_n)BY is derivable in 3 iff for all (I, J) ∈({1, … , n}), one of the following holds: * B, {X_j}_j ∈ J A_i is derivable in LK for some i ∈ I, * B, {X_j}_j ∈ J is derivable in LK, * {X_j}_j ∈ J Y is derivable in LK. (⇒): Let (I, J) ∈({1, … , n}) be any partition. By (several application of)  <ref> to each (A_j, X_j) with j ∈ J, we get that the I/O sequent (∗) { (A_i, X_i) | i ∈ I }B ∧⋀_j ∈ J X_jY ∨⋁_j ∈ J X_j is derivable in 3. We consider the last rule (r) applied in the derivation of (∗). Three cases can arise: * (r) = 3 eliminating the deriving pair (A_k, X_k) with k ∈ I; hence B ∧⋀_j ∈ J X_j A_k (and therefore B, { X_j }_j ∈ J A_k) is derivable in LK, * (r) = then B, { X_j }_j ∈ J is derivable in LK. * (r) = then { X_j }_j ∈ J Y is derivable in LK. (⇐): We stepwise construct a derivation in 3 of (A_1, X_1), …, (A_n, X_n)BY. We start with (I,J) = ({1,…,n}, ∅), and we distinguish the three cases from the assumption of the lemma: (1) B, {X_j}_j ∈ J A_i for some i ∈ I, (2) B, {X_j}_j ∈ J, and (3) {X_j}_j ∈ J Y. If either (2) or (3) holds the derivation follows by applying a concluding rule (J = ∅). If (1) holds, we apply 3 bottom up getting {(A_t, X_t)}_t ∈{1,…,n}∖{i}B ∧ X_iY ∨ X_i as the second premise. We now apply the same reasoning to this latter sequent (considering the partition (I, J) = ({1,…,n}∖{i}, {i})), and keep applying it for the second premise of 3, until a concluding rule is applied. 
This will eventually happen since I loses the index of the eliminated pair at each step. The characterization lemma for 1 has the following formulation (the proof is similar to the one for 3). (A_1, X_1), …, (A_n, X_n)BY is derivable in 1 iff for all partitions (I, J) ∈({1, … , n}), at least one of the following holds: * B A_i is derivable in LK for some i ∈ I, * B is derivable in LK, * {X_j}_j ∈ J Y is derivable in LK. GBY is derivable in 1 (3) iff (B, Y) is derivable from the pairs in G in 1 (3). (Completeness) The proof uses the admissibility of the structural rules in  <ref>. The case (A,X)AX is now handled as follows: 0.90A A X ∨ X1[]AX ∨ X2[1](A,X)AX The derivation of the rule SI directly uses the premise B A, instead of the rule . (Full proof in  <ref>). (Soundness): For 1, the derivation of 1 is obtained by juxtaposing to the subderivation of (B ∧ A, Y) from (A,X) and (B,Y ∨ X) in Fig. <ref> the following: 0.90⋮1(B ∧ A, Y)1[SI](B, Y) noticing that the side condition B A of the rule 1 implies that B and B ∧ A are classically equivalent. The soundness of the rule 3 is very similar, since the translation of the rule 4 from the proof of Theorem <ref> also contains a sub-derivation of the pair (B ∧ A, Y) from (A,X) and (B ∧ X, Y ∨ X) which uses only the rules of 3 (WO, SI, AND and CT). § CAUSAL I/O LOGICS VS. ORIGINAL I/O LOGICS We establish a syntactic correspondence between derivability in the original I/O logics and in their causal version. This correspondence, obtained using the sequent calculi 1-4, will enable us to use them for 1-4, and to transfer all results arising from the calculi for the causal I/O logics to the original I/O logics. Note that 1-4 rely on the axiom BOT, which is absent in the original I/O logics. An inspection of the soundness proofs for our calculi shows that BOT is solely employed in the translation of the rule (Lemma <ref>). Can we simply remove this rule and hence get rid of the axiom BOT? Yes, but only for 1 and 3, where, as evidenced by the completeness proof, is used to derive BOT and is not utilized elsewhere. Hence, by removing the rule from 1 and 3 we get sequent calculi for 1 and 3. These calculi are close to the sequent calculi inspired by conditional logics introduced in <cit.>. The same does not hold for 2 and 4, where is needed, e.g., to derive SI. Instead of developing ad hoc calculi to handle the original I/O logics, we leverage 1-4 using the following result: (A_1, X_1), …, (A_n, X_n) ⊢_k (B, Y) iff (A_1, X_1), …, (A_n, X_n) ⊢_k (B, Y) and X_1, …, X_n Y in classical logic, for each k= 1, … , 4. (⇒) Derivability in k implies derivability in the stronger logic k. The additional condition X_1, …, X_n Y can be proved by an easy induction on the length of the derivation in the original I/O logics: it is enough to check that, for every rule, if the outputs of all (pair-)premises follow from X_1 ∧…∧ X_n, then so does the output of the (pair-)conclusion. (⇐) Consider our sequent calculi 1 - 4. The translation constructed in their soundness theorem shows how to map a derivation of the I/O sequent (A_1, X_1), …, (A_n, X_n)BY in k into an I/O derivation of (B,Y) from the pairs {(A_1, X_1), …, (A_n, X_n)} using the rules of the logic k. As observed before, BOT appears in this transformed derivation only inside sub-derivations of the pairs (B', Y') derived in k by the rule . We can replace every such sub-derivation with a derivation that does not contain the axiom BOT and uses only the rules of the weakest logic 1.
This latter derivation relies on the premise B' of the rule (which implies B'), on the condition X_1 … X_n Y from the statement of the lemma, and on the fact that Y Y', since in all our calculi the goal output in the premises of the elimination rules is the same as or weaker than the goal output in the conclusion. The required derivation is the following: 0.90(A_1, X_1)1[SI](, X_1)…(A_n, X_n)1[SI](, X_n)3[AND×(n-1)](, X_1 ∧…∧ X_n)1[WO](, Y')1[SI](B', Y') After replacing all the indicated sub-derivations of (B', Y') with the ones above, we obtain a derivation of (B, Y) that does not use the axiom BOT, and thus (B, Y) is derivable in the original I/O calculus in <cit.> for k. The constructive proof above heavily relies on the restricted form of the I/O derivations resulting from translating our sequent derivations. Finding ways to eliminate the use of the BOT axiom in arbitrary I/O derivations within k, if at all possible, would be a challenging task. The power of structural proof theory lies precisely in its capacity to examine only well-behaved derivations. § APPLICATIONS Our proof-theoretic investigation is used here to establish the following results for the original and the causal I/O logics: uniform possible worlds semantics (Sec. <ref>), co-NP-completeness and automated deduction methods (Sec. <ref>), and new embeddings into normal modal logics (Sec. <ref>). §.§ Possible Worlds Semantics We provide possible worlds semantics for both the original and the causal I/O logics. Our semantics is a generalization of the bimodels semantics in <cit.> for 2; it turns out to be simpler than the bimodels for the remaining causal logics, and simpler than the procedural semantics for the original I/O logics. As we will see, this semantics facilitates clean and uniform solutions to various open questions about I/O logics that had previously been only partially addressed. First, notice that a contrapositive reading of the characterization lemmas leads to countermodels for non-derivable statements in all considered causal I/O logics. These countermodels consist of (a partition and) several boolean interpretations (two for 2, 4 and their causal versions, and (n+2) for 1, 3 and their causal versions) that falsify the LK sequents from the respective lemma statement. We show below that a suitable generalization of these countermodels provides alternative semantic characterizations for both the original and the causal I/O logics. A possible worlds semantics for the causal I/O logics was introduced by <cit.> using bimodels. For the simplest case of 2, a bimodel is a pair of worlds (here `world' can be seen as a synonym for boolean interpretation) corresponding to input and output states. <cit.> A pair (A,X) is said to be valid in a bimodel (, ) if A implies X. The adequacy of this semantics implies, in particular, that G ⊢_2(B, Y) if and only if, for all bimodels, the validity of all pairs from G implies the validity of (B,Y). The notion of bimodels for 1, 3 and 4 is more complex, with input and output states consisting of arbitrary deductively closed sets of formulae, instead of worlds. To construct our semantics, we look at the countermodels provided by the characterization lemma from the point of view of the simplest bimodels of 2.
Lemma <ref> says indeed that if (A_1, X_1), …, (A_n, X_n)BY is not derivable in 2, there is a partition (I, J), and two boolean interpretations and such that: falsifies the LK-sequent B {A_i}_i ∈ I (meaning that B and ⊭ A_i for all i ∈ I) and falsifies the LK-sequent {X_j}_j ∈ J Y (meaning that X_j for all j ∈ J and ⊭ Y). These two interpretations lead to a bimodel that falsifies (A_1, X_1), …, (A_n, X_n)BY; indeed all pairs (A_i, X_i) for i ∈ I are valid in (, ) as ⊭ A_i, all pairs (A_j, X_j) for j ∈ J are valid in (, ) as X_j, but (B, Y) is not valid in (, ) because B and ⊭ Y. Reasoning in a similar way about the countermodels for 1 given by  <ref>, we observe that there are now multiple input worlds, each falsifying in B A_i the input A_i (plus one additional input world that arises from the sequent B). This leads to the following generalization of bimodels with multiple input worlds. An I/O model is a pair (, ) where is the output world, and is a set of input worlds. The definition of validity in an I/O model will be modified to require that the input formula is true in all input worlds, rather than just in the unique input world. This update ensures that the existence of a single input world falsifying A is enough to establish the validity of the pair (A, X). Moreover, the additional ability to reuse outputs as inputs in the logics 3 and 4 can be expressed in these models by the requirement that a triggered output X should hold in the input worlds too. This leads to the following two definitions of validity in I/O models – one for the logics 1 and 2, the other for 3 and 4. * An I/O pair (A,X) is 1-2-valid in an I/O model (, ) if (∀∈. A) implies X. * An I/O pair (A,X) is 3-4-valid in an I/O model (, ) if (∀∈. A) implies (∀ w ∈{}∪. w X). When clear from the context, henceforth we will use the term validity to mean either 1-2-validity (hence referring to the logics 1, 2, 1, and 2) or 3-4-validity (hence referring to 3, 4, 3, and 4). G ⊢_k (B, Y) iff for all I/O models (satisfying the conditions in  <ref>) validity of all pairs from G implies validity of (B,Y). The proofs rely on the characterization lemmas for each of the four logics. We detail the proof for 3, the others being similar. The equivalence is proved by negating both statements. (⇐) Suppose (A_1, X_1), …, (A_n, X_n)BY is not derivable in 3 (and hence in 3). We show the existence of a countermodel. By  <ref> there exists a partition (I, J) such that the sequents {X_j}_j ∈ J Y, B, {X_j}_j ∈ J and B, {X_j}_j ∈ J A_i for all i ∈ I are not derivable in LK. By the soundness and completeness of LK w.r.t. the classical truth-table semantics there exist: * an interpretation , s.t. X_j ∀ j ∈ J and ⊭ Y, * an interpretation _0, s.t. _0 B and _0 X_j ∀ j ∈ J, * for every i ∈ I an interpretation _i, s.t. _i B, _i X_j for all j ∈ J and _i ⊭ A_i. Then the I/O model M = ({_0}∪{_i | i ∈ I}, ) (which has at least one input world) is a countermodel for the derivability of the I/O sequent (A_1, X_1), …, (A_n, X_n)BY. Indeed, every pair (A_i, X_i) for i ∈ I is 3-4-valid in M since A_i is not true in the input world _i; every pair (A_j, X_j) for j ∈ J is 3-4-valid in M since X_j is true in all worlds of M; but the pair (B, Y) is not 3-4-valid since B is true in all input worlds, while Y is not true in the output world . (⇒) Let M = (, ) (with ≠∅) be a countermodel. That is, M 3-4-validates all pairs {(A_1, X_1),…,(A_n, X_n)}, but does not 3-4-validate the pair (B, Y). We prove that the sequent (A_1, X_1) …, (A_n, X_n)BY is not derivable in 3.
We show the existence of a partition (I, J), such that none of the LK sequents {X_j}_j ∈ J Y, B, {X_j}_j ∈ J, and B, {X_j}_j ∈ J A_i (for i ∈ I) is derivable. The claim follows then by  <ref>. For such a partition, we choose J as { j ∀∈. A_j }, and I as the rest of the indices. Notice that X_j is true in all worlds of M for every j ∈ J by the definition of 3-4-validity of (A_j, X_j). Also, the fact that (B, Y) is not 3-4-valid in M means that B holds in all input worlds, but there exists a world w^∗∈{}∪, s.t. w^∗⊭ Y. Then: * {X_j}_j ∈ J Y is not derivable in LK, because this sequent does not hold in the world w^∗. * B, {X_j}_j ∈ J is not derivable in LK, because this sequent does not hold in any input world of M (and there is at least one by the condition ≠∅). * For any i ∈ I there exists an input world _i, s.t. _i ⊭ A_i (by the choice of I). Hence B, {X_j}_j ∈ J A_i is not derivable in LK, as this sequent does not hold in _i. Dropping the condition of having at least one input world leads to models for the original I/O logics. G ⊢_k (B, Y) iff for all I/O models (satisfying the conditions in  <ref>) validity of all pairs from G implies validity of (B,Y). By  <ref>, (B,Y) is derivable from (A_1, X_1), …, (A_n, X_n) in k iff it is derivable in k together with the additional condition X_1, …, X_n Y. We prove that this additional condition is equivalent to the fact that every model with zero input worlds that validates all pairs from G also validates (B,Y). Notice that this will prove the proposition, as the only difference between the proposed semantics for a causal I/O logic and the corresponding original one is that the latter additionally considers models with zero input worlds (see  <ref>). For both notions of validity, the validity of a pair (A,X) in (∅, ) is equivalent to X. Now, X_1, …, X_n Y means that every interpretation that satisfies every X_i also satisfies Y, which is equivalent to the fact that every model (∅, ) (with arbitrary interpretation ) that validates every (A_i, X_i) also validates (B,Y). A natural interpretation for the I/O models in the deontic context regards the input world(s) as (different possible instances of) the real world, and the output world as the ideal world, where all triggered obligations are fulfilled. §.§ Complexity and Automated Deduction We investigate the computational properties of the four original I/O logics and their causal versions. One corollary of our previous results is co-NP-completeness for all of them. Moreover, we can explicitly reduce the entailment problem in all these logics to the (un-)satisfiability of one classical propositional formula of polynomial size, a thoroughly studied problem with a huge variety of efficient tools available. The entailment problem is a co-NP-complete problem for all eight considered I/O logics. The characterization lemmas for the logics 1-4 imply that the non-derivability of a pair from n pairs can be non-deterministically verified in polynomial time by guessing the non-fulfilling partition (consisting of n bits) and then non-deterministically checking the non-derivability of all sequents (at most (n + 2)) for this partition; the latter task amounts to guessing a falsifying boolean assignment for each sequent and can be done in linear time. For the original I/O logics, by  <ref> we also need to verify that the additional condition does not hold (guessing a falsifying boolean assignment). Thus, the entailment problem belongs to co-NP for all considered I/O logics.
The co-NP-completeness follows from the fact that any arbitrary propositional formula Y is classically valid iff (⊤,Y) can be derived from no pairs in any calculus for the considered logics (notice that the additional condition of  <ref> also boils down to the classical validity of Y). We provide an explicit reduction of derivability in I/O logics to classical validity. For 2 and 4 this is already contained in <cit.>. Using the semantics introduced in Sec. <ref>, we obtain this result for Bochman's causal I/O logics and their original version in a uniform way.  <ref> shows that the underivability of GBY in the causal I/O logics is equivalent to the existence of an I/O model that validates all pairs in G, but does not validate (B,Y). For 2 and 4 a countermodel should have exactly one input world, while for 1 and 3 there is always one with at most (|G| + 1) input worlds. We will encode in classical logic the existence of a countermodel to GBY with exactly k input worlds, where k = 1 for the logics 2 and 4, and k = |G| + 1 for the logics 1 and 3. For the encoding, we assign to the input worlds the numbers from 1 to k, and 0 to the output world. Let be the finite set of all propositional variables that occur in the formulae of G or (B,Y). For every variable x ∈, our encoding will use (k + 1) copies of this variable { x^0, …, x^k} with the intuitive interpretation that x^l is true iff x is true in the world number l. For an arbitrary formula A with variables from , let us denote by A^l the copy of A in which every variable x ∈ is replaced by its labeled version x^l. We read the formula A^l as “A is true in the world number l”. The exact connection with GBY is stated below. (A_1, X_1), … , (A_n, X_n) ⊢_k (B, Y) iff the classical propositional formula kn(B,Y)∧⋀_(A,X) ∈ Gkn(A,X) is unsatisfiable, where * kn(A,X) = (⋀_l = 1^k A^l) → X^0 for k = 1,2 * kn(A,X) = (⋀_l = 1^k A^l) → (⋀_l = 0^k X^l) for k = 3,4 We prove the contrapositive version. (⇒) Let ^L be the set of all labeled copies of variables in (^L = { x^l | x ∈, l ∈{0,…,k}}). Suppose there is a valuation v ^L →{0,1} that satisfies the formula in the statement (i.e., v kn(A,X) for all (A,X) ∈ G and v ⊭kn(B,Y)). v can be decomposed into (k + 1) valuations v_l →{0,1}, one for each label (v_l(x) = v(x^l)). It is easy to see that (∗): For every formula A with variables in , v A^l iff v_l A (provable by a straightforward induction). The valuations {v_l} can then be turned into an I/O model M = ({v_1, …, v_k}, v_0). Then, using the reading of v A^l given by (∗), we can see that v kn(A,X) (for k=1,2) iff (A,X) is 1-2-valid in M, and v kn(A,X) (for k=3,4) iff (A,X) is 3-4-valid in M. Therefore, since v satisfies the formula in the statement, M validates all pairs from G and does not validate (B,Y), which implies that GBY is not derivable in k. (⇐) Here, instead of decomposing a valuation of labeled variables into (k + 1) worlds, we use a countermodel ({_1, …, _k}, ) to define a valuation v ^L →{0,1} of labeled variables (with v(x^0) = out(x) and v(x^l) = in_l(x)). The proof proceeds as in the other direction. The result is extended to the original I/O logics via Th. <ref>. (A_1, X_1), … , (A_n, X_n) ⊢_k (B, Y) iff the classical propositional formula ℱ^k_n ∨( Y ∧⋀_i=1^n X_i) is unsatisfiable, where ℱ^k_n is the formula encoding derivability of (A_1, X_1), …, (A_n, X_n)BY in k from  <ref>. The disjunct ( Y ∧⋀_i=1^n X_i) arises from  <ref> (derivability in k is equivalent to derivability in k and the classical entailment of Y from {X_i}_i=1^n). The claim follows by  <ref>.
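To make the reduction concrete, the sketch below builds the propositional formula of the statement above for the causal logics and tests its (un)satisfiability by brute force; a practical implementation would hand the (CNF-converted) formula to an off-the-shelf SAT solver instead. The tuple encoding of formulae, the function names, and the two flags selecting the number of input worlds and the reusable-output encoding are ours and purely illustrative.

```python
from itertools import product

# Formulas as nested tuples: ("var", "p"), ("top",), ("bot",),
# ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g).

def ev(f, v):
    op = f[0]
    if op == "var": return v[f[1]]
    if op == "top": return True
    if op == "bot": return False
    if op == "not": return not ev(f[1], v)
    if op == "and": return ev(f[1], v) and ev(f[2], v)
    if op == "or":  return ev(f[1], v) or ev(f[2], v)
    if op == "imp": return (not ev(f[1], v)) or ev(f[2], v)
    raise ValueError(op)

def vars_of(f):
    return {f[1]} if f[0] == "var" else set().union(set(), *map(vars_of, f[1:]))

def label(f, l):
    """Replace every variable x by its labelled copy x^l ("x is true in world l")."""
    if f[0] == "var":
        return ("var", f"{f[1]}^{l}")
    return (f[0],) + tuple(label(g, l) for g in f[1:])

def conj(fs):
    out = ("top",)
    for g in fs:
        out = ("and", out, g)
    return out

def enc(pair, k, reusable):
    """Encoding of one pair: (/\_{l=1..k} A^l) -> X^0, or, for the
    reusable-output logics, (/\_{l=1..k} A^l) -> (/\_{l=0..k} X^l)."""
    A, X = pair
    head = conj([label(A, l) for l in range(1, k + 1)])
    body = conj([label(X, l) for l in range(0, k + 1)]) if reusable else label(X, 0)
    return ("imp", head, body)

def satisfiable(f):
    vs = sorted(vars_of(f))
    return any(ev(f, dict(zip(vs, bits)))
               for bits in product([False, True], repeat=len(vs)))

def derivable_causal(pairs, goal, one_input_world, reusable):
    """(B,Y) is derivable from the pairs iff the negated goal encoding,
    conjoined with the encodings of all pairs, is unsatisfiable."""
    k = 1 if one_input_world else len(pairs) + 1
    f = ("not", enc(goal, k, reusable))
    for p in pairs:
        f = ("and", f, enc(p, k, reusable))
    return not satisfiable(f)

# Example: with the single pair (a, x), the goal (a, x) is derivable
# but (a, y) is not, in each of the four parameter settings.
a, x, y = ("var", "a"), ("var", "x"), ("var", "y")
for world_flag, reuse_flag in [(True, False), (True, True), (False, False), (False, True)]:
    print(derivable_causal([(a, x)], (a, x), world_flag, reuse_flag),
          derivable_causal([(a, x)], (a, y), world_flag, reuse_flag))
```

For the original I/O logics, by the corollary above one additionally checks the unsatisfiability of the conjunction of the X_i with the negation of Y.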
§.§ Embeddings into Normal Modal Logics We provide uniform embeddings into normal modal logics. The embeddings are a corollary of the soundness and completeness of the I/O logics w.r.t. I/O models. More precisely, we show that GBY in I/O logics iff a certain sequent consisting of shallow formulae only (meaning that the formulae do not contain nested modalities) is valid in suitable normal modal logics. To do so, we establish a correspondence between pairs and shallow formulae. The I/O models already use the terminology of the Kripke semantics that defines normal modal logics. To establish a precise link between the two semantics we only need to define the accessibility relation on worlds. We will treat the set of input worlds as the set of worlds accessible from the output world (see  <ref>). Under this view on input worlds, 1-2-validity (resp. 3-4-validity) of the pair (A,X) is equivalent to the truth of the modal formula □ A → X (resp. □ A → X ∧□ X) in the output world. Also, the conditions on the number of input worlds that are used in Prop. <ref> and Prop. <ref> to distinguish different I/O logics can be expressed in normal modal logics by standard Hilbert axioms. Specifically, the axiom 𝐃: □ A →◇ A forces Kripke models to have at least one accessible world, while 𝐅: ◇ A →□ A forces them to have at most one accessible world. As proved below, the embedding works for the basic modal logic 𝐊 extended with 𝐃 (which results in the well-known standard deontic logic <cit.> 𝐊𝐃), with 𝐅, or with both axioms. Henceforth we abbreviate, e.g., validity in the logics 𝐊 (respectively 𝐊+𝐅) with _𝐊/𝐊+𝐅. (B,Y) is derivable from pairs G in * 1 and 2 iff G^□_1/2_𝐊/𝐊+𝐅□ B → Y * 3 and 4 iff G_3/4^□_𝐊/𝐊+𝐅□ B → Y ∧□ Y * 1 and 2 iff G^□_1/2_𝐊𝐃/𝐊𝐃+𝐅□ B → Y * 3 and 4 iff G_3/4^□_𝐊𝐃/𝐊𝐃+𝐅□ B → Y ∧□ Y where G_1/2^□ = {□ A_i → X_i | (A_i,X_i) ∈ G }, and G_3/4^□ = {□ A_i → X_i ∧□ X_i | (A_i,X_i) ∈ G }. We show these equivalences by transforming the I/O countermodels given by  <ref> and  <ref> into Kripke countermodels for the corresponding modal logic and vice versa. The transformations will be the same for all the considered logics. We detail the case of 3. (⇐) Assume G_3^□_𝐊𝐃□ B → Y ∧□ Y does not hold. Then there exists a Kripke model M in which every world has at least one world accessible from it, and a world w in M, such that w □ A → X ∧□ X for every (A,X) ∈ G and w ⊭□ B → Y ∧□ Y. Let N(w) be the set of all worlds accessible from w in M. Then the I/O model (N(w), w) will be a countermodel for GBY; notice indeed that w □ A → X ∧□ X means exactly 3-4-validity of (A,X) in (N(w), w), so all pairs in G are 3-4-valid in (N(w), w), but (B,Y) is not, and |N(w)| ≥ 1. Hence GBY is not derivable in 3. (⇒) Assume GBY is not derivable in 3. Then there is some I/O model (, ) with || ≥ 1, s.t. all pairs from G are 3-4-valid in (, ) and (B,Y) is not 3-4-valid in (, ). Consider the Kripke model M that consists of worlds ∪{} with accessibility relation defined as shown in  <ref> (all input worlds are accessible from the output world and every input world is accessible from itself). M satisfies the frame condition for 𝐊𝐃, as there is at least one world accessible from the output world (because || ≥ 1), and exactly one world accessible from every input world (itself). The output world satisfies the modal formula □ A → X ∧□ X (with A and X being propositional formulae) in the Kripke model M iff the I/O pair (A,X) is 3-4-valid in (, ). So in M the output world satisfies □ A → X ∧□ X for every pair (A,X) ∈ G, but does not satisfy □ B → Y ∧□ Y. Therefore G_3^□_𝐊𝐃□ B → Y ∧□ Y does not hold. Modal embeddings were already known for the causal logics 2 and 4.
The embedding for 2 was translating the pair (A,X) into the K formula A →□ X. In <cit.> this embedding was stated for 2 and 4 together with the additional condition X_1, … , X_n Y (appearing in our Th. <ref>). Note that moving the modality to inputs allows for a more refined embedding. The validity of the statement (A →□ X), …, (A →□ X) (B →□ Y) is indeed the same in all four target logics we use (K, KD, K + F and KD + F), while the validity of (□ A → X), …, (□ A → X) (□ B → Y) is different. § CONCLUSIONS We have introduced sequent calculi for I/O logics. Our calculi provide a natural syntactic connection between derivability in the four original I/O logic <cit.> and in their causal version <cit.>. Moreover, the calculi yield natural possible worlds semantics, complexity bounds, embeddings into normal modal logics, as well as efficient deduction methods. It is worth noticing that our methods for the entailment problem offer derivability certificates (i.e., derivations) or counter-models as solutions. The efficient discovery of the latter can be accomplished using SAT solvers, along the line of <cit.>. The newly introduced possible worlds semantics might be used to import in I/O logics tools and results from standard modal theory. Our work encompasses many scattered results and presents uniform solutions to various unresolved problems; among them, it contains first proof-search oriented calculi for 2 and 4; it provides a missing[ From  <cit.>: "As a matter of facts, there is no direct (formal) connection between the semantics Bochman proposes and the operational semantics for I/O logic. The linkage between the two is established through the axiomatic characterization: both the possible-worlds semantics and the operational semantics give rise to almost the same axiom system"] direct formal connection between the semantics of the original and the causal I/O logics; it introduces a uniform embedding into normal modal logics, that also applies to 1 and 3, despite the absence in these logics of the OR rule[ From <cit.>: “As far as the authors are aware, it is not possible to characterise the system of simple-minded output (with or without reusability) by relabeling or modal logic in a straightforward way. The OR rule appears to be needed, so that we can work with complete sets.”]; moreover, it settles the complexity of the logics 3 and 3. The latter logic has been used in <cit.> as the base for actual causality and in <cit.>, together with 4, to characterize strong equivalence of causal theories w.r.t. two different semantics: general and causal non-monotonic semantics. Strong equivalence is an important notion as theories satisfying it are ‘equivalent forever’, that is, they are interchangeable in any larger causal theory without changing the general/causal non-monotonic semantics. Furthermore 4 has been used as a base for formalizing legal concepts <cit.>. The automated deduction tools we have provided might be used also in these contexts. In this paper, we have focused on monotonic I/O logics. However, due to their limitations in addressing different aspects of causal reasoning <cit.> and of normative reasoning, several non-monotonic extensions have been introduced. For example <cit.> have proposed non-monotonic extensions that have also been applied to represent and reason about legal knowledge bases, as demonstrated in the work by Robaldo et al. <cit.>. 
Our new perspective on the monotonic I/O logics contributes to a better understanding of them and can provide a solid foundation for exploring non-monotonic extensions. § DISCUSSION §.§ Importance and possible applications in the causal context In the causal context, the monotonic logics are significant in their own right (especially 3 and 4). They are "maximal inference relations that are adequate for reasoning with causal theories: any postulate that is not valid for regular production relations (3) can be 'falsified' by finding a suitable extension of two causal theories that would determine different nonmonotonic semantics, and hence would produce different nonmonotonic conclusions" (<cit.>, Section 4). A more philosophical argument for the importance of the monotonic fragment, as describing the "informational content" of theories, can be found in Subsection 4.1 there. More formally, <cit.> establishes that two causal theories are strongly equivalent w.r.t. the general non-monotonic semantics iff they are equivalent in 3, and strongly equivalent w.r.t. the causal non-monotonic semantics iff they are equivalent in 4. Strongly equivalent theories are 'equivalent forever', that is, they are interchangeable in any larger causal theory without changing the general nonmonotonic semantics. Our paper thus provides (automatable) tools for such reasoning. Additionally, <cit.> shows how actual causality can be defined in terms of 3 (building another non-monotonic level above it); using our framework, a SAT encoding for actual causality in this definition can in principle be constructed, although spelling it out would require quite some detail. §.§ Connection between original and causal I/O logics Our framework establishes a rather simple connection between the original and the causal I/O logics, both in terms of syntactic derivability ( <ref>) and in terms of models ( <ref>). The syntactic connection in  <ref> is perhaps to be expected from the point of view of the procedural semantics (and it appears to be used implicitly at times in the original paper <cit.>); the simple connection at the level of models is more surprising. A relevant remark from the Deontic Handbook (Section 2.4 of the I/O chapter) is the following: "As a matter of facts, there is no direct (formal) connection between the semantics Bochman proposes and the operational semantics for I/O logic. The linkage between the two is established through the axiomatic characterization: both the possible-worlds semantics and the operational semantics give rise to almost the same axiom system."
§.§ Automated reasoning The previous automated procedures for checking derivability in I/O logics that we are aware of are: * <cit.>: goal-directed decision procedures (based on the procedural semantics, with an arbitrary prover for the base logic inside) for the four original I/O logics; * <cit.>: proof search based on sequent calculi for a number of unconstrained and constrained I/O logics (but not 2 and 4); * <cit.>: an embedding of 2 and 4 into HOL, based on the known modal embeddings, which allows the use of automated countermodel generators for HOL (whether this yields a complete procedure is unclear). There is also work by Livio Robaldo (for example <cit.>) on automated reasoning for reified I/O logic — an extension of I/O logic — but it is rather distant from what we are doing. Our approach suggests automating derivability checking by reduction to SAT solving. The reduction is the easy and fast part; SAT solving is the hard part, and many efficient tools are available for it. This "reductionist approach" is standard practice for solving NP-complete problems (although for some applications alternative approaches may be preferable), and many very large combinatorial problems have been solved this way; see, e.g., <cit.>: "SAT solvers, programs that solve SAT formulas, have become extremely powerful over the last two decades. Progress has been by leaps and bounds, starting with the pioneering work by Davis and Putnam until the early 1990s when solvers could handle formulas with thousands of clauses. Today's solvers can handle formulas with millions of clauses. This performance boost resulted in the SAT revolution: encode problems arising from many interesting applications as SAT formulas, solve these formulas, and decode the solutions to obtain answers for the original problems. This is in a sense just using the NP-completeness of SAT: every problem with a notion of "solution" where these solutions are relatively short and where an alleged solution can be verified (or rejected) quickly can be reduced to SAT efficiently. For many years NP-completeness was used only as a sign of "you can not solve it!", but the SAT revolution has put this back on its feet. For many applications, including hardware and software verification, SAT solving has become a disruptive technology that allows problems to be solved faster than by other known means." We expect our approach to be considerably faster than the procedures mentioned above (with the possible exception of the first one), although a proper experimental comparison is left for future work. Another conceptual advantage of our framework is that it provides certificates for both verdicts: if the query is not derivable, the SAT solver returns an I/O countermodel (which is easy to check); if the query is derivable, a terminating proof search in our calculi yields a sequent derivation (which can be translated into an I/O derivation following the proof of soundness), although searching for a derivation may be comparatively inefficient. §.§ Modal embeddings Modal embeddings were previously known only for the causal logics 2 and 4. Interestingly, they were first described for 2 and 4 in the seminal paper on the original I/O logics <cit.>, together with the connection given by  <ref>.
In that paper, the embeddings are followed by the following remark about the barrier to embedding the other two I/O logics (Subsection 5.5): "As far as the authors are aware, it is not possible to characterise the system of simple-minded output (with or without reusability) by relabeling or modal logic in a straightforward way. The OR rule appears to be needed, so that we can work with complete sets." The embeddings we obtain, which are an immediate consequence of the I/O models, cover all considered I/O logics and distinguish the logics only in terms of conditions on the semantic frames (i.e., by choosing a target logic), except for the case of the CT axiom, where we need to modify the embedding. These embeddings differ from the known ones, but they are simple and natural, especially in the case of 1. There is also a known approach <cit.> for encoding both unconstrained and constrained I/O logics in the powerful framework of "adaptive modal logics". It introduces a modality In for input and Out for output, develops axiom schemes specifically tailored to capture the different I/O logics, and shows various applications of the obtained modal logics (in particular, for describing violations and sanctions). This is therefore an embedding in a rather different sense. § ACKNOWLEDGEMENTS Work partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 101034440, and the WWTF project ICT22-023. § LK SEQUENT CALCULUS The sequent calculus LK is a deductive system for (first-order) classical logic, introduced by Gentzen. Its basic objects are sequents, i.e., derivability assertions of the form A_1, …, A_n B_1, …, B_m, where {A_1, …, A_n} and {B_1, …, B_m} are multisets of formulae (assumptions, and conclusions, respectively). Axioms and rules of (the propositional fragment of) LK are as follows (Γ and Δ denote arbitrary multisets of formulae): 1[AX]Γ, x x, Δ 1[ L]Γ, Δ 1[⊤ R]Γ⊤, Δ Γ, A, B Δ 1[∧ L]Γ, A ∧B Δ ΓA, Δ ΓB, Δ 2[∧ R]ΓA ∧B, Δ Γ, A Δ Γ, B Δ 2[∨ L]Γ, A ∨B Δ ΓA, B, Δ 1[∨ R]ΓA ∨B, Δ ΓA, Δ Γ, B Δ 2[→ L]Γ, A →B Δ Γ, A B, Δ 1[→ R]ΓA →B, Δ A derivation in LK is a finite labelled tree whose nodes are labelled with sequents, s.t. the label of each non-leaf node follows from the labels of its children using the calculus rules and the leaves are axioms of the calculus. We say that a sequent A_1, …, A_n B_1, …, B_m is derivable in LK if there is a derivation in LK with this sequent as the root label. There are several known equivalent formulations of LK. The version we have presented is the one known as G3p. In this version, the standard structural rules in  <ref> are admissible (i.e., the addition of these rules to the calculus does not change the set of sequents that can be derived). This makes the resulting derivations particularly well behaved as: (1) they can be constructed by safely applying the calculus rules bottom-up, and (2) they satisfy the subformula property: all formulae occurring in a derivation are subformulae of the sequent to be proved. A sequent A_1, …, A_n B_1, …, B_m is derivable in LK iff A_1, … A_n B_1 ∨…∨ B_m in classical logic. § DETAILED PROOFS [<ref>] For every k ∈{1, 2, 3, 4}, (A_1, X_1), …, (A_n, X_n) (B, Y) in k iff (A_1, X_1), …, (A_n, X_n) (B, Y) in k and X_1, …, X_n Y in classical logic. One way to define the set of outputs, established in the original paper on I/O logics <cit.>, is through the intersection of certain sets of outputs.
Specifically, for all four original I/O logics a set of outputs OUT_i(G, A) for a given set of deriving pairs G and input A is given by OUT_i(G, A) = ⋂_α∈𝒜 Cn(G(α)). Here Cn is a classical consequence operator, G(α) is an abuse of notation denoting the set of all outputs X such that there is some input A ∈α such that (A, X) ∈ G, and 𝒜 is a certain set of sets of formulae (it is defined differently for different original I/O logics, and the choice of these sets determines the logic). For the current lemma, the properties of 𝒜 that are sufficient to know are the fact that it contains only consequence-closed sets (for α∈𝒜, Cn(α) ⊂α) and the fact that the set L of all formulae in the considered language belongs to 𝒜. With this definition, it is easy to study the change if the set of deriving pairs is augmented by the pair (, ) (which is equivalent to addition of BOT axiom). Let's denote G' = G ∪{(, )}. For any α∈𝒜 that does not contain , G'(α) = G(α). The only α∈𝒜 that does contain is L (since it is the only consequence-closed set containing ), and Cn(G'(L)) = L (since it is the consequences of the set of formulae containing ). Now we can clearly see the difference that addition of the (, ) pair brings: [ OUT_i(G, A) = Cn(G(L)) ∩ ⋂_α∈𝒜 α L Cn(G(α)); OUT_i(G ∪{(, )}, A) = ⋂_α∈𝒜 α L Cn(G(α)); ] Cn(G(L)) is exactly the formulae derivable from all deriving outputs, therefore the difference between derivable outputs in logics k and k for all k ∈{1,2,3,4} is given by the additional condition in the statement of the lemma. ================================================= ================================================= The same proof in a different notation: Briefly, the procedural semantics for the original I/O logics are given in terms of the two operators: [ Cn(α) def={ A | α A } where α is an arbitrary set of formulas; A is a formula; G(α) def={ X | (A, X) ∈ G for some A ∈α} where G is an arbitrary set of I/O pairs; α is an arbitrary set of formulas ] They are used to define the operator OUT_k(G, α) which determines the set of outputs that are derivable from the set inputs α for a given set of deriving pairs G. It relates to our formulation of I/O derivability in the following way: we say GBY in k when Y ∈ OUT_k(G, {B}). One of the equivalent definitions of I/O logic defines the operators OUT_k in the following uniform form: OUT_k(G, α) def=⋂{ Cn(G(β)) | β : 𝒫_k(G, α, β) } where predicate 𝒫_k determines which of sets β should generate intersected sets. The definition in this form can be found in the Handbook of Deontic logic <cit.> (Definition 2.1 in the chapter “Input/output Logic”). Only OUT_1 is formulated differently there, but the definition in the form claimed here is also possible and given in the original paper <cit.>. The only three properties of the predicate 𝒫_k that we need are the following ones: * For arbitrary G and α, 𝒫_k(G, α, 𝐋) holds, where 𝐋 is the set of all propositional formulas * For arbitrary G, α and β, if 𝒫_k(G, α, β) then Cn(β) ⊂β. * For arbitrary G, G', α and β, if G(β) = G'(β) then 𝒫_k(G, α, β) ⇔𝒫_k(G', α, β). For all four cases these properties are part of the definition or follow immediately from it. Let's examine this definition for the case when the set of deriving pairs is extended with the pair (, ) (which is equivalent to extension of logic with the axiom BOT). Let G^ = G ∪{(, )}. Comparing Cn(G(β)) and Cn(G^(β)) we need to distinguish two cases: when β = 𝐋 and otherwise: * Suppose β = 𝐋. ∈𝐋 therefore ∈ G^(𝐋) (by the definition of G^). 
Then Cn(G^(𝐋)) = L (since any formula can be derived from ). * Suppose 𝒫(G^, α, β) and β≠𝐋. Then ∉β (since by the first property of 𝒫_k, β is deductively closed, and the only deductively closed set containing is 𝐋). Then G^(β) = G(β) (since G and G^ differ only on ). Then Cn(G^(β)) = Cn(G(β)). Now we can unfold the definition of OUT_k(G^, α) and separate the intersection with the set Cn(G^(𝐋)) (using that 𝒫_k(G', A, β) by the second condition of 𝒫). [ OUT_k(G^, α) = ⋂{ Cn(G^(β)) | β : 𝒫_k(G^, α, β) }; 2-nd cond.= Cn(G^(𝐋)) ∩ ⋂{ Cn(G^(β)) | β : 𝒫_k(G^, α, β), β≠𝐋}; = 𝐋 ∩ ⋂{ Cn(G^(β)) | β : 𝒫_k(G^, α, β), β≠𝐋}; = ⋂{ Cn(G^(β)) | β : 𝒫_k(G^, α, β), β≠𝐋}; = ⋂{ Cn(G^(β)) | β : 𝒫_k(G^, α, β), β≠𝐋}; 3-rd cond.= ⋂{ Cn(G^(β)) | β : 𝒫_k(G, α, β), β≠𝐋}; ] This allows us to express OUT_k(G, α) via OUT_k(G^, α): [ OUT_k(G, α) = ⋂{ Cn(G(β)) | β : 𝒫_k(G, α, β) }; 2-nd cond.= Cn(G(𝐋)) ∩ ⋂{ Cn(G(β)) | β : 𝒫_k(G, α, β), β≠𝐋}; = Cn(G(𝐋)) ∩ OUT_k(G^, α); ] Notice that this is exactly the statement of the lemma, since Y ∈ Cn(G(𝐋)) when α Y classically for α = { X | (A, X) ∈ G for some A ∈𝐋} [<ref>] Rules , and are admissible for the calculus 2. We will use the characterization from  <ref> to prove the admissibility for all three rules by reducing them to the admissibility of the corresponding rules in LK. 5mm Weakening. Let G = {(A_1, X_1), …, (A_n, X_n)}. Denote (A_n+1, X_n+1) = (A, X). Suppose GBY is derivable. Then for every partition (I,J) ∈({1,…,n}), either B {A_i}_i ∈ I or {X_j}_j ∈ J Y is derivable in LK. We can weaken the left sequent by adding the formula A_n+1 on the right: either B {A_i}_i ∈ I, A_n+1 or {X_j}_j ∈ J Y is derivable in LK. Analogously, we can weaken the right sequent by adding the formula X_n+1 on the left: either B {A_i}_i ∈ I or {X_j}_j ∈ J, X_n+1 Y is derivable in LK. Notice that these two statements give exactly the characterization of derivability of (A, B), GBY by  <ref> (first one for the partitions that have index (n+1) in I, and second for the partitions that have index (n+1) in J). 5mm Contraction. Let G = {(A_1, X_1), …, (A_n, X_n)}. Suppose (A, X), (A, X), GBY is derivable. Let's denote (A_n+1, X_n+1) = (A_n+2, X_n+2) = (A,X) to have all pairs indexed for characterisation. Consider the partitions from ({1,…,n+2}) that have the form (I ∪{n+1,n+2},J). For any such partition, we have either B {A_i}_i ∈ I, A, A or {X_j}_j ∈ J Y is derivable in LK. After rewriting the first sequent into B {A_i}_i ∈ I, A using the classical contraction this condition turns exactly into the condition of the characterization for (A,X), GBY (which has (n+1) pairs on the left) for the partition (I ∪{n+1}, J). Now consider the partitions from ({1,…,n+2}) that have the form (I,J ∪{n+1,n+2}). For any such partition, we have either B {A_i}_i ∈ I or {X_j}_j ∈ J, X, X Y is derivable in LK. After rewriting the second sequent into {X_j}_j ∈ J, X Y using classical contraction this condition turns exactly into the condition of the characterization for (A,X), GBY for partition (I, J ∪{n+1}). Notice that these two cases cover all partitions from ({1,…,n+1}), so (A,X), GBY is derivable. 5mm Cut. Let G = {(D_1, W_1) … (D_m, W_m) } and G' = {(A_1, X_1) … (A_n, X_n) } and denote (A_n+1, X_n+1) = (C, Z). For derivability of G, G'BY we need to show that for any partition (I_1, J_1) ∈({1,…,n}) and any partition (I_2, J_2) ∈({1,…,m}) the following holds: (∗) either B {A_i}_i ∈ I_1, {D_i}_i ∈ I_2 or {X_j}_j ∈ J_1, {W_j}_j ∈ J_2 Y is derivable in LK. We first use that (C,Z), GBY is derivable. 
First, we use characterization lemma for the partition (I_1 ∪{n+1}, J_1) (so from the additional pair (C,Z), C goes to the inputs): (1) either B {A_i}_i ∈ I_1, C or {X_j}_j ∈ J_1 Y is derivable in LK. If the right sequent is derivable then the right sequent of (∗) is also derivable and the proof is completed. We proceed in the case when the left sequent is derivable: B {A_i}_i ∈ I_1, C is derivable in LK. Second, we use the characterization lemma for the partition (I_1, J_1 ∪{n+1}) (so from the additional pair (C,Z), Z goes to the outputs): (2) either B {A_i}_i ∈ I_1 or {X_j}_j ∈ J_1, Z Y is derivable in LK. If the left sequent is derivable then the left sequent of (∗) is also derivable and the proof is completed. We proceed in the case when the right sequent is a derivable: (2) {X_j}_j ∈ J_1, Z Y is derivable in LK. Now we use the derivability of G'CZ. We use the characterization lemma for the partition (I_2, J_2): (3) either C {D_i}_i ∈ I_2 or {W_j}_j ∈ J_2 Z is derivable in LK. Now we show that for any possible choice of derivable sequent (left or right) in (1), (2) and (3), we can always derive one of the sequents of (∗) (either by weakening or cut in LK): * If the right sequent of (1) is derivable, then the right sequent of (∗) is derivable by weakening in LK. * If the left sequent of (2) is derivable, then the left sequent of (∗) is derivable by weakening in LK. * If the left sequent of (3) is derivable and the left sequent of (1) is derivable, we can use the LK cut on them and get exactly the left sequent of (∗). * If the right sequent of (3) is derivable and the right sequent of (2) is derivable, we can cut them using the LK cut and get exactly the right sequent of (∗). Thus, we have proved (∗) for all possible cases. [<ref> (I/O cut for 2)] If both GCZ and (C, Z), G'BY are derivable in 2 for some pair (C, Z) then G G'BY is also derivable in 2. Let G = {(D_1, W_1) … (D_m, W_m) } and G' = {(A_1, X_1) … (A_n, X_n) }. By  <ref> to prove derivability of G G'BY it is sufficient to prove that for every leaf either the output is a classically valid formula or the negation of the input is a classically valid formula. Let's take an arbitrary leaf or, equivalently, an arbitrary partition of deriving pairs on those that we take input from for this leaf and those that we take output from for these leaf. For some I ⊂ [1 … n] and J ⊂ [1 … m], let's consider the leaf where we take inputs from pairs { (A_i, X_i) | i ∈ I }∪{ (D_j, W_j) | j ∈ J } and outputs from all other pairs, e.g. the following leaf: B (⋀_i ∈ I A_i) (⋀_j ∈ J D_j)Y (⋁_i ∉I X_i) (⋁_j ∉J W_j) We need to prove that either the negation of the formula B (⋀_i ∈ I A_i) (⋀_j ∈ J D_j) is classically valid or the formula Y (⋁_i ∉I X_i) (⋁_j ∉J W_j) is classically valid. Equivalently, we need to show one of the two following facts: [ (a) B → (⋁_i ∈ I A_i) ∨ (⋁_j ∈ J D_j) is valid or; (b) (⋀_i ∉I X_i) ∧ (⋀_j ∉J W_j) → Y is valid ; ] First, let us consider the derivation (C, Z) G'BY and the leaf B (⋀_i ∈ I A_i)Y ∨ (⋁_i ∉I X_i) ∨ ( Z) in this derivation. Since this leaf belongs to the correct derivation either its output is valid or the negation of its input is valid. In the latter case the proof is completed, because (a) holds (and the disjuncts D_j there are unnecessary for the validity). 
In the former case we know that [ (1) (⋀_i ∉I X_i) ∧ Z → Y is valid ; ] Second, let us consider the derivation (C, Z) G'BY and the leaf B (⋀_i ∈ I A_i) ∧ ( C)Y ∨ (⋁_i ∉I X_i) in this derivation (different from the previous one only in that the deriving pair (C,Z) contributed not to output, but to input). Since this leaf belongs to the correct derivation either its output is valid or the negation of its input is valid. In the former case the proof is completed, because (b) holds (and the conjuncts W_j there are unnecessary for the validity). In the latter case we know that [ (2) B → (⋁_i ∈ I A_i) ∨ C is valid ; ] Now, if the proof is still not completed and we have facts (1) and (2), let us consider the derivation GCZ and the leaf C (⋀_j ∈ J D_i)Y ∨ (⋁_j ∉J W_i) in this derivation. Since this leaf belongs to the correct derivation either its output is valid or the negation of its input is valid. Equivalently, one of the following two facts holds: [ (3) C → (⋁_j ∈ J D_i) is valid or; (3') (⋀_j ∉J W_j) → Z is valid ; ] If (3) holds we can cut it with (2) (using usual cut for classical logic LK) and get required (a). Otherwise, if (3') holds we can cut it with (1) (using usual cut for classical logic LK) and get required (b). [<ref> (Soundness and completeness of 2)] GBY is derivable in 2 iff the pair (B, Y) is derivable from the pairs in G in 2. Completeness. Proven constructively by induction on I/O derivation in 2. Suppose we have a derivation of (B,Y) from pairs G in 2. We prove by induction on this derivation that for each occurring pair (A, X), the derivability statement GAX is derivable in 2. For the base case when the pair (A, X) belongs to G (and so G = {(A, X)}∪ G' for some G') we have the following derivation. A ∧ A 1[]G'A ∧ AX X ∨ X1[]G'AX ∨ X2[2](A,X), G'AX For I/O rules with zero premises (TOP and BOT), we have the following derivations. ⊤1[]G⊤⊤ and 1[]G For I/O rules with one premise and a side condition of classical entailment (the rules WO and SI) we take the following derivations that apply cut to the inductive hypothesis. Notice the LK sequents in the derivations marked by the blue color. These sequents are derivable because they are equivalent to the side condition of a considered I/O rule (X Y for WO, and B A for SI). For WO: by I.H. ⋮1GBXB ∧ B 1[]B ∧ BY Y ∨ X1[]BY ∨ X2[2](B,X)BY2[Cut]GBY For SI: by I.H. ⋮1GAYB ∧ A 1[]B ∧ AY Y ∨ Y1[]BY ∨ Y2[2](A,Y)BY2[Cut]GBY For the rules with two premises (AND and OR) we use the following derivations. For AND: B ∧ B 1[](B, X_2)B ∧ BX_1 ∧ X_2B ∧ B 1[]B ∧ B(X_1 ∧ X_2) ∨ X_1 (X_1 ∧ X_2) ∨ X_1 ∨ X_21[]B(X_1 ∧ X_2) ∨ X_1 ∨ X_22[2](B, X_2)B(X_1 ∧ X_2) ∨ X_12[2](B, X_1) (B, X_2)BX_1 X_2 For OR: (A_1 ∨ A_2) ∧ A_1 ∧ A_2 1[](A_1 ∨ A_2) ∧ A_1 ∧ A_2Y Y ∨ Y1[](A_1 ∨ A_2) ∧ A_1Y ∨ Y2[2](A_2, Y)(A_1 ∨ A_2) ∧ A_1Y Y ∨ Y1[](A_2, Y)A_1 ∨ A_2Y ∨ Y2[2](A_1, Y) (A_2, Y)A_1 ∨ A_2Y We afterwards cut these derivations twice with both inductive hypotheses and use contraction to remove duplicate pairs from the result. We show how to do it for AND (for OR it is exactly the same): by I.H. ⋮1GBX_1by I.H. ⋮1GBX_2(B, X_1) (B, X_2)BX_1 X_22[]G (B, X_1)BX_1 X_22[]G, GBX_1 X_21[× n]GBX_1 X_2 Soundness. Proven constructively by induction on derivations in 2. Inductive base: the rules and . If B (which means B in classical logic) we have the following I/O derivation for the required pair (B,Y) in 2. 1[BOT](,)1[SI](B, )1[WO](B, Y) If Y (which means ⊤ Y in classical logic) we have the following I/O derivation for the required pair (B, Y) in 2. 
1[TOP](⊤,⊤)1[WO](⊤, Y)1[SI](B, Y) The inductive step is the rule 2. By inductive hypotheses we have that both pairs (B ∧ A, Y) and (B, Y ∨ X) are derivable from pairs G in 2. We now need to prove that (B,Y) is derivable from the pairs in G ∪{(A,X)}. We do it with the following I/O derivation of the pair (B,Y) from the pairs (A,X), (B ∧ A,Y) and (B, Y ∨ X) in the logic 2. If you take it and plug in the derivations of (B ∧ A,Y) and (B, Y ∨ X) from G obtained by the inductive hypotheses you will get the required derivation from G ∪{(A,X)}. (A, X)1[WO](A, Y ∨ X)1[SI](B ∧ A, Y ∨ X)(B, Y ∨ X)1[SI](B ∧ A, Y ∨ X)2[AND](B ∧ A, (Y ∨ X) ∧ (Y ∨ X))1[WO](B ∧ A, Y ∨ (X ∧ X))1[WO](B ∧ A, Y)(B ∧ A, Y)2[OR]((B ∧ A) ∨ (B ∧ A), Y)1[SI](B ∧ (A ∨ A), Y)1[SI](B, Y) [<ref>] (A_1, X_1), …, (A_n, X_n)BY is derivable in 4 iff for all partitions (I, J) ∈({1, … , n}), either B, {X_j}_j ∈ J{A_i}_i ∈ I or {X_j}_j ∈ J Y is derivable in LK. Analogous to the proof of  <ref>. The only difference is that the sequents that we have in the conditions for derivability obtained from the inductive hypothesis applied to premises in the inductive step are different (due to modifications in both pair elimination rule and characterization statement). Specifically, the existence of the derivation that starts with the application of the rule 4 that eliminates a pair (A_k, X_k), is equivalent (by applying inductive hypothesis to the premises) to the derivability of the following sequents for all (I', J') ∈({1, …, n+1}∖{k}): * (a1) B ∧ A_k, {X_j}_j ∈ J'{A_i}_i ∈ I' or (a2) {X_j}_j ∈ J' Y * (b1) B ∧ X_k, {X_j}_j ∈ J'{A_i}_i ∈ I' or (b2) {X_j}_j ∈ J' Y ∨ X_k But these two modifications complement each other: they both introduce deriving outputs on the left side of the first sequent. Therefore when moving A_k and X_k to the other deriving inputs/outputs, we are also able to move X_k from B ∧ X_k to {X_j}_j ∈ J' on the same side. This way we again get two equivalent conditions for every (I', J') ∈({1, …, n+1}∖{k}): * (a1) B, {X_j}_j ∈ J'{A_i}_i ∈ I' ∪{k} or (a2) {X_j}_j ∈ J' Y * (b1) B, {X_j}_j ∈ J' ∪{k}{A_i}_i ∈ I' or (b2) {X_j}_j ∈ J' ∪{k} Y Which gives exactly the statement of the lemma. If GBY is derivable in 2 it is also derivable in 4. We prove the more general statement: if GBY is derivable in 2 then for any B' such that B' B in classical logic, GB'Y is also derivable in 4. The proof proceeds by induction on the derivation of GBY in 2. * If GBY is derived by the rule then B is derivable in LK, so B' is also derivable in LK and GB'Y can be derived in 4 by . * If GBY is derived by the rule then Y is derivable in LK, so GB'Y can be derived in 4 by . * Suppose GBY is derived by the rule 2 from the premises G'B ∧ AY and G'BY ∨ X for some pair (A,X) such that G = G' ∪ (A,X). By the inductive hypotheses G'B' ∧ AY and G'B' ∧ XY ∨ X are derivable in 4 (since from B' B follows that B' ∧ A B ∧ A and B' ∧ X B classically). We then can derive GB'Y in 4 from these premises by 4. The rules , and are admissible in the calculus 4. Analogous to the proof <ref> of  <ref>. By the characterization lemma ( <ref>) the difference is that the output part {X_j}_j ∈ J appear on the left of the first sequent. This does not affect the proof of admissibility of and — weakening and contraction from LK can be applied on the left where the additional formulae appear the same way as in other parts to reduce the sequents to the required form. For rule admissibility there is one additional LK cut that may be required in one of the cases. 
Specifically to prove admissibility of following the proof <ref> of Lemma <ref> we have to prove the following characterization statement for any partition (I_1, J_1) ∈({1,…,n}) and partition (I_2, J_2) ∈({1,…,m}). (∗) either B, {X_j}_j ∈ J_1, {W_j}_j ∈ J_2{A_i}_i ∈ I_1, {D_i}_i ∈ I_2 or {X_j}_j ∈ J_1, {W_j}_j ∈ J_2 Y is derivable in LK. Again, we first get the following statement by applying characterization lemma to the derivation of (C,Z), GBY for the partition (I_1 ∪{n+1}, J_1): (1) either B, {X_j}_j ∈ J_1{A_i}_i ∈ I_1, C or {X_j}_j ∈ J_1 Y is derivable in LK. Second, we get the following statement by applying the characterization lemma to the derivation of (C,Z), GBY for the partition (I_1, J_1 ∪{n+1}): (2) either B, Z, {X_j}_j ∈ J_1{A_i}_i ∈ I_1 or Z, {X_j}_j ∈ J_1 Y is derivable in LK. Third, we get the following statement by applying the characterization lemma to the derivation of G'CZ for the partition (I_2, J_2): (3) either C, {W_j}_j ∈ J_2{D_i}_i ∈ I_2 or {W_j}_j ∈ J_2 Z is derivable in LK. Now we show that for any possible choice of derivable sequent (left or right) in (1), (2) and (3), we can always derive one of the sequents of (∗) (either by weakening or by cut in LK): * If the right sequent of (1) is derivable, then the right sequent of (∗) is derivable by weakening in LK. * If the left sequent of (1) is derivable and the left sequent of (3) is derivable, we can cut them using the LK cut and get exactly the left sequent of (∗). * If the right sequent of (3) is derivable and the left sequent of (2) is derivable, we can cut them using the LK cut and weaken the result to get the left sequent of (∗). * If the right sequent of (3) is derivable and the right sequent of (2) is derivable, we can cut them using the LK cut and get exactly the right sequent of (∗). Thus, we have proved (∗) for all possible cases. [<ref>] GBY is derivable in 4 iff (B, Y) is derivable from the pairs in G in 4. Analogous to the proof <ref> of the  <ref>. Completeness. We need to check the derivability of all rules of 4 in 4. While this can be done directly, we also can use the derivability of all axioms 2 in 2, that was proven in the proof <ref> of  <ref>, and the additional simple fact that anything derivable in 2 is also derivable in 4 (lemma <ref>). Then the only rule of 4 left to check is the peculiar rule CT. Its derivation in 4 is given below. A ∧ (A ∧ X) ∧ A 1[]A ∧ (A ∧ X) ∧ AYA ∧ (A ∧ X) ∧ X 1[]A ∧ (A ∧ X) ∧ XY ∨ X2[4](A,X)A ∧ (A ∧ X)Y Y ∨ Y1[](A, X)A ∧ YY ∨ Y2[4](A, X) (A ∧ X, Y)AY Then this derivations of 4 rules are combined using structural rules to build a derivation in 4 analogously to the proof <ref> of  <ref>. Admissibility of structural rules in 4 is given by  <ref>. Soundness. Analogous to the proof <ref> of  <ref>. The soundness of the rule 4 in 4 is given by the following derivation. (A, X)1[WO](A, Y ∨ X)1[SI](B ∧ A, Y ∨ X)(A, X)1[SI](B ∧ A, X)(B ∧ X, Y ∨ X)1[SI](B ∧ A ∧ X, Y ∨ X)2[CT](B ∧ A, Y ∨ X)2[AND](B ∧ A, (Y ∨ X) ∧ (Y ∨ X))1[WO](B ∧ A, Y ∨ (X ∧ X))1[WO](B ∧ A, Y)(B ∧ A, Y)2[OR]((B ∧ A) ∨ (B ∧ A), Y)1[SI](B ∧ (A ∨ A), Y)1[SI](B, Y) [<ref>] If (A,X), GBY is derivable in 3, then GB ∧ XY ∨ X is also derivable 3. By induction on the derivation of (A,X), GBY in 3. * Suppose (A,X), GBY is derived by the rule . Then B is derivable in LK, so B ∧ X is also derivable in LK and GB ∧ XY ∨ X can be also derived by the rule . * Suppose (A,X), GBY is derived by the rule . Then Y is derivable in LK, so Y ∨ X is also derivable in LK and GB ∧ XY ∨ X can be also derived by the rule . 
* If (A,X), GBY is derived by elimination of the pair (A,X) with the rule 3, than there is a derivation of the premise GB ∧ XY ∨ X. * Suppose (A,X), GBY is derived by elimination of some other pair (A',X') with the rule 3 (G = {(A',X')}∪ G' for some G'). The first premise is then B A', which infers B ∧ X A'. The second premise is (A,X), G'B ∧ X'Y ∨ X' to which we can apply the inductive hypothesis to get a derivation of G'B ∧ X' ∧ XY ∨ X' ∨ X. The required sequent can be derived as follows: B ∧ X A'G'B ∧ X' ∧ XY ∨ X' ∨ X2[3](A', X'), G'B ∧ XY ∨ X [<ref>] (A_1, X_1), …, (A_n, X_n)BY is derivable in 3 iff for all partitions (I, J) ∈({1, … , n}), at least one of the following is true: * B, {X_j}_j ∈ J A_i is derivable in LK for some i ∈ I, * B, {X_j}_j ∈ J is derivable in LK, * {X_j}_j ∈ J Y is derivable in LK. Left to Right. By  <ref>, for any partition (I, J) ∈({1, … , n}) derivability of (A_1, X_1), …, (A_n, X_n)BY infers derivability of { (A_i, X_i) | i ∈ I }B ∧⋀_j ∈ J X_jY ∨⋁_j ∈ J X_j (we need to apply the lemma for all pairs { (A_j, X_j) | j ∈ J } one by one). Since there exists a derivation for the latter sequent, there exists the first applied rule in this derivation (with the LK-premise also derivable), and this fact infers the characterization statement for the partition (I,J): * if the first rule is 3 eliminating one of the pairs from { (A_i, X_i) | i ∈ I } on the left then for the index k ∈ I of this pair B ∧⋀_j ∈ J X_j A_k and therefore B, { X_j }_j ∈ J A_k are derivable in LK, * if the first rule is then B ∧⋀_j ∈ J X_j and therefore B, { X_j }_j ∈ J are derivable in LK. * if the first rule is then Y ∨⋁_j ∈ J X_j and therefore { X_j }_j ∈ J Y are derivable in LK. Right to Left. As was already noted, the characterization statement encodes the fact that the sequents of the form { (A_i, X_i) | i ∈ I }B ∧⋀_j ∈ J X_jY ∨⋁_j ∈ J X_j can not be stuck. The initial sequent has this form (for (I,J) = ({1,…,n}, ∅)) and all I/O sequents that may appear in its derivation also have this form (if we take a sequent of this form for some partition (I,J) and eliminate one of the pairs (A_k, X_k) for k ∈ I, the I/O sequent in the premise will have this form for the partition (I ∖{k}, J ∪{k})). Therefore it is possible to construct a derivation for the initial sequent: there always will be a rule to apply by the assumption (with LK-premise derivable), and the process will terminate eventually since the number of the pairs on the left decreases. The rules , and are admissible for the calculi 1 and 3. Analogous to the proof of Lemma <ref> for the calculus 2 and its analogue for the calculus 4. The difference is that the characterization lemma in the case of 1 and 3 does not have only two, but (|I| + 2) LK-sequents one of which should be derivable for each partition (I,J). However the form of these sequents is the same, so the structure of the proof does not change. For admissibility of weakening and contraction the modification is minimal: instead of applying the LK rules of weakening and contraction to add/duplicate the formula A (for weakened/contracted pair (A,X)) we add/duplicate the whole separate sequent that has A on the right. Since the characterization statement requires at least one of the sequents to be derivable, addition of a new sequent or duplication of the existing sequent results in a statement that is weaker (or equivalent), as needed. 
We will now show how LK cuts (and weakenings) should be applied to establish cut admissibility in 3 (case of 1 is exactly the same, but the sequents are simpler, since they do not have additional outputs on the left). Using the characterization lemma <ref>, for arbitrary partitions (I_1 ,J_1) ∈({1,…,n}) and (I_2 ,J_2) ∈({1,…,m}) we need to prove the following characterization statement (∗) at least one of the following (|I_1| + |I_2| + 2) sequents should be derivable: (∗1) B, {X_j}_j ∈ J_1, {W_j}_j ∈ J_2 A_i for some i ∈ I_1 (∗2) B, {X_j}_j ∈ J_1, {W_j}_j ∈ J_2 D_l for some l ∈ I_2 (∗3) B, {X_j}_j ∈ J_1, {W_j}_j ∈ J_2 (∗4) {X_j}_j ∈ J_1, {W_j}_j ∈ J_2 Y Again, we first apply the characterization lemma to derivability of (C,Z), GBY for the partition (I_1 ∪{n+1}, J_1) and get the following statement (a) at least one of the following (|I_1| + 3) sequents should be derivable: (a1) B, {X_j}_j ∈ J_1 A_i for some i ∈ I_1 (a2) B, {X_j}_j ∈ J_1 C (a3) B, {X_j}_j ∈ J_1 (a4) {X_j}_j ∈ J_1 Y Second, we get the following statement by applying the characterization lemma to derivability (C,Z), GBY for the partition (I_1, J_1 ∪{n+1}): (b) at least one of the following (|I_1| + 2) sequents should be derivable: (b1) B, {X_j}_j ∈ J_1, Z A_i for some i ∈ I_1 (b2) B, {X_j}_j ∈ J_1, Z (b3) {X_j}_j ∈ J_1, Z Y Third, we get the following statement by applying the characterization lemma to derivability G'CZ for the partition (I_2, J_2): (c) at least one of the following (|I_2| + 2) sequents should be derivable: (c1) C, {W_j}_j ∈ J_2 D_l for some l ∈ I_2 (c2) C, {W_j}_j ∈ J_2 (c3) {W_j}_j ∈ J_2 Z Now we show that for any possible choice of derivable sequent in (1), (2) and (3), we can always derive one of the sequents of (∗) (either by weakening or cut in LK): * If one of the sequents in (a1) is derivable, one of sequents (∗1) is derivable by weakening. * If the sequent (a3) is derivable, the sequent (∗3) is derivable by weakening. * If the sequent (a4) is derivable, the sequent (∗4) is derivable by weakening. * If the sequent (a2) and one of the sequents (c1) are derivable we can use the LK cut on them and get one of the sequents (∗2). * If the sequent (a2) and the sequent (c2) are derivable we apply the LK cut to them and get the sequent (∗3). * If the sequent (c3) and one of the sequents (b1) are derivable we use the LK cut and get one of the sequents (∗1). * If the sequent (c3) and the sequent (b2) are derivable we can use the LK cut and get the sequent (∗3). * If the sequent (c3) and the sequent (b3) are derivable using the LK cut we get the sequent (∗4). Thus, we have proved that one of the sequents (∗) are derivable for all possible cases. [<ref>] For both k ∈{1,3}, GBY is derivable in k iff the pair (B, Y) is derivable from the pairs in G in k. Analogous to the proofs of Theoremes <ref> and <ref> for the calculi 2 and 4. Completeness We need to check that all I/O rules of 1 (plus adequacy of derivability (A,X)AX) are derivable in 1 and all I/O rules of 3 are derivable in 3; combining instances of these rules via (admissible) structural rules is the same as in the proof of completeness of 2. Below are derivation of I/O rules of 1 in 1. 
* Base case ((A,X) belongs to G) A A X ∨ X1[]G'AX ∨ X2[1](A,X), G'AX * TOP ⊤1[]⊤⊤ * BOT 1[] * WO (the blue LK-sequent is derivable due to the side condition that X Y classically) B B Y ∨ X1[]BY ∨ X2[1](B,X)BY * SI (the blue LK-sequent is derivable due to the side condition that B A classically) B A Y ∨ Y1[]BY ∨ Y2[1](A,Y)BY * AND B BB B (X_1 ∧ X_2) ∨ X_1 ∨ X_21[]B(X_1 ∧ X_2) ∨ X_1 ∨ X_22[1](B, X_2)B(X_1 ∧ X_2) ∨ X_12[1](B, X_1) (B, X_2)BX_1 X_2 These rules can be derived the same way for 3, or alternatively analogously to the  <ref> we can prove that every sequent derivable in 1 is also derivable in 3 and refer to the derivations in 1 above. The only rule left to prove is the derivability in 3 of the additional axiom CT of 3. The derivation is below. A AA ∧ X A ∧ X Y ∨ X ∨ Y1[]A ∧ X ∧ YY ∨ X ∨ Y2[3](A ∧ X, Y)A ∧ XY ∨ X2[3](A, X) (A ∧ X, Y)AY Then the derivations of the rules are combined using structural rules to build a derivation in 1 (or 3) analogously to the proof <ref> of  <ref>. Admissibility of structural rules in 4 is given by  <ref>. Soundness Analogous to the proof <ref> of  <ref>. To establish soundness of the rule 1 in 1, notice that if first premise B A is derivable, then B A (by soundness and completeness of LK), and therefore B B ∧ A. Then the following derivation provides the soundness for the rule 1 in 1. (A, X)1[WO](A, Y ∨ X)1[SI](B ∧ A, Y ∨ X)(B, Y ∨ X)1[SI](B ∧ A, Y ∨ X)2[AND](B ∧ A, (Y ∨ X) ∧ (Y ∨ X))1[WO](B ∧ A, Y ∨ (X ∧ X))1[WO](B ∧ A, Y)1[SI](B, Y) Analogously, for the soundness of the rule 3 in 3 we have the following derivation. (A, X)1[WO](A, Y ∨ X)1[SI](B ∧ A, Y ∨ X)(A, X)1[SI](B ∧ A, X)(B ∧ X, Y ∨ X)1[SI](B ∧ A ∧ X, Y ∨ X)2[CT](B ∧ A, Y ∨ X)2[AND](B ∧ A, (Y ∨ X) ∧ (Y ∨ X))1[WO](B ∧ A, Y ∨ (X ∧ X))1[WO](B ∧ A, Y)1[SI](B, Y) [<ref>] The sequent A_10…A_nnB_10…B_nn is derivable in LK iff for some k the sequent A_k B_k is derivable in LK. If for some k the sequent A_k B_k is derivable, A_kkB_kk is also derivable by variable renaming, then the first sequent is also derivable by weakening. If for all k the sequents A_k B_k are not derivable, we can see that the first sequent is also not derivable through semantics. Specifically, if for some k the sequent A_k B_k is not derivable, then there is a boolean variable assignment ρ: →{, ⊤} that falsifies it (i.e. satisfies A_k and falsifies B_k). Then the same assignment ρ∘_k^-1 of variables from _k satisfies A_kk and falsifies B_kk. Since all _i are disjoint we can merge all (n + 1) counter-model assignments together into a boolean assignment on ⋃_i=0^n_i, that will falsify the sequent A_10…A_nnB_10…B_nn by construction.
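To illustrate the role of the disjoint renamings in this last lemma, consider a small instance; the formulas are ours and serve only as an illustration. Take two components p ⊢ p ∨ q and r ⊢ r ∧ s whose variable sets {p, q} and {r, s} are already disjoint. Since p ⊢ p ∨ q is derivable in LK, the combined sequent p, r ⊢ p ∨ q, r ∧ s is derivable by weakening on both sides, in accordance with the lemma. If the components are instead p ⊢ q and r ⊢ s, neither is derivable, and the two counter-model assignments p ↦ ⊤, q ↦ ⊥ and r ↦ ⊤, s ↦ ⊥ merge, precisely because the variable sets are disjoint, into a single assignment falsifying the combined sequent p, r ⊢ q, s.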
http://arxiv.org/abs/2306.05824v1
20230609114927
BCS Critical Temperature on Half-Spaces
[ "Barbara Roos", "Robert Seiringer" ]
math-ph
[ "math-ph", "cond-mat.supr-con", "math.MP", "81Q10 (Primary) 82D55 (Secondary)" ]
1]Barbara [email protected] 1]Robert [email protected] [1]Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria BCS Critical Temperature on Half-Spaces [ July 31, 2023 ======================================= We study the BCS critical temperature on half-spaces in dimensions d=1,2,3 with Dirichlet or Neumann boundary conditions. We prove that the critical temperature on a half-space is strictly higher than on ℝ^d, at least at weak coupling in d=1,2 and weak coupling and small chemical potential in d=3. Furthermore, we show that the relative shift in critical temperature vanishes in the weak coupling limit. § INTRODUCTION AND RESULTS We study the effect of a boundary on the critical temperature of a superconductor in the Bardeen-Cooper-Schrieffer model. It was recently observed <cit.> that the presence of a boundary may increase the critical temperature. For a one-dimensional system with δ-interaction, a rigorous mathematical justification was given in <cit.>. Here, we generalize this result to generic interactions and higher dimensions. We compare the half infinite superconductor with shape Ω_1=(0,∞) ×^d-1 to the superconductor on Ω_0=^d in dimensions d=1,2,3. We impose either Dirichlet or Neumann boundary conditions, and prove that in the presence of a boundary the critical temperature can increase. The critical temperature can be determined from the spectrum of the two-body operator H_T^Ω= -Δ_x-Δ_y-2μ/tanh(-Δ_x-μ/2T)+tanh(-Δ_y-μ/2T)-λ V(x-y) acting in L_ sym^2(Ω×Ω)={ψ∈ L^2(Ω×Ω) |ψ(x,y)=ψ(y,x) for all x,y∈Ω} with appropriate boundary conditions <cit.>. Here, Δ denotes the Dirichlet or Neumann Laplacian on Ω and the subscript indicates on which variable it acts. Furthermore, T denotes the temperature, μ is the chemical potential, V is the interaction and λ is the coupling constant. The first term in H_T^Ω is defined through functional calculus. Importantly, the system is superconducting if infσ (H_T^Ω) <0. For translation invariant systems, i.e. Ω=^d, it was shown in <cit.> that superconductivity is equivalent to infσ (H_T^Ω) <0. In this case, there is a unique critical temperature T_c determined by infσ(H_T_c^Ω) = 0 which separates the superconducting and the normal phase. The critical temperatures T_c^0 and T_c^1 for Ω=^d and Ω=Ω_1, respectively, are defined as T_c^j(λ):= inf{T∈ (0,∞) |infσ(H_T^Ω_j)≥ 0}. In Lemma <ref> we prove that the infσ(H_T^Ω_1) ≤infσ(H_T^Ω_0). Therefore, T_c^1(λ)≥ T_c^0(λ). The main part is to show that the inequality is strict. Our strategy involves proving infσ(H_T_c^0(λ)^Ω_1)<0 using the variational principle. The idea is to construct a trial state involving the ground state of H_T_c^0(λ)^Ω_0. However, H_T^Ω_0 is translation invariant in the center of mass coordinate and thus has purely essential spectrum. To obtain a ground state eigenfunction, we remove the translation invariant directions, and instead consider the reduced operator H_T^0=-Δ-μ/tanh(-Δ-μ/2T)-λ V(r) acting in L_s^2(^d), where L_ s^2 (Ω)={ψ∈ L^2(Ω) |ψ(r)=ψ(-r)} (c.f. Lemma <ref>). Our trial state hence involves the ground state of H_T_c^0(λ)^0. In the weak coupling limit, λ→ 0, we can compute the asymptotic form of this ground state provided that μ>0 and the operator 𝒱_μ:L_ s^2(^d-1)→ L_ s^2(^d-1) with integral kernel 𝒱_μ(p,q)=1/(2π)^d/2V(√(μ)(p-q)) has a non-degenerate eigenvalue e_μ = supσ(𝒱_μ)>0 at the top of its spectrum <cit.>. Here, V(p)=1/(2π)^d/2∫_^d V(r) e^-i p · r r denotes the Fourier transform of V. 
For d=1, since L_ s^2(^0) is a one-dimensional vector space, 𝒱_μ is just multiplication by the number e_μ=V(0)+V(μ)/2(2π)^1/2. We make the following assumptions on the interaction potential. Let d∈{1,2,3} and μ>0. Assume that * V∈ L^1(^d) ∩ L^p_d(^d), where p_d=1 for d=1, and p_d>d/2 for d∈{2,3}, * V is radial, V≢0, * |·| V ∈ L^1(^d), * V(0)>0, * e_μ=supσ(𝒱_μ) is a non-degenerate eigenvalue. The assumption V∈ L^1(^d) implies that V is continuous and bounded. The operator 𝒱_μ is thus Hilbert-Schmidt and in particular compact. Due to Assumption (<ref>) we have e_μ>0. This in turn implies that the critical temperature T_c^0(λ) for the system on ^d is positive for all λ>0 (<cit.> for d=3, and <cit.> for d=1,2). Furthermore, radiality of V and (<ref>) imply that the eigenfunction corresponding to e_μ must be rotation invariant, i.e. the constant function. Assumption (<ref>) is satisfied if V≥ 0 <cit.>. These assumptions suffice to observe boundary superconductivity in d=1,2. For d=3, we need one additional condition. Let j_d(r;μ):=1/(2π)^d/2∫_^d-1 e^i ω· r √(μ)ω. Define m_3^D/N(r;μ):=∫_(j_3(z_1,r_2,r_3;μ)^2 - | j_3(z_1,r_2,r_3;μ) ∓ j_3(r;μ) |^2 χ_|z_1|<|r_1|) z_1 ∓π/μ^1/2 j_3(r;μ)^2, where the indices D and N as well as the upper/lower signs correspond to Dirichlet/Neumann boundary conditions, respectively. Our main result is as follows: Let d∈{1,2,3}, μ>0 and let V satisfy <ref>. Assume either Dirichlet or Neumann boundary conditions. For d=3 additionally assume that ∫_^3 V(r) m_3^D/N(r;μ) r>0. Then there is a λ_1>0, such that for all 0<λ<λ_1, T_c^1(λ)>T_c^0(λ). For d=3 we prove that (<ref>) is satisfied for small enough chemical potential. Let d=3 and let V satisfy <ref>(<ref>)-(<ref>). For Dirichlet boundary conditions, additionally assume that |·|^2 V ∈ L^1(^3) and ∫_^3 V(r) r^2 r >0. Then there is a μ_0>0 such that for all 0<μ<μ_0, ∫_^3 V(r) m_3^D/N(r;μ) r>0. In particular, if V additionally satisfies <ref>(<ref>) for small μ (e.g. if V≥ 0), then for small μ there is a λ_1(μ)>0 such that T_c^1(λ)>T_c^0(λ) for 0<λ<λ_1(μ). Numerical evaluation of m_3^D suggests that m_3^D≥ 0 (see Section <ref>, in particular Figure <ref>). Hence, for Dirichlet boundary conditions (<ref>) appears to hold under the additional assumption that V≥ 0. We therefore expect that for Dirichlet boundary conditions also in 3 dimensions boundary superconductivity occurs for all values of μ. There is no proof so far, however. One may wonder why in d=1,2 no condition like (<ref>) is needed. Actually, in d=1,2 the analogous condition is always satisfied if V(0)>0. The reason is that if one defines m_d^D/N(r;μ) by replacing j_3 by j_d in (<ref>), the first term diverges and m_d^D/N(r;μ)=+∞. Our second main result is that the relative shift in critical temperature vanishes as λ→0. This generalizes the corresponding result for d=1 with contact interaction in <cit.>. Let d∈{1,2,3}, μ>0 and let V satisfy <ref> and V≥ 0. Then lim_λ→0T_c^1(λ)-T_c^0(λ)/T_c^0(λ)=0. We expect that the additional assumption V≥ 0 in Theorem <ref> is not necessary; it is required in our proof, however. The rest of the paper is organized as follows. In Section <ref> we prove the Lemmas mentioned in the introduction. In Section <ref> we use the Birman-Schwinger principle to study the ground state of H_T_c^0(λ)^0. Section <ref> contains the proof of Theorem <ref>. Section <ref> discusses the conditions under which (<ref>) holds and in particular contains the proof Theorem <ref>. 
In Section <ref> we study the relative temperature shift and prove Theorem <ref>. Section <ref> contains the proof of auxiliary Lemmas from Section <ref>. § PRELIMINARIES The following functions will occur frequently K_T,μ(p,q):=p^2 + q^2 - 2μ/tanh(p^2-μ/2T)+tanh(q^2-μ/2T) and B_T,μ(p,q):=1/K_T,μ(p+q,p-q). We will suppress the subscript μ and write K_T,B_T when the μ-dependence is not relevant. The following estimate <cit.> will prove useful. For every T_0>0 there is a constant C_1(T_0,μ)>0 such that for T>T_0, C_1(T+p^2+q^2)≤ K_T(p,q). For every T>0 there is a constant C_2(T,μ)>0 such that K_T(p,q)≤ C_2(p^2+q^2+1). The minimal value of K_T is 2T. Since |tanh(x)|<1, we have for all p,q∈^d and T≥ 0 B_T(p,q)≤1/max{|p^2+q^2-μ|,2T} and B_T(p,q)χ_p^2+q^2>2μ>0≤C(μ)/1+p^2+q^2, where C(μ) depends only on μ. Assumption <ref>(<ref>) guarantees that V is infinitesimally form bounded with respect to -Δ_x-Δ_y <cit.>. By Lemma <ref>, H_T^Ω defines a self-adjoint operator via the KLMN theorem. Furthermore, H_T^Ω becomes positive for T large enough and hence the critical temperatures are finite. Let K_T^Ω be the kinetic term in H_T^Ω. The corresponding quadratic form acts as ⟨ψ, K_T^Ωψ⟩ = ∫_Ω^4ψ(x,y)K_T^Ω(x,y;x',y') ψ(x',y') x y x' y' where K_T^Ω(x,y;x',y') is the distribution K_T^Ω(x,y;x',y')=∫_^2dF_Ω(x,p) F_Ω(y,q)K_T(p,q) F_Ω(x',p) F_Ω(y',q) p q, with F_^d(x,p)=e^-i p · x/(2π)^d/2 and F_Ω_1(x,p)=(e^-i p_1 x_1∓ e^i p_1 x_1) e^-i p̃·x̃/2^1/2(2π)^d/2, where the -/+ sign corresponds to Dirichlet and Neumann boundary conditions, respectively. Here, x̃ denotes the vector containing all but the first component of x. (In the case d=1, x̃ is empty and can be omitted.) Let T,λ>0, d∈{1,2,3}, and let V satisfy <ref>(<ref>). Then infσ(H_T^Ω_1)≤infσ(H_T^Ω_0). The following Lemma shows that we may use H_T^0 instead of H_T^Ω_0 to compute T_c^0(λ). Let T,λ>0, d∈{1,2,3}, and let V satisfy <ref>(<ref>). Then infσ(H_T^Ω_0) = infσ(H_T^0). The essential spectrum of H_T^0 satisfies infσ_ ess(H_T^0)=2T (see e.g. <cit.>). Due to continuity of infσ(H_T^0) in T (see Lemma <ref>), infσ(H_T_c^0(λ)^0)=0. In particular, zero is an eigenvalue of H_T_c^0(λ)^0. §.§ Proof of Lemma <ref> Let S_l be the shift to the right by l in the first component, i.e. S_l ψ (x,y)= ψ(((x_1-l),x̃), (y_1-l,ỹ)). Let ψ be a compactly supported function in H_ sym^1(^2d), the Sobolev space restricted to functions satisfying ψ(x,y)=ψ(y,x). For l big enough, S_l ψ is supported on half-space and satisfies both Dirichlet and Neumann boundary conditions. The goal is to prove that lim_l→∞⟨ S_l ψ, H^Ω_1_T S_l ψ⟩ = ⟨ψ, H^Ω_0_T ψ⟩. Then, since compactly supported functions are dense in H_ sym^1(^2d), the claim follows. Note that ⟨ S_l ψ,V S_l ψ⟩ = ⟨ψ,V ψ⟩. Furthermore, using symmetry of K_T in p_1 and q_1 one obtains ⟨ S_l ψ,K_T^Ω_1 S_l ψ⟩ =∫_^2dψ(p,q)K_T(p,q)[ψ(p,q)∓ψ((-p_1,p̃),q) e^i 2l p_1∓ψ(p,(-q_1,q̃))e^i 2l q_1 +ψ((-p_1,p̃),(-q_1,q̃))e^i 2l (p_1+q_1)] p q for l big enough such that ψ is supported on the half-space. The first term is exactly ⟨ψ,K_T^Ω_0ψ⟩. Note that by the Schwarz inequality and Lemma <ref>, the function (p,q)↦ψ(p,q)K_T(p,q)ψ((-p_1,p̃),q) is in L^1(^2d) since ψ∈ H^1(^2d). By the Riemann-Lebesgue Lemma, the second term in (<ref>) vanishes for l→∞. By the same argument, also the remaining terms vanish in the limit. §.§ Proof of Lemma <ref> First, we prove the following inequality. For all x,y∈ we have x+y/tanh(x)+tanh(y)≥1/2(x/tanh(x)+y/tanh(y)) Suppose | x |≠| y |. Without loss of generality we may assume x>| y |. 
Since x/tanh x≥y/tanh y, x/2 tanh xtanh x-tanh y/tanh x + tanh y≥y/2 tanh ytanh x-tanh y/tanh x + tanh y This inequality is equivalent to (<ref>), as can be seen using tanh x-tanh y/tanh x + tanh y=2tanh x/tanh x + tanh y-1=1-2tanh y/tanh x + tanh y on the left and right side, respectively. By continuity, (<ref>) also holds in the case | x | = | y |. Let U denote the unitary transform Uψ(r,z)=1/2^d/2ψ((r+z)/2,(z-r)/2) for ψ∈ L^2(^2d). By Lemma <ref> we have UH_T^Ω_0U^†= -( ∇_r+∇_z)^2-(∇_r-∇_z)^2-2μ/tanh(-( ∇_r+∇_z)^2-μ/2T)+tanh(-( ∇_r-∇_z)^2-μ/2T)+V(r) ≥1/2(-( ∇_r+∇_z)^2-μ/tanh(-( ∇_r+∇_z)^2-μ/2T)+V(r))+ 1/2(-( ∇_r-∇_z)^2-μ/tanh(-( ∇_r-∇_z)^2-μ/2T)+V(r)) Both summands are unitarily equivalent to 1/2H_T^0⊗, where acts on L^2(^d). Therefore, infσ(H_T^Ω_0)≥infσ(H_T^0). For the opposite inequality let f∈ H^1(^d) with f(r)=f(-r) and ψ_ϵ(r,z)=e^-ϵ∑_j=1^d| z_j | f(r). Note that ‖ψ_‖^2_2=1/^d‖ f ‖_2^2. Since the Fourier transform of e^-ϵ| r_1 | in L^2() is √(2/π)ϵ/ϵ^2+p_1^2, we have ψ_(p,q)= 2^d/2/π^d/2∏_j=1^d ϵ/(ϵ^2+p_j^2)f(q). Therefore, ⟨ψ_ϵ| UH_T^Ω_0U^†ψ_ϵ⟩/‖ψ_ϵ‖^2=2^d/π^d ‖ f‖^2∫_^2d K_T(p+q,p-q)∏_j=1^d ϵ^3/(ϵ^2+p_j^2)^2 |f(q)|^2 p q =2^d/π^d ‖ f‖^2∫_^2d K_T( p+q, p-q)(∏_j=1^d 1/(1+p_j^2)^2) |f(q)|^2 p q, where we substituted p→ p in the second step. By Lemma <ref>, K_T( p+q, p-q)(∏_j=1^d 1/(1+p_j^2)^2) |f(q)|^2 ≤ C (1+d ^2+q^2)(∏_j=1^d 1/1+p_j^2) |f(q)|^2 , which is integrable. With ∫_1/(1+p_j^2)^2 p_j=π/2 it follows by dominated convergence that lim_ϵ→ 0⟨ψ_ϵ| UH_T^Ω_0U^†ψ_ϵ⟩/‖ψ_ϵ‖^2=⟨ f | H_T^0 f ⟩/‖ f‖^2. § GROUND STATE OF H_T_C^0(Λ)^0 To study the ground state of H_T_c^0(λ)^0, it is convenient to apply the Birman-Schwinger principle. For q∈^d let B_T(·,q) denote the operator on L^2(^d) which acts as multiplication by B_T(p,q) (defined in (<ref>)) in momentum space. The Birman-Schwinger operator corresponding to H_T^0 acts on L_s^2(^d) and is given by A_T^0=V^1/2 B_T(·,0) | V|^1/2, where we use the notation V^1/2(x)= (V(x))| V|^1/2(x). This operator is compact <cit.>. It follows from the Birman-Schwinger principle that supσ(A_T^0)=1/λ exactly for T=T_c^0(λ) and that the eigenvalue 0 of H_T_c^0(λ)^0 has the same multiplicity as the largest eigenvalue A_T_c^0(λ)^0. Let :L^1(^d)→ L^2(^d-1) act as ψ(ω)=ψ(√(μ)ω) and define O_μ=V^1/2^†| V|^1/2 on L_s^2(^d). Furthermore, let m_μ(T)=∫_0^√(2μ) B_T(t,0)t^d-1 t. Note that m_μ(T)= μ^d/2-1( ln(μ/T)+c_d)+o(1) for T→0, where c_d is a number depending only on d <cit.>. The operator O_μ captures the singularity of A_T^0 as T→ 0. The following has been proved in <cit.> for d=3 and in <cit.> for d=1,2. Let d∈{1,2,3} and μ>0 and assume <ref>. Then, sup_T∈(0,∞)‖ A_T^0 -m_μ(T) O_μ‖_ HS <∞, where ‖·‖_ HS denotes the Hilbert-Schmidt norm. Thus, the asymptotic behavior of supσ(A_T^0) depends on the largest eigenvalue of O_μ. Note that O_μ is isospectral to 𝒱_μ= V ^†, since both operators are compact. The eigenfunction of O_μ corresponding to the eigenvalue e_μ is Ψ(r):= V^1/2(r) j_d(r;μ), where j_d was defined in (<ref>). Note that j_1(r;μ)=√(2/π)cos(√(μ) r), j_2(r;μ)= J_0(√(μ)| r |), j_3(r;μ)=2/(2π)^1/2sin√(μ)| r |/√(μ)| r |, where J_0 is the Bessel function of order 0. Furthermore e_μ=1/(2π)^d/2∫_^d-1V(√(μ)((1,0,...,0)-p)) p = 1/|^d-1|∫_^d V(r)j_d(r;μ)^2 r The following asymptotics of T_c^0(λ) for λ→ 0 was computed in <cit.> and <cit.>. Let μ>0, d∈{1,2,3} and assume <ref>. Then lim_λ→ 0| e_μ m_μ(T_c^0(λ))-1/λ|=lim_λ→ 0| e_μμ^d/2-1ln(μ/T_c^0(λ))-1/λ|<∞. 
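Spelled out, the last statement says that e_μ μ^{d/2-1} ln(μ/T_c^0(λ)) = 1/λ + O(1) as λ → 0, or equivalently T_c^0(λ) = μ exp( -1/(λ e_μ μ^{d/2-1}) + O(1) ), the familiar exponentially small weak-coupling asymptotics of the critical temperature.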
Lemma <ref> does not only contain information about eigenvalues, but also about the corresponding eigenfunctions. In the following we prove that the eigenstate corresponding to the maximal eigenvalue of A_T^0 converges to Ψ. Let μ>0, d∈{1,2,3} and assume <ref>. * There is a λ_0>0 such that for λ≤λ_0, the largest eigenvalue of A_T_c^0(λ)^0 is non-degenerate. * Let λ≤λ_0 and let Ψ_T_c^0(λ) be the eigenvector of A_T_c^0(λ)^0 corresponding to the largest eigenvalue, normalized such that ‖Ψ_T_c^0(λ)‖_2=‖Ψ‖_2. Pick the phase of Ψ_T_c^0(λ) such that ⟨Ψ_T_c^0(λ),Ψ⟩≥ 0. Then lim_λ→ 01/λ‖Ψ-Ψ_T_c^0(λ)‖_2^2<∞ Let λ_0 be as in Lemma <ref>. By the Birman-Schwinger principle, the multiplicity of the largest eigenvalue of A_T_c^0(λ)^0 equals the multiplicity of the ground state of H^0_T_c^0(λ). Hence, H^0_T_c^0(λ) has a unique ground state for λ≤λ_0. For d≥ 2, since H^0_T_c^0(λ) is rotation invariant, uniqueness of the ground state implies that the ground state is radial. For values of λ such that the operator H_T_c^0(λ)^0 has a non-degenerate eigenvalue at the bottom of its spectrum let Φ_λ be the corresponding eigenfunction, with normalization and phase chosen such that Ψ_T_c^0(λ)=V^1/2Φ_λ. The following Lemma with regularity and convergence properties of Φ_λ will be useful. Let d∈{1,2,3}, μ>0 and assume <ref>. For all 0<λ<∞ such that H_T_c^0(λ)^0 has a non-degenerate ground state Φ_λ, we have * |Φ_λ(p)|≤C(λ)/1+p^2|V Φ_λ (p)|≤C(λ) ‖ V‖_1^1/2‖Ψ‖_2 /1+p^2 for some number C(λ) depending on λ, * p ↦Φ_λ(p) is continuous, * ‖Φ_λ‖_1<∞ and ‖Φ_λ‖_∞<∞. Furthermore, in the limit λ→ 0 * ‖Φ_λχ_p^2>2μ‖_1=O(λ), * ‖Φ_λ‖_1=O(1), * and in particular ‖Φ_λ‖_∞=O(1). In three dimensions, because of the additional condition (<ref>), we need to compute the limit of Φ_λ. Let d=3, μ>0 and assume <ref>. Then ‖Φ_λ -j_3 ‖_∞=O(λ^1/2) as λ→0. §.§ Proof of Lemma <ref> (i) The proof uses ideas from <cit.>. Let M_T=B_T(·,0)-m_μ(T) ^†. By Lemma <ref>, for λ small enough the operator 1-λ V^1/2 M_T | V|^1/2 is invertible for all T. Then we can write 1-λ A_T^0 = (1-λ V^1/2 M_T | V|^1/2)(1-λ m_μ(T) /1-λ V^1/2 M_T | V|^1/2V^1/2^†| V|^1/2) Recall that the largest eigenvalue of A_T_c^0(λ)^0 equals 1/λ. Hence, 1 is an eigenvalue of λ m_μ(T_c^0(λ)) /1-λ V^1/2 M_T_c^0(λ)| V|^1/2V^1/2^†| V|^1/2 and it has the same multiplicity as the eigenvalue 1/λ of A_T_c^0(λ)^0. This operator is isospectral to the self-adjoint operator | V|^1/2λ m_μ(T_c^0(λ)) /1-λ V^1/2 M_T_c^0(λ)| V|^1/2V^1/2^† . Note that the operator difference | V|^1/21 /1-λ V^1/2 M_T_c^0(λ)| V|^1/2V^1/2^† - 𝒱_μ=λ| V|^1/2 V^1/2 M_T_c^0(λ)| V|^1/2/1-λ V^1/2 M_T_c^0(λ)| V|^1/2V^1/2^† has operator norm of order O(λ) according to Lemma <ref>. By assumption, the largest eigenvalue of 𝒱_μ has multiplicity one, and λ m_μ(T_c^0(λ)) e_μ =1+O(λ) by Lemma <ref>. Let α<1 be the ratio between the second largest and the largest eigenvalue of 𝒱_μ. The second largest eigenvalue of λ m_μ(T_c^0(λ)) 𝒱_μ is of order α+O(λ). Therefore, the largest eigenvalue of (<ref>) must have multiplicity 1 for small enough λ, and it is of order 1+O(λ), whereas the rest of the spectrum lies below α+O(λ). Hence, 1 is the maximal eigenvalue of (<ref>) and it has multiplicity 1 for small enough λ. (ii) Note that Ψ_T_c^0(λ) is an eigenvector of (<ref>) with eigenvalue 1. Furthermore, let ψ_λ be a normalized eigenvector of (<ref>) with eigenvalue 1. Then Ψ̃_T_c^0(λ)=‖Ψ‖_2/‖1 /(1-λ V^1/2 M_T_c^0(λ)| V|^1/2) V^1/2^†ψ_λ‖_21 /1-λ V^1/2 M_T_c^0(λ)| V|^1/2V^1/2^†ψ_λ agrees with Ψ_T_c^0(λ) up to a constant phase. 
Since ‖Ψ_T_c^0(λ)-Ψ‖^2 ≤‖Ψ̃_T_c^0(λ)-Ψ‖^2, it suffices to prove that the latter is of order O(λ) for a suitable choice of phase for ψ_λ. Let ψ(p)=1/|^d-1|^1/2. This is the eigenfunction of 𝒱_μ corresponding to the maximal eigenvalue, and Ψ=V^1/2^†ψ. In particular, for all ϕ∈ L^2(^d-1), ⟨ϕ, 𝒱_μϕ⟩≤ e_μ |⟨ϕ, ψ⟩|^2+α e_μ (‖ϕ‖_2^2-|⟨ϕ, ψ⟩|^2) We choose the phase of ψ_λ such that ⟨ψ_λ, ψ⟩≥ 0. We shall prove that ‖ψ_λ-ψ‖_2^2=O(λ). We have by (<ref>) and (<ref>) O(λ)=⟨ψ_λ,(1-λ m_μ(T_c^0(λ)) 𝒱_μ)ψ_λ⟩ ≥ 1-λ m_μ(T_c^0(λ)) e_μ|⟨ψ_λ,ψ⟩|^2-λ m_μ(T_c^0(λ))α e_μ (1-|⟨ψ_λ,ψ⟩|^2 ) =O(λ)+(1-α) (1-|⟨ψ_λ,ψ⟩|^2 ) where we used Lemma <ref> for the last equality. In particular, 1-|⟨ψ_λ,ψ⟩|^2=O(λ). Hence, ‖ψ-ψ_λ‖_2^2 =2 (1-⟨ψ_λ, ψ⟩)= 2 1-⟨ψ_λ, ψ⟩^2/1+⟨ψ_λ, ψ⟩ =O(λ). Using Lemma <ref> and that V^1/2^†: L^2(^d-1)→ L^2(^d) is a bounded operator, and then (<ref>) we obtain 1 /1-λ V^1/2 M_T_c^0(λ)| V|^1/2V^1/2^†ψ_λ=V^1/2^†ψ_λ+O(λ)=V^1/2^†ψ+O(λ^1/2), where O(λ) here denotes a vector with L^2-norm of order O(λ). Furthermore, |‖ (1-λ V^1/2 M_T_c^0(λ)| V|^1/2)^-1 V^1/2^†ψ_λ‖_2-‖ V^1/2^†ψ‖_2 | ≤‖ (1-λ V^1/2 M_T_c^0(λ)| V|^1/2)^-1V^1/2^†ψ_λ-V^1/2^†ψ‖_2 =O(λ^1/2). In total, we have Ψ̃_T_c^0(λ)= ‖Ψ‖_2/‖ V^1/2^†ψ‖_2+O(λ^1/2)(V^1/2^†ψ+O(λ^1/2))=‖Ψ‖_2/‖ V^1/2^†ψ‖_2V^1/2^†ψ+O(λ^1/2) =Ψ+O(λ^1/2) §.§ Regularity and convergence of Φ_λ In this section, we prove Lemma <ref> and Lemma <ref>. The following standard results (see e.g. <cit.>) will be helpful. * Let V ∈ L^p(^d), where p=1 for d=1, p>1 for d=2 and p=3/2 for d=3. Let ψ∈ H^1(^d). Then V^1/2ψ∈ L^2. * If V∈ L^1(^d), and ψ∈ L^2(^d), then V^1/2ψ∈ L^1 and hence V^1/2ψ is continuous and bounded. * For 1≤ t, ‖V^1/2ψ‖_s ≤ C ‖ V ‖_t^1/2‖ψ‖_2, where s=2t/(t-1) and C is some constant independent of ψ and V. * Let f be a radial, measurable function on ^3 and p≥ 1. Then there is a constant C independent of f such that sup_p_1 ∈‖ f(p_1, ·) ‖_L^p( ^2) = ‖ f(0, ·) ‖_L^p( ^2)≤ C(‖ f ‖_L^p(^3)^p+‖ f ‖_L^∞(^3)^p)^1/p. For (<ref>) and (<ref>) see e.g. <cit.>. For (<ref>) let s≥ 2. Applying the Hausdorff-Young and Hölder inequality gives ‖V^1/2ψ‖_s ≤ C ‖ V^1/2ψ‖_p ≤ C ‖ V ‖_t^1/2‖ψ‖_2, where 1=1/p+1/s and 1=p/2t+p/2. Hence, s=2t/(t-1). For (<ref>) we write ‖ f(p_1, ·) ‖_L^p( ^2)^p = 2π∫_0^∞ |f(√(p_1^2+t^2))|^p t t =2π∫_|p_1|^∞ |f(s)|^p s s ≤‖ f(0, ·) ‖_L^p( ^2)^p ≤ 2π∫_0^1 |f(s)|^p s +2π∫_0^∞ |f(s)|^p s^2 s ≤ 2π‖ f ‖_∞^p+1/2‖ f ‖^p_p, where in the second step we substituted s=√(p_1^2+t^2) and in the third step we used s≤max{1,s^2}. The eigenvalue equation H_T_c^0(λ)^0 Φ_λ =0 implies that Φ_λ(p)=λ B_T_c^0(λ)(p,0)VΦ_λ(p). Part (<ref>) follows with Lemma <ref> and <ref>(<ref>) and the normalization ‖ V^1/2Φ_λ‖_2=‖Ψ‖_2. For part (<ref>), note that p ↦ B_T(p,0) is continuous for T>0. Since Φ_λ∈ H^1(^d), continuity of VΦ_λ follows by Lemma <ref>(<ref>) and (<ref>). Note that ‖Φ_λ‖_∞≤ (2π)^-d/2‖Φ_λ‖_1 =(2π)^-d/2 (‖Φ_λχ_p^2<2μ‖_1 + ‖Φ_λχ_p^2>2μ‖_1 ). In particular, the second part of (<ref>) and (<ref>) follow from the first part of (<ref>) and (<ref>), respectively. Using (<ref>) and ‖Ψ_T_c^0(λ)‖_2= ‖Ψ‖_2 we obtain ‖Φ_λχ_p^2<2μ‖_1 ≤λ m_μ(T_c^0(λ))|^d-1| ‖V^1/2Ψ_T_c^0(λ)‖_∞≤λ m_μ(T_c^0(λ))|^d-1| ‖ V‖_1^1/2‖Ψ‖_2, where m_μ was defined in (<ref>). In particular, for fixed λ, ‖Φ_λχ_p^2<2μ‖_1 <∞ and from Lemma <ref> it follows that ‖Φ_λχ_p^2<2μ‖_1 is bounded for λ→ 0. It only remains to prove that ‖Φ_λχ_p^2>2μ‖_1 is bounded for fixed λ and is O(λ) for λ→ 0. By (<ref>) B_T(p,0)χ_p^2>2μ≤ C/(1+p^2) for some C independent of T. 
Using (<ref>) and applying Hölder's inequality and Lemma <ref>(<ref>), ‖Φ_λχ_p^2>2μ‖_s ≤ C λ‖1/1+|·|^2‖_p ‖V^1/2Ψ_T_c^0(λ)‖_q ≤ C λ‖1/1+|·|^2‖_p ‖ V‖_t^1/2‖Ψ‖_2 where 1/s=1/p+1/q and q=2t/(t-1). For d=1 the claim follows with the choice t=p=1. For d=2, V∈ L^1+ for some 0<≤ 1. With the choice t=1+, p=2t/(t+1)>1 the claim follows. For d=3, we may choose 1 ≤ t≤ 3/2 and 3/2<p≤∞ which gives ‖Φ_λχ_p^2>2μ‖_s=O(λ) for all 6/5<s≤∞. We use a bootstrap argument to decrease s to one. Let us use the short notation B for multiplication with B_T(p,0) in momentum space and F:L^2(^d)→ L^2(^d) the Fourier transform. Using (<ref>) one can find by induction that Φ_λχ_p^2>2μ= λ^n (χ_p^2>2μ B F V F^†)^n Φ_λχ_p^2>2μ+∑_j=1^n λ^j(χ_p^2>2μ B F VF^† )^j Φ_λχ_p^2<2μ for any n∈_≥ 1. The strategy is to prove that applying χ_p^2>2μ B F VF^† to an L^r function will give a function in L^s∩ L^∞, where s/r< c <1 for some fixed constant c. For n large enough, the first term will be in L^1, while the second term is in L^1 for all n since Φ_λχ_p^2<2μ is L^1. Let V∈ L^1∩ L^3/2+(^3) for some 0<≤ 1/2 and let 1≤ r ≤ 3/2 and f ∈ L^r(^3). Let 2≥ q≥ r and 3/2<t≤∞. * Then, ‖χ_p^2>2μ B F V F^† f ‖_s ≤ C(r,q)‖1/1+|·|^2‖_t ‖ V ‖_q‖ f ‖_r where 1/s=1/t+1/r-1/q and C(r,q) is a finite number. (For s<1, ‖·‖_s has to be interpreted as ‖ f ‖_s=(∫_^3 |f(p)|^s p)^1/s.) * Let c=/(3+)(3+2)>0 and let r/(1+c)≤ s ≤∞. Then ‖χ_p^2>2μ B F V F^† f ‖_s ≤ C(r,s) ‖ f ‖_r, where C(r,s) is a finite number. (<ref>): Using (<ref>) we have |χ_p^2>2μ B F V F^† f (p)|≤C/1+p^2 | V * f (p)|. By the Young and Hausdorff-Young inequalities, the convolution satisfies ‖V * f ‖_p ≤ C(q,r) ‖ V ‖_q ‖ f ‖_r for some finite constant C(q,r) where 1/p=1/r-1/q. The claim follows from Hölder's inequality. (<ref>): For fixed r and choosing q,t in the range r ≤ q ≤ 3/2+ϵ and 3/2+/2 ≤ t ≤∞, s=(1/t+1/r-1/q)^-1 can take all values in [r/(1+c), ∞]. The claim follows immediately from (<ref>). Let n be the smallest integer such that 7/51/(1+c)^n≤ 1. To bound the first term in (<ref>), recall from (<ref>) that ‖Φ_λχ_p^2>2μ‖_s=O(λ) for s=7/5. We apply the second part of Lemma <ref> n times. After the jth step, we have ‖ (χ_p^2>2μ B F V F^†)^j Φ_λχ_p^2>2μ‖_s=O(λ) for s=7/51/(1+c)^j. In the nth step we pick s=1 and obtain ‖ (χ_p^2>2μ B F V F^†)^n Φ_λχ_p^2>2μ‖_1=O(λ). To bound the second term in (<ref>) recall that ‖Φ_λχ_p^2<2μ‖_1=O(1). Applying the first part of Lemma <ref> with r=1, t=q=3/2+ implies that ‖∑_j=1^n λ^j(χ_p^2>2μ B F V F^†)^j Φ_λχ_p^2<2μ‖_1 ≤∑_j=1^n λ^j(C(1,3/2+)‖1/1+|·|^2‖_3/2+‖ V ‖_3/2+)^j‖Φ_λχ_p^2<2μ‖_1 =O(λ). It follows that ‖Φ_λχ_p^2>2μ‖_1 is finite and O(λ) for d=3. Using the eigenvalue equation (<ref>), we write Φ_λ(r)= ∫_| p |>√(2μ)e^i p · r/(2π)^3/2Φ_λ(p) p +λ∫_| p |<√(2μ)e^i p · (r-r')-e^i √(μ)p/|p|· (r-r')/(2π)^3 B_T_c^0(λ)(p,0)|V|^1/2 (r') Ψ_T_c^0(λ)(r') p r' +λ∫_| p |<√(2μ)e^i √(μ)p/|p|· (r-r')/(2π)^3 B_T_c^0(λ)(p,0)|V|^1/2 (r') (Ψ_T_c^0(λ)(r')-V^1/2 (r') j_3(r')) p r' +λ∫_| p |<√(2μ)e^i √(μ)p/|p|· (r-r')/(2π)^3 B_T_c^0(λ)(p,0)V(r')j_3(r') p r' We prove that the first three terms have L^∞-norm of order O(λ^1/2). For the first term this follows from Lemma <ref>. For the second term in (<ref>), we proceed as in the proof of <cit.>. First, integrate over the angular variables ∫_| p |<√(2μ)[e^i p · (r-r')-e^i √(μ)p/|p|· (r-r')] B_T_c^0(λ)(p,0) p =∫_| p |<√(2μ)[sin| p|| r-r'|/| p|| r-r'|-sin√(μ)| r-r'|/√(μ)| r-r'|] B_T_c^0(λ)(| p|,0) |p|^2 | p|, where we slightly abuse notation writing B_T(|p|,0) for the radial function B_T(p,0). 
Bounding the absolute value of this using |sin x/x-sin y/y|<C |x-y|/|x+y| and B_T(p,0)≤ 1/|p^2-μ| gives (<ref>)≤ C∫_| p |<√(2μ)|p|^2/( | p| +√(μ))^2| p|=:C̃<∞. In particular, the second term in (<ref>) is bounded uniformly in r by λC̃/(2π)^3‖ V‖_1^1/2‖Ψ_T_c^0(λ)‖_2 which is of order O(λ). To bound the absolute value of the third term in (<ref>), we pull the absolute value into the integral, carry out the integration over p and use the Schwarz inequality in r'. This results in the bound λ|^2 |/(2π)^3 m_μ(T_c^0(λ)) ‖ V‖_1^1/2‖Ψ_T_c^0(λ) -Ψ‖_2. By Lemma <ref>, λ m_μ(T_c^0(λ)) is bounded and by Lemma <ref>, ‖Ψ_T_c^0(λ)- Ψ‖_2 decays like λ^1/2 for small λ. The fourth term in (<ref>) equals λ m_μ(T_c^0(λ)) ^† V j_3, where we carried out the radial part of the p integration. Recall that j_3=^† 1_^2 and 𝒱_μ 1_^2 =e_μ 1_^2, where 1_^2 is the constant function with value 1 on ^2. Hence, ^† V j_3=^†𝒱_μ 1_^2 = e_μ j_3 and the fourth term in (<ref>) equals λ m_μ(T_c^0(λ)) e_μ j_3. By Lemma <ref>, λ m_μ(T_c^0(λ)) e_μ= 1+O(λ) as λ→ 0. Thus, ‖Φ_λ -j_3 ‖_∞= |λ m_μ(T_c^0(λ)) e_μ-1 |‖ j_3 ‖_∞+O(λ)=O(λ). § PROOF OF THEOREM <REF> Instead of directly looking at H_T^Ω_1, we extend the domain to L^2(^2d) by extending the wavefunctions (anti)symmetrically across the boundary. Recall that x̃ denotes the vector containing all but the first component of x. The half-space operator H_T^Ω_1 with Dirichlet/Neumann boundary conditions is unitarily equivalent to H^ ext_T=K_T^^d-λ V(x-y) χ_| x_1-y_1 | <| x_1+y_1 |-λ V(x_1+y_1,x̃-ỹ) χ_| x_1+y_1 |<| x_1-y_1 | on L^2(^d×^d) restricted to functions antisymmetric/symmetric under swapping x_1↔ -x_1 and symmetric under exchange of x ↔ y. Next, we express H^ ext_T in relative and center of mass coordinates r=x-y and z=x+y. Let U be the unitary on L^2(^2d) given by Uψ(r,z)=2^-d/2ψ((r+z)/2,(z-r)/2). Then H^1_T:= U H^ ext_TU^†=UK_T^^dU^†-λ V(r) χ_| r_1 | <| z_1 |-λ V(z_1,r̃) χ_| z_1 |<| r_1 | on L^2(^2d) restricted to functions antisymmetric/symmetric under swapping r_1↔ z_1 and symmetric in r. The spectra of H^1_T and H_T^Ω_1 agree. For an upper bound on infσ(H_T^1), we restrict H_T^1 to zero momentum in the translation invariant center of mass directions and call the resulting operator H̃_T^1. The operator H̃_T^1 acts on {ψ∈ L^2(^d×)| ψ(r,z_1)=ψ(-r,z_1)=∓ψ((z_1,r̃),r_1)}. The kinetic part of H̃_T^1 reads K̃_T(r,z_1;r',z_1')=∫_^d+1e^i p(r-r')+iq_1(z_1-z_1')/(2π)^d+1 B_T^-1(p,(q_1,0̃)) p q_1. An important property is the continuity of infσ(H_T^1), proved in Section <ref>. Let d∈{1,2,3} and let V satisfy <ref>. Then infσ(H_T^0) and infσ(H_T^1) depend continuously on T for T>0. To prove Theorem <ref> we show that there is a λ_1>0 such that for λ≤λ_1, infσ(H_T_c^0(λ)^1)≤infσ(H̃_T_c^0(λ)^1)<0. For all T<T_c^0(λ) we have by Lemma <ref> that infσ(H_T^1)≤infσ(H_T^Ω_0)<0. By continuity (Lemma <ref>) there is an ϵ>0 such that infσ(H_T^1)<0 for all T<T_c^0(λ)+ϵ. Therefore, T_c^1(λ)> T_c^0(λ). To prove that infσ(H̃_T_c^0(λ)^1)<0 for small enough λ, we pick a suitable family of trial states ψ_(r,z_1). Let λ be such that H_T_c^0(λ)^0 has a unique (and hence radial) ground state Φ_λ. According to Remark <ref>, this is the case for 0<λ≤λ_0. We choose the trial states ψ_ϵ(r,z_1)=Φ_λ(r)e^-ϵ| z_1 |∓Φ_λ(z_1, r̃)e^-ϵ| r_1 |, with the - sign for Dirichlet and + for Neumann boundary conditions. Since Φ_λ(r)=Φ_λ(-r)=Φ_λ(-r_1,r̃), these trial states satisfy the symmetry constraints and lie in the form domain of H̃_T^1. The norm of ψ_ diverges as →0. 
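A quick way to see the rate behind this remark: since ∫_ℝ e^{-2ϵ|z_1|} dz_1 = 1/ϵ, each of the two summands of ψ_ϵ has squared L^2(ℝ^{d+1})-norm equal to ‖Φ_λ‖_2^2/ϵ, which sets the 1/ϵ scale of ‖ψ_ϵ‖_2^2.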
The trial state is the (anti-)symmetrization of Φ_λ(r)e^-ϵ| z_1 |, i.e. the projection of Φ_λ(r)e^-ϵ| z_1 | onto the domain of H̃_T^1. The intuition behind our choice is that, as we will see in Section <ref>, at weak coupling the Birman-Schwinger operator corresponding to H_T^Ω_1 approximately looks like A^0_T (defined in (<ref>)) on a restricted domain. This is why we want our trial state to look like the ground state Φ_λ of H_T^0 projected onto the domain of H̃_T^1. We shall prove that lim_→0⟨ψ_, H̃^1_T_c^0(λ)ψ_ϵ⟩ is negative for weak enough coupling. This is the content of the next two Lemmas, which are proved in Sections <ref> and <ref>, respectively. Let d∈{1,2,3}, μ>0 and assume <ref>. Let λ be such that H_T_c^0(λ)^0 has a unique ground state Φ_λ. Then, lim_ϵ→ 0⟨ψ_ϵ , H̃^1_T_c^0(λ)ψ_ϵ⟩ = -2λ(∫_^d+1 V(r) [ -|Φ_λ(r)∓Φ_λ(z_1, r̃) |^2 χ_| z_1 |< | r_1 | +|Φ_λ(z_1, r̃)| ^2 ] r z_1 ∓ 2π∫_^d-1Φ_λ(0,p̃)VΦ_λ(0,p̃) p̃), where the upper signs correspond to Dirichlet and the lower signs to Neumann boundary conditions. For d=1, the last term in (<ref>) is to be understood as ∓ 2πΦ_λ(0)VΦ_λ(0). For small λ we shall prove that the expression in the round bracket in (<ref>) is positive. Let d∈{1,2,3}, μ>0 and let V satisfy <ref>. Let λ_0 be as in Remark <ref>. Assume Dirichlet or Neumann boundary conditions. For d=3 assume that ∫_^3 V(r) m_3^D/N(r) r>0, where m_3^D/N was defined in (<ref>). Then there is a λ_0≥λ_1>0 such that for λ≤λ_1 the right hand side in (<ref>) is negative. Therefore, for small enough , ⟨ψ_, H̃^1_T_c^0(λ)ψ_ϵ⟩<0 proving that infσ(H̃_T_c^0(λ)^1)<0. This concludes the proof of Theorem <ref>. The additional condition ∫_^3 V(r) m_3^D/N(r) r>0 for d=3 is exactly the limit of the terms in the round brackets in (<ref>) for λ→0. Taking the limit amounts to replacing Φ_λ by j_3 (cf. Lemma <ref>). §.§ Proof of Lemma <ref> Let 0<T_0<T_1<∞. We claim that there exists a constant C_T_0,T_1 such that |K_T(p,q)-K_T'(p,q)|≤ C_T_0,T_1 |T-T'|(1+p^2+q^2) for all T_0≤ T,T'≤ T_1. To see this, compute / T K_T(p,q)=K_T(p,q)/2T^2(p^2-μ/2T)^2(p^2-μ)+(q^2-μ/2T)^2(q^2-μ)/tanh(p^2-μ/2T)+tanh(q^2-μ/2T). K_T can be estimated using Lemma <ref> and the remaining term is bounded. The kinetic part K_T^0 of H_T^0 acts as multiplication by K_T(p,0) in momentum space. For T_0<T,T'<T_1 and ψ in the Sobolev space H^1(^d), therefore ⟨ψ, (K_T^0-K_T'^0 )ψ⟩≤ C_T_0,T_1 |T-T'| ‖ψ‖_H^1(^d). Similarly, for T_0<T,T'<T_1 and ψ∈ H^1(^2d), ⟨ψ, (K_T^^d-K_T'^^d )ψ⟩≤ C_T_0,T_1 |T-T'| ‖ψ‖_H^1(^2d). Set D_0=H^1(^d) and D_1:={ψ∈ H^1(^2d) |ψ(x,y)=ψ(y,x)=∓ψ((-x_1,x̃) ,y)}, where -/+ corresponds to Dirichlet/Neumann boundary conditions, respectively. Let j∈{0,1} and ϵ>0. There is a family {ψ_T } of functions in D_j such that ‖ψ_T ‖_2=1 and ⟨ψ_T,H_T^j ψ_T ⟩≤infσ(H_T^j) +ϵ. We first argue that there is a constant C>0 such that for all T∈[T_0,T_1]: ‖ψ_T ‖_H^1<C. Recall that 2T lies in the essential spectrum of H_T^0. Together with Lemma <ref>, ⟨ψ_T,H_T^j ψ_T ⟩≤ 2T_1 +ϵ. Furthermore, by Lemma <ref>, the kinetic part of H_T^j is bounded below by some constant C_1(T_0)(1-Δ), where Δ denotes the Laplacian in all variables. Since the interaction is infinitesimally form bounded with respect to the Laplacian, there is a finite constant C_2(T_0), such that for all ψ∈ D_j with ‖ψ‖_2=1, ⟨ψ,H_T^j ψ⟩≥C_1(T_0)/2⟨ψ, (1-Δ)ψ⟩ -C_2(T_0)=C(T_0)/2‖ψ‖_H^1-C_2(T_0). In particular, ‖ψ_T‖_H^1≤2/C_1(T_0)( 2T_1 +ϵ+C_2(T_0))=:C. Let T,T'∈ [T_0,T_1]. Then infσ(H_T^j) +ϵ≥⟨ψ_T,H_T^j ψ_T ⟩=⟨ψ_T,H_T'^j ψ_T ⟩+⟨ψ_T,K_T-K_T'ψ_T ⟩ ≥infσ(H_T'^j) -|T-T'| C_T_0,T_1 C . 
Swapping the roles of T,T', we obtain infσ(H_T^j) -ϵ - |T-T'| C_T_0,T_1 C≤infσ(H_T'^j) ≤infσ(H_T^1) +ϵ +|T-T'| C_T_0,T_1 C and thus infσ(H_T^j) -ϵ≤lim_T'→ Tinfσ(H_T'^j) ≤infσ(H_T^j) +ϵ . Since ϵ was arbitrary, equality follows. Hence infσ(H_T^j) is continuous in T for T>0. §.§ Proof of Lemma <ref> The following technical lemma will be helpful for d=3. Let V,W ∈ L^1∩ L^3/2(^3), let W be radial and let ψ∈ L^2(^3). Then ∫_^5|V^1/2ψ(p) |1/1+p^2W(0,p̃-q̃) 1/1+p_1^2 +q̃^2|V^1/2ψ (p_1,q̃)| p q̃ ≤ C ‖W(0,·) ‖_L^3(^2)‖ V‖_3/2‖ψ‖_2^2<∞ for some constant C independent of V, W and ψ. By Lemma <ref>(<ref>), W(0, ·) ∈ L^3(^2)∩ L^∞(^2). By Young's inequality, the integral is bounded by C ‖W(0,·) ‖_L^3(^2)∫_|∫_^2|1/1+p^2V^1/2ψ(p) |^6/5p̃|^5/3 p_1 By Lemma <ref>(<ref>), ‖V^1/2ψ‖_6 ≤ C ‖ V ‖_3/2^1/2‖ψ‖_2. Applying Hölder's inequality in the p̃ variables, we obtain the bound C ‖W(0,·) ‖_L^3(^2)∫_|∫_^21/(1+p^2)^3/2p̃|^4/3|∫_^2 |V^1/2ψ(p)|^6p̃|^1/3 p_1 Applying Hölder's inequality in p_1, we further obtain C ‖W(0,·) ‖_L^3(^2)(∫_|∫_^21/(1+p^2)^3/2p̃|^2 p_1)^2/3‖V^1/2ψ‖_6^2 The remaining integral is finite. Plugging in the trial state and regrouping terms we obtain ⟨ψ_ϵ , H̃^1_T_c^0(λ)ψ_ϵ⟩ = 2∫_^2d+2 [ Φ_λ(r)e^-| z_1 |(K_T(r,z_1;r',z_1')-λ V(r)δ(r-r' )δ(z_1-z_1'))Φ_λ(r')e^-| z_1' | ∓Φ_λ(r)e^-| z_1 |(K_T(r,z_1;r',z_1')-λ V(r) δ(r-r' )δ(z_1-z_1'))e^-| r_1' |Φ_λ(z_1',r̃')] r z_1 r' z_1' +2∫_^d+1 [ λ V(r)χ_| z_1 |<| r_1 ||Φ_λ(r)|^2 e^-2| z_1 |∓Φ_λ(r)e^-| z_1 |λ V(r)χ_| z_1 |<| r_1 |e^-| r_1 |Φ_λ(z_1,r̃) + λ V(z_1, r̃)χ_| r_1 |<| z_1 ||Φ_λ(r)|^2 e^-2| z_1 |∓Φ_λ(r)e^-| z_1 |λ V(z_1, r̃)χ_| z_1 |>| r_1 |e^-| r_1 |Φ_λ(z_1,r̃) - λ V(z_1, r̃)|Φ_λ(r)|^2 e^-2| z_1 |±Φ_λ(r)e^-| z_1 |λ V(z_1, r̃)e^-| r_1 |Φ_λ(z_1,r̃) ] r z_1 We will prove that the first integral vanishes due to the eigenvalue equation H^0_T_c^0(λ)Φ_λ =0 as → 0. For the second integral in (<ref>), we will show that it is bounded as ϵ→ 0 and argue that it is possible to interchange limit and integration. The limit of the second integral is exactly the right hand side of (<ref>). The first two terms in the integrand of the second integral in (<ref>) can be bounded by λ‖Φ_λ‖_∞^2 |V(r)|χ_|z_1| <| r_1 |. This is an L^1 function, since |·| V∈ L^1 and ‖Φ_λ‖_∞< ∞ by Lemma <ref>. The same argument applies to the next two terms as well. For the fifth term in the second integral, we can interchange limit and integration by dominated convergence if ∫_^d+1| V(r)||Φ_λ(z_1, r̃) |^2 r z_1<∞. Observe that ∫_^d+1| V(r)||Φ_λ(z_1, r̃) |^2 r z_1 = (2π)^1-d/2∫_^2d-1Φ_λ(p)| V|(0,p̃ - q̃) Φ_λ(p_1,q̃) p q̃ According to Lemma <ref>(<ref>) the latter is bounded by C ∫_^2d-1|V^1/2Ψ_T_c^0(λ)(p) |1/1+p^2| V|(0,p̃-q̃) 1/1+p_1^2 +q̃^2|V^1/2Ψ_T_c^0(λ) (p_1,q̃)| p q̃ For d=1,2 we bound this by C ‖ V ‖_1^2 ‖Ψ‖_2^2 ∫_^2d-11/(1+p_1^2+p̃^2)(1+p_1^2+q̃^2) p q̃, which is finite. For d=3, (<ref>) is finite by Lemma <ref> since W=|V| is radial and in L^1∩ L^3/2. Hence, limit and integration can be interchanged for the fifth term in the second integral in (<ref>). For the last term in (<ref>) we have ∫_^d+1Φ_λ(r)e^-| z_1 | V(z_1,r̃)e^-| r_1 |Φ_λ(z_1,r̃) z_1 r =2/π∫_^d+1Φ_λ(p)/^2+q_1^2/^2+p_1^2VΦ_λ(q_1,p̃) p q_1 =2/π∫_^d+1Φ_λ( p_1,p̃)1/1+q_1^21/1+p_1^2VΦ_λ( q_1,p̃) p q_1. According to Lemma <ref>(<ref>) and Lemma <ref>(<ref>), the integrand is bounded by C(λ) /1+p̃^2‖ V‖_1 ‖Ψ‖_2^2/(1+q_1^2)(1+p_1^2). For d=1,2 this is integrable, so by dominated convergence and since ∫_1/1+x^2 x =π, this term converges to the last term in (<ref>). For d=3, the following result will be useful. 
Let λ,T,μ>0 and d=3 and let V satisfy <ref>. The functions f(p_1,q_1)=∫_^2Φ_λ(p)VΦ_λ(q_1,p̃) p̃ and g(p_1,q_1)= ∫_^2 B^-1_T(( p_1, p̃),( q_1,0̃)) Φ_λ( p_1, p̃)Φ_λ( q_1,p̃) p̃ are bounded and continuous. Its proof can be found after the end of the current proof. We write the term in (<ref>) as 2/π∫_^2f( p_1, q_1)/(1+q_1^2)(1+p_1^2) p_1 q_1. By Lemma <ref> we can exchange limit and integration by dominated convergence and (<ref>) converges to the last term in (<ref>). For the second summand in the first integral in (<ref>) we also want to argue using dominated convergence. The interaction term agrees with (<ref>). The kinetic term can be written as 4/π∫_^d+11/(1+q_1^2)(1+p_1^2)B^-1_T((ϵ p_1, p̃),( q_1,0̃)) Φ_λ(ϵ p_1, p̃) Φ_λ(ϵ q_1,p̃) p q_1 =4/π∫_^21/(1+q_1^2)(1+p_1^2) g(ϵ p_1, q_1) p_1 q_1 For d=3, we can apply dominated convergence according to Lemma <ref>. For d=1,2 note that by Lemma <ref> and Lemma <ref>, B^-1_T(p,(q_1,0̃)) |Φ_λ(p) ||Φ_λ(q_1, p̃)|≤ C_T,μ,λ1+p^2+q_1^2/(1+p^2)(1+q_1^2+p̃^2)‖ V‖_1 ‖Ψ‖_2^2 ≤ 2 C_T,μ,λ‖ V‖_1 ‖Ψ‖_2^2 /1+p̃^2 Therefore, the integrand is bounded by C ‖ V‖_1 ‖Ψ‖_2^2 /(1+q_1^2)(1+p_1^2)(1+p̃^2). For d=1,2 this is integrable and we can apply dominated convergence. We conclude that the limit of the second summand in the first integral in (<ref>) as ϵ→ 0 equals 4 π∫_^d-1( |Φ_λ(0,p̃) |^2/B_T((0,p̃),0)-λΦ_λ(0,p̃) V Φ_λ(0,p̃) ) p̃= 0 where we used that ∫_1/1+x^2 x =π and (<ref>). To see that the first summand in the first integral in (<ref>) vanishes as →0, we use (<ref>) to obtain 2/ϵλ∫_^d V(r) |Φ_λ(r)|^2 r = 2/∫_^d B_T^-1(p,0)|Φ_λ(p)|^2 p = 4/π∫_^d+1^2/(^2+q_1^2)^2 B_T^-1(p,0)|Φ_λ(p)|^2 p q_1. Hence, we need to prove that lim_ϵ→ 0∫_^d+1ϵ^2/(ϵ^2+q_1^2)^2 (B^-1_T(p,(q_1, 0̃))- B^-1_T(p,0))|Φ_λ(p) |^2 p q_1=0 We split the integration into two regions with | q_1 |>C_1 and | q_1 |<C_1, respectively. By Lemma <ref>, we have B^-1_T(p,q)≤ C_2 (1+p^2+q^2). Together with Φ_λ∈ H^1(^d) therefore ∫_^d+1, | q_1 |>C_1ϵ^2/(ϵ^2+q_1^2)^2| B^-1_T(p,(q_1,0̃))- B^-1_T(p,0)||Φ_λ(p) |^2 p q_1 ≤ 2C_2 ∫_^2, | q _1|>C_1^2(1+p^2+q_1^2)|Φ_λ(p) |^2 /q_1^4 p q_1 < C_3 ^2 ‖Φ_λ‖_H^1^2, which vanishes in the limit ϵ→ 0. For the case | q_1 |<C, the following Lemma is useful. Its proof can be found at the end of this Section. Let T,μ>0, d∈{1,2,3}. The function k(p,q):=1/| q|(B_T(p,q)- B_T(p,0)) is continuous at q=0 and satisfies k(p,0)=0 for all p∈^d. Furthermore, there is a constant C depending only on T,μ,d such that | k(p,q) | < C/1+p^2 for all p,q∈^d. Since B^-1_T(p,q)- B^-1_T(p,0)= -| q | k(p,q)/B_T(p,q) B_T(p,0), we have ∫_^d+1, | q_1 |<C_1ϵ^2/(ϵ^2+q_1^2)^2 ( B^-1_T(p,(q_1,0̃))- B^-1_T(p,0)) |Φ_λ(p) |^2 p q_1 =-∫_^d+1| q_1 |χ_| q_1 |<C_1/ϵ/(1+q_1^2)^2 k(p,(ϵ q_1,0̃))/B_T(p,(ϵ q_1,0̃)) B_T(p,0)|Φ_λ(p) |^2 p q_1 By Lemma <ref> and Lemma <ref>, we can bound the absolute value of the integrand by C | q_1 |χ_| q_1 |<C_1/ϵ/(1+q_1^2)^2 (1+p^2+^2 q_1^2) |Φ_λ(p) |^2 ≤ C | q_1 |/(1+q_1^2)^2 (1+p^2+C_1^2)|Φ_λ(p) |^2 The latter is integrable since Φ_λ∈ H^1(^d). Thus, by dominated convergence and since k(p,0)=0, the integral vanishes in the limit ϵ→ 0. For convenience, we introduce the notation D_f(p,q_1)=λ B_T(p,0) and D_g(p,q_1)=λ^2 B_T(p,0) B_T(p,(q_1,0̃))^-1B_T((q_1,p̃),0). For h∈{f,g}, D_f(p,q_1),D_g(p,q_1)≤C/1+p̃^2 by Lemma <ref> and (<ref>). Furthermore, h(p_1,q_1)=∫_^d-1VΦ_λ(p_1,p̃) D_h(p,q_1) VΦ_λ(q_1,p̃) p̃ using (<ref>). For h∈{f,g}, sup_p_1,q_1,w_1 ∈‖ D_h((p_1,·),q_1)VΦ_λ(w_1,·)‖_L^1(^2)≤sup_w_1 ∈‖C/1+|·|^2VΦ_λ(w_1,·)‖_L^1(^2)<∞. 
Using Hölder's inequality, ‖ D_h((p_1,·),q_1)VΦ_λ(w_1,·)‖_L^1(^2)≤‖C/1+|·|^2VΦ_λ(w_1,·)‖_L^1(^2) ≤1/(2π)^3/2∫_^2∫_^3C/1+p̃^2|V((w_1,p̃)-k)||Φ_λ(k)| k p̃ ≤ C ‖1/1+|·|^2‖_L^r(^2)∫_(∫_^2|V(w_1-k_1,p̃)|^s p̃)^1/s(∫_^2|Φ_λ(k)|k̃) k_1 ≤ C ‖1/1+|·|^2‖_L^r(^2)sup_k_1‖V(k_1,·)‖_s ‖Φ_λ‖_1 , where 1=1/r+1/s. For this to be finite we need r>1, i.e. s<∞. By Lemma <ref>(<ref>), sup_q_1‖V(q_1,·)‖_3 <∞. Furthermore ‖Φ_λ‖_1 is bounded by Lemma <ref>. The functions f and g are bounded, as can be seen using that ‖V Φ_λ‖_∞≤ C ‖ V‖_1^1/2‖Ψ_T_c^0(λ)‖_2 by Lemma <ref>(<ref>) and ‖Ψ_T_c^0(λ)‖_2=1, hence we get that for h∈{f,g} |h(p_1,q_1)|≤ C ‖ V‖_1^1/2sup_p_1,q_1‖ D_h((p_1,·),q_1)VΦ_λ(q_1,·)‖_L^1(^2), which is finite by Lemma <ref>. To see continuity, we write for h∈{f,g} |h(p_1+ϵ_1,q_1+ϵ_2)-h(p_1,q_1)|≤ [ | ∫_^2(VΦ_λ(p_1+_1,p̃)- V ϕ_T,λ(p)) D_h((p_1+_1,p̃),q_1+_2) VΦ_λ(q_1+_2,p̃)p̃| + |∫_^2VΦ_λ(p) D_h((p_1+_1,p̃),q_1+_2) (VΦ_λ(q_1+_2,p̃)-VΦ_λ(q_1,p̃)) p̃ k | + |∫_^2V Φ_λ(p) (D_h((p_1+_1,p̃),q_1+_2)-D_h(p,q_1))VΦ_λ(q_1,p̃) p̃| ] Observe that |V Φ_λ(p_1+_1,p̃)- V Φ_λ(p)| ≤1/(2π)^d/2∫_^d | e^i _1 r_1-1| |V(r)||Φ_λ(r)| r ≤_1 ‖Φ_λ‖_∞‖ |· | V ‖_1/(2π)^d/2 With Lemma <ref> and Lemma <ref>, we bound the first two terms in (<ref>) by C _1 and C_2, respectively. Hence they vanish as _1,_2→0. The absolute value of the integrand in the last term in (<ref>) is bounded by ‖V Φ_λ‖_∞2C/1+p̃^2VΦ_λ(q_1,p̃). By Lemma <ref>, this is an L^1 function. Hence, when taking the limit _1,_2→0, we are allowed to pull the limit into the integral by dominated convergence, showing that also the last term vanishes. Therefore, the functions f and g are continuous. This Lemma is a generalization of <cit.> and its proof follows the same ideas. For | q |>1, Lemma <ref> implies the bound | k(p,q) | < C/1+p^2. For | q |<1, we use the partial fraction expansion (see <cit.>) k(p,q)=2T∑_n∈| q | (2μ -q^2-2p^2+4 (p ·q/| q |)^2)- 4 i w_n p ·q/| q |/((p+q)^2-μ-iw_n)((p-q)^2-μ+iw_n)(p^2-μ-iw_n)(p^2-μ+iw_n) where w_n=(2n+1)π T. Continuity of k follows e.g. using the Weierstrass M-test. Noting that w_n=-w_-n-1, it is easy to see that k(p,0)=0. With the estimates sup_(p,q)∈^2d, | q |<1|| q | (2μ -q^2-2p^2+4 (p ·q/| q |)^2)/((p+q)^2-μ-iw_n)((p-q)^2-μ+iw_n)| ≤sup_(p,q)∈^2d, | q |<1| q | (2μ +q^2+6p^2) /√([(p+q)^2-μ]^2+w_0^2)√([(p-q)^2-μ]^2+w_0^2)=:c_1 <∞ and sup_(p,q)∈^2d, | q |<1|4 i w_n p/((p+q)^2-μ-iw_n)((p-q)^2-μ+iw_n)| ≤sup_(p,q)∈^2d, | q |<1 4| p|/√([( p + q)^2-μ]^2+w_0^2)=:c_2 <∞ one obtains | k(p,q) |≤ 2T(c_1+c_2)∑_n∈1/(p^2-μ)^2+w_n^2 Using that the summands are decreasing in n, we can estimate the sum by an integral | k(p,q) |≤4T(c_1+c_2)[ 1/(p^2-μ)^2+w_0^2 +∫_1/2^∞1/(p^2- μ)^2+4π^2T^2 x^2 x ] =4T(c_1+c_2) [ 1/(p^2-μ)^2+w_0^2 +arctan(| p^2- μ|/π T)/2π T | p^2- μ|] <C 1/1+p^2 for some constant C independent of p and q. §.§ Proof of Lemma <ref> Recall that Ψ_T_c^0(λ) =V^1/2Φ_λ with normalization ‖Ψ_T_c^0(λ)‖_2^2=‖Ψ‖_2^2=∫_^d V(r)j_d(r)^2 r, where j_d was defined in (<ref>). Recall from (<ref>) that -1/2λlim_ϵ→ 0⟨ψ_ϵ , H^1_T_c^0(λ),λψ_ϵ⟩ = ∫_^d+1 V(r) |Φ_λ(z_1, r̃)|^2 r z_1 - ∫_^d+1 V(r)|Φ_λ(z_1, r̃)∓Φ_λ(r) |^2 χ_| z_1 |< | r_1 | r z_1 ∓ 2π∫_^d-1Φ_λ(0,p̃)VΦ_λ(0,p̃) p̃ The claim follows, if we prove that the right hand side is positive in the limit λ→ 0. For d∈{1,2} we prove that the terms on the second line are bounded and the first term diverges as λ→0. For d=3 the first term is bounded too, so we need to compute the limit of all terms. The idea is that in the limit, one would like to replace Φ_λ by j_3 using Lemmas <ref> and <ref>. 
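For d=3 it is useful to record where this replacement leads: the limits of the three summands computed below are ∫_{ℝ^4} V(r) j_3(z_1,r̃;μ)^2 dr dz_1, -∫_{ℝ^4} V(r) | j_3(z_1,r̃;μ) ∓ j_3(r;μ) |^2 χ_{|z_1|<|r_1|} dr dz_1 and ∓ (π/μ^{1/2}) ∫_{ℝ^3} V(r) j_3(r;μ)^2 dr, and by Fubini and the definition (<ref>) of m_3^{D/N} their sum is exactly ∫_{ℝ^3} V(r) m_3^{D/N}(r;μ) dr, the quantity appearing in condition (<ref>).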
We consider each of the three summands in (<ref>) separately. Second term: The second term is bounded by 4‖ |· | V ‖_1 ‖Φ_λ‖_∞^2, which is bounded for small λ by Lemma <ref>. For d=3 we want to compute the limit. By Lemma <ref> the integrand is bounded by 8 |V(r)| ‖ j_3 ‖_∞^2 χ_|z_1|<|r_1| for λ small enough, which is integrable. By dominated convergence, the term thus converges to -∫_^4 V(r) | j_3(z_1,r̃) ∓ j_3(r) |^2 χ_|z_1|<|r_1| r z_1. Third term: Using (<ref>) the third term in (<ref>) equals ∓ 2πλ∫_^d-1|V^1/2Ψ_T_c^0(λ)(0,p̃)|^2 B_T_c^0(λ) ((0,p̃ ),0) p̃ For d=1, this is bounded by 2πλ B_T_c^0(λ) (0,0) ‖V^1/2Ψ_T_c^0(λ) ‖_∞^2. By Lemma <ref>(<ref>) and since sup_T B_T(0,0)=1/μ, this is O(λ) as λ→0. For d=2 we use (<ref>) to bound (<ref>) by 2πλ∫_|p̃|^2 <2μ B_T_c^0(λ)((0,p̃ ),0) p̃‖V^1/2Ψ_T_c^0(λ)‖_∞^2 +Cλ∫_|p̃|^2 >2μ1/1+p̃^2p̃‖V^1/2Ψ_T_c^0(λ) ‖_∞^2, where C is independent of λ. By Lemma <ref>(<ref>) ‖V^1/2Ψ_T_c^0(λ)‖_∞ is bounded as λ→0. The second term in (<ref>) thus vanishes as λ→ 0. For the first term, recall from (<ref>) that ∫_|p̃|^2 <2μ B_T_c^0,μ((0,p̃ ),0) p̃ = 2 π m_μ^d=2(T_c^0(λ)). By Lemma <ref> the first term is bounded for small λ. For d=3, we rewrite (<ref>) as ∓ 2πλ∫_p̃^2>2μ|V^1/2Ψ_T_c^0(λ)(0,p̃)|^2 B_T_c^0,μ ((0,p̃ ),0) p̃ ∓λ∫_p̃^2<2μ∫_^6V^1/2Ψ_T_c^0(λ)(x) e^ip̃·(x̃-ỹ)-e^i√(μ)p̃/|p̃|·(x̃-ỹ)/(2π)^2B_T_c^0,μ((0,p̃ ),0)V^1/2Ψ_T_c^0(λ)(y) x yp̃ ∓λ∫_p̃^2<2μ∫_^6(V^1/2Ψ_T_c^0(λ)(x)-Vj_3(x) )e^i√(μ)p̃/|p̃|·(x̃-ỹ)/(2π)^2B_T_c^0,μ((0,p̃ ),0) V^1/2Ψ_T_c^0(λ) (y) x yp̃ ∓λ∫_p̃^2<2μ∫_^6 V(x)j_3(x) e^i√(μ)p̃/|p̃|·(x̃-ỹ)/(2π)^2B_T_c^0,μ((0,p̃ ),0) (V^1/2Ψ_T_c^0(λ)(y)-Vj_3(y) ) x yp̃ ∓λ∫_p̃^2<2μ∫_^6 V(x)j_3(x) e^i√(μ)p̃/|p̃|·(x̃-ỹ)/(2π)^2B_T_c^0,μ ((0,p̃ ),0) V(y)j_3(y) x yp̃. We prove that the first four integrals vanish as λ→ 0 and compute the limit of the expression in the last line. Using (<ref>), Lemma <ref>(<ref>) and Ψ_T_c^0(λ)=V^1/2Φ_λ the first term in (<ref>) is bounded by C λ‖ V‖_1^1/2‖Ψ_T_c^0(λ)‖_2 ‖1/1+|·|^2VΦ_λ(0,·)‖_L^1(^2) where C is independent of λ. By (<ref>), ‖1/1+|·|^2VΦ_λ(0,·)‖_L^1(^2)≤‖1/1+|·|^2‖_L^3/2(^2)sup_k_1‖V(k_1,·)‖_3 ‖Φ_λ‖_1 By Lemma <ref>(<ref>), sup_k_1‖V(k_1,·)‖_3 <∞. Furthermore ‖Φ_λ‖_1 is bounded uniformly in λ by Lemma <ref>. In total, the first term in (<ref>) is O(λ) as λ→0. For the second line of (<ref>) we use that sup_λ>0sup_x̃,ỹ∈^2|∫_^2,p̃^2<2μe^ip̃·(x̃-ỹ)-e^i√(μ)p̃/|p̃|·(x̃-ỹ)/(2π)^3B_T_c^0(λ)((0,p̃ ),0)p̃| <∞, as was shown in the proof of <cit.>. Applying the Schwarz inequality, the second line is bounded by C λ‖ V ‖_1‖Ψ_T_c^0(λ)‖_2^2 for some constant C and vanishes for λ→0. We bound the third line of (<ref>) by λ/(2π)^2∫_^2,p̃^2<2μ∫_^6|(V^1/2Ψ_λ(x)-Vj_3(x) )| B_T_c^0(λ)((0,p̃ ),0) | V^1/2Ψ_T_c^0(λ) (y) | x yp̃ ≤λ|^1 |/(2π)^2 m_μ^d=2(T_c^0(λ))‖ V ‖_1 ‖Ψ_T_c^0(λ)‖_2‖Ψ_T_c^0(λ)-Ψ‖_2, where in the second step we carried out the p̃ integration and used the Schwarz inequality in x and y. By Lemma <ref>, λ m_μ^d=2(T_c^0(λ)) is bounded and by Lemma <ref>, ‖Ψ_T_c^0(λ) -Ψ‖_2 decays like λ^1/2. Hence, this vanishes for λ→0. Similarly, the fourth integral in (<ref>) is bounded by λ|^1 |/(2π)^2 m_μ^d=2(T_c^0(λ))‖ V ‖_1 ‖ V^1/2j_3 ‖_2‖Ψ_T_c^0(λ) -Ψ‖_2, which vanishes for λ→0. For the last line of (<ref>) we first carry out the integration over x,y and the radial part of p̃, and then use that Vj_3 is a radial function. This way we obtain ∓λ m_μ^d=2(T_c^0(λ)) 2π∫_^1|Vj_3(0,√(μ)w)|^2 w=∓λ m_μ^d=2(T_c^0(λ)) π∫_^2|Vj_3(√(μ)w)|^2 w The latter integral equals ⟨| V|^1/2j_3,O_μ V^1/2 j_3⟩ = e_μ∫_^3 V(x)j_3(x)^2 x. 
By Lemma <ref>, lim_λ→ 0λ m_μ^d=2(T_c^0(λ))e_μ =lim_λ→ 0λln(μ/T_c^0(λ))e_μ =1/μ^1/2. Therefore, the limit of the last line of (<ref>) for λ→ 0 equals ∓π/μ^1/2∫_^3 V(x)j_3(x)^2 x. First term: It remains to consider the first term in (<ref>). If V≥ 0, one could argue directly using the convergence of Φ_λ in Lemma <ref> for d=3. However, the analogue of Lemma <ref> does not hold for d=1. Instead, the strategy is to use the L^2-convergence of the ground state in the Birman-Schwinger picture, Lemma <ref>. This approach also allows us to treat V that take negative values. Switching to momentum space and using the eigenvalue equation (<ref>), we rewrite the first term in (<ref>) as (2π)^1-d/2∫_^2d-1Φ_λ (p)V(0,p̃-q̃)Φ_λ (p_1,q̃) p q̃ =(2π)^1-d/2λ^2⟨Ψ_T_c^0(λ), D_T_c^0(λ)Ψ_T_c^0(λ)⟩ , where D_T is the operator given by ⟨ψ, D_T ψ⟩=∫_^2d-1|V|^1/2ψ (p) B_T (p,0) V(0,p̃-q̃) B_T((p_1,q̃),0)|V|^1/2ψ(p_1,q̃) p q̃ for ψ∈ L^2(^d). We decompose (<ref>) as (2π)^1-d/2λ^2⟨Ψ_T_c^0(λ), D_T_c^0(λ)Ψ_T_c^0(λ)⟩ = (2π)^1-d/2λ^2 (⟨Ψ_T_c^0(λ)-Ψ, D_T_c^0(λ)Ψ_T_c^0(λ)⟩ +⟨Ψ, D_T_c^0(λ) ( Ψ_T_c^0(λ)- Ψ) ⟩+⟨Ψ, D_T_c^0(λ)Ψ⟩). Recall that by Lemma <ref>, ‖Ψ_T_c^0 - Ψ‖_2=O(λ^1/2). The strategy is to prove that ‖ D_T ‖ and ⟨Ψ, D_TΨ⟩ are of the same order for T→ 0. Then, the positive term ⟨Ψ, D_T_c^0(λ)Ψ⟩ will be the leading order term in (<ref>) as λ→0. The asymptotic behavior of ‖ D_T ‖ and ⟨Ψ, D_TΨ⟩ is the content of the following two Lemmas. These asymptotics strongly depend on the dimension and this is where the different treatment of d=3 versus d∈{1,2} in Theorem <ref> originates. It will be convenient to introduce the operator D_T^< as ⟨ψ, D_T^<ψ⟩=∫_| p|^2<2μ, |(p_1,q̃)|<2μ, p_1^2<μ|V|^1/2ψ (p) B_T (p,0) V(0,p̃-q̃) B_T((p_1,q̃),0)|V|^1/2ψ(p_1,q̃) p q̃ for ψ∈ L^2(^d). Let μ>0 and let V satisfy <ref>. There are constants C,T_0>0 such that for all 0<T<T_0 for d=1 ‖ D_T ‖≤ C/T, for d=2 ‖ D_T ‖≤ C(lnμ/T)^3 and for d=3, ‖ D_T ‖≤ C(lnμ/T)^2. For d=3, furthermore ‖ D_T-D_T^< ‖≤ C lnμ/T. Let μ>0 and let V satisfy <ref>. Recall that Ψ=V^1/2j_d. There are constants C,T_0>0 such that for all 0<T<T_0, ⟨Ψ, D_T Ψ⟩≥ C/T for d=1 and ≥ C(lnμ/T)^3 for d=2. For d=3, lim_λ→ 0 (2π)^-1/2λ^2⟨Ψ, D_T_c^0(λ)Ψ⟩ = ∫_^4 V(r) j_3(z_1,r̃;μ)^2 r z_1. For λ→ 0, by Lemma <ref>, ln(μ/T_c^0(λ)) is of order 1/λ, hence the last term in (<ref>) diverges for d=1,2. For d=3 we get the desired constant by Lemma <ref>. Assume that T/μ<1/2. We treat the different dimensions d separately. Dimension one: Note that |⟨ψ, D_T ψ⟩ |= |V(0)| ∫_ B_T (p,0)^2 ||V|^1/2ψ(p) |^2 p ≤‖ V ‖_1^2 ∫_ B_T (p,0)^2 p ‖ψ‖_2^2, where we used Lemma <ref>. Recall from (<ref>) that B_T (p,0) ≤min{1/| p^2-μ|, 1/2T}. We estimate the integral ∫_ B_T (p,0)^2 p ≤∫_√(μ)-T/√(μ)< | p |<√(μ)+T/√(μ)1/4T^2 p+∫_χ_| p |< √(μ)-T/√(μ)+χ_√(μ)+T/√(μ)<p<2√(μ)/μ (| p |-√(μ) )^2 p + ∫_ p>2√(μ)1/( p^2-μ )^2 p The first term equals (√(μ)T)^-1. The last term is a finite constant independent of T. In the second term we substitute || p | -√(μ)| by x and get the bound 2 ∫_T/√(μ)^√(μ)1/μ x^2 x = 2/√(μ)(1/T-1/μ) Dimension two: Using the Schwarz inequality we have ⟨ψ, D_T ψ⟩≤ C ‖ V‖_1^2 ∫_^3 B_T,μ (p,0) B_T,μ((p_1,q̃),0) p q̃‖ψ‖_2^2 The integral can be rewritten as ∫_(∫_ B_T, μ-p_1^2(p_2, 0) p_2 )^2 p_1, where B_T,μ here is understood as the function on × instead of ^2×^2. We prove that this integral is of order O(ln(μ/T)^3) for T→0. To bound the integral we consider three regimes, p_1^2<μ-T, μ-T<p_1^2<μ+T, and μ+T<p_1^2. 
Corresponding to these regimes, we need to understand ∫_ B_T, μ(p, 0) p for T/μ<1, -1<μ/T <1, and μ/T<-1. In the first regime, there is a constant C_1, such that for all T/μ<1 | √(μ)∫_ B_T,μ(p,0) p - 2 lnμ/T |≤ C_1. This follows from rescaling √(μ)∫_ B_T,μ(p,0) p = ∫_ B_T/μ,1 (p,0) p and applying <cit.>. For the second regime, we rewrite ∫_ B_T,μ(p,0) p = 1/√(T)∫_tanh((p^2-μ/T)/2)/p^2-μ/T p Since tanh (x)/x ≤min{1,1/| x|} the latter integral is uniformly bounded for |μ/T |<1, ∫_ B_T,μ(p,0) p ≤C_2/√(T). For the third regime, it follows from (<ref>) that ∫_ B_T,μ(p,0) p≤1/√(T)∫_1/p^2-μ/T p= 1/√(-μ)∫_1/p^2+1 p=:C_3/√(-μ). Combining the bounds in the three regimes, we bound (<ref>) from above by ∫_| p_1 |<√(μ-T)(2 ln(μ-p_1^2/T)+C_1)^2/μ-p_1^2 p_1+∫_√(μ-T)<| p_1 |<√(μ+T)C_2^2/T p_1+∫_√(μ+T)<| p_1 |C_3^2/p_1^2-μ p_1 The first integral is bounded above by (2 ln(μ/T)+C_1)^2 ∫_| p_1 |<√(μ-T)1/μ-p_1^2 p_1. Since ∫_| p_1 |<√(μ-T)1/μ-p_1^2 p_1=1/√(μ)ln(2μ-T+√(μ(μ-T))/T)=O(ln (μ/T)), the first integral in (<ref>) is of order O(ln (μ/T)^3). In the second integral, the size of the integration domain is 2T/√(μ)+O(T^2), so the integral is bounded as T→ 0. The third integral equals C_3^2/√(μ)ln(2μ+T+√(μ(μ+T))/T) =O(lnμ/T). Dimension three: For d=3, we first prove that ‖ D_T^<‖=O(ln(μ/T)^2). We bound (<ref>) using the Schwarz inequality ⟨ψ, D_T^< ψ⟩≤‖ V ‖^2_1 ‖ψ‖_2^2 ∫_^5χ_|p|^2<2μ, |(p_1,q̃)|<2μ, p_1^2<μ B_T,μ (p,0) B_T,μ((p_1,q̃),0) p q̃. The integral can be rewritten as 4π^2 ∫_0^√(μ)(∫_0^√(2μ-p_1^2) B_T, μ-p_1^2(t, 0) t t )^2 p_1 Substituting s=(t^2+p_1^2-μ)/T gives π^2 ∫_0^√(μ)(∫_-(μ-p_1^2)/T^μ/Ttanh(s)/s s )^2 p_1 ≤√(μ)π^2 (∫_-μ/T^μ/Ttanh(s)/s s )^2 Since tanh (x)/x ≤min{1,1/| x|}, this is bounded by √(μ)4π^2 (1+ln(μ/T) )^2. To bound ‖ D_T-D_T^<‖, we distinguish the cases were p^2 and (p_1,q̃)^2 are larger or smaller than 2μ. Using (<ref>) we bound ⟨ψ, D_T-D_T^< ψ⟩≤‖ V ‖^2_1 ‖ψ‖_2^2 ∫_^5χ_|p|^2<2μ, |(p_1,q̃)|<2μ,p_1^2>μ B_T,μ (p,0) B_T,μ((p_1,q̃),0) p q̃ +2‖ V ‖_1 ‖ψ‖_2^2 ∫_^5C/p̃^2+1|V(0,p̃-q̃)| B_T,μ((p_1,q̃),0)χ_|(p_1,q̃)|^2<2μ p q̃ +∫_^5|V|^1/2ψ (p)C/p^2+1|V(0,p̃-q̃) |C /p_1^2+q̃^2+1|V|^1/2ψ(p_1,q̃) p q̃, where C is a constant independent of T. For the first term, proceeding similarly to (<ref>)–(<ref>), the integral equals π^2 ∫_√(μ)^√(2μ)(∫_(p_1^2-μ)/T^μ/Ttanh(s)/s s )^2 p_1 ≤π^2 ∫_√(μ)^√(2μ)ln(μ/p_1^2-μ)^2 p_1<∞ For the second term in (<ref>) we apply Young's inequality to bound the integral by C ‖1/1+|·|^2‖_L^3/2(^2)‖V(0,·) ‖_L^3(^2) |^2| m_μ(T) which is O(lnμ/T). The third term in (<ref>) is bounded by C ‖ψ‖_2^2 by Lemma <ref>. By assumption, 0<e_μ= 1/(2π)^d/2∫_^d-1V(p- √(μ)ω) Ω(ω)=V j_d(| p|=√(μ)). By continuity of Vj_d(p) in p, there is an >0 such that V j_d(p)>1/2V j_d(| p|=√(μ))>0 for all √(μ)- < | p|< √(μ)+. In the following we treat the different dimensions separately. Dimension one: Suppose T<. Since V(0) >0, ⟨ V^1/2 j_1, D_T V^1/2 j_1⟩ = V(0) ∫_ B_T (p,0)^2 |V j_1(p) |^2 p ≥1/4V(0) |V j_1(√(μ)) |^2 ∫_√(μ)+T^√(μ)+ B_T (p,0)^2 p For p∈[√(μ)+T,√(μ)+], B_T (p,0)≥tanh(√(μ))/p^2-μ≥tanh(√(μ))/(2√(μ)+)(p-√(μ)). Since ∫_√(μ)+T^√(μ)+1/(p-√(μ))^2 p = 1/T-1/, we obtain the lower bound ⟨ V^1/2 j_1, D_T V^1/2 j_1⟩≥1/4V(0) |V j_1(√(μ)) |^2 tanh(√(μ))^2/(2√(μ)+)^2(1/T-1/) and the claim follows. Dimension two: Since V(0)> 0, by continuity also V(p)>0 for small |p|. Therefore, there are constants 0<δ<μ and C>0 such that for all √(μ-δ)<p_1≤√(μ) and 0≤ p_2,q_2<δ^1/2 V j_2 (p_1,p_2)V(0,p_2-q_2) V j_2(p_1,q_2) >C. Let A:={(p_1,p_2,q_2)∈^3 | √(μ-δ)<p_1<√(μ), 0<p_2,q_2<δ^1/2, p_1^2+p_2^2>μ+T,p_1^2+q_2^2>μ+T}. 
We estimate ⟨ V^1/2 j_2, D_T V^1/2 j_2 ⟩≥ C ∫_A B_T (p,0) B_T((p_1, q_2),0) p q_2 For (p_1,p_2,q_2)∈ A we have p_1^2+p_2^2-μ >T and thus B_T (p,0)≥tanh(1/2)/ p_1^2+p_2^2-μ For p_1^2>μ+T-δ ∫_√(μ+T-p_1^2)^δ^1/21/p_1^2+p_2^2-μ p_2 =1/√(μ-p_1^2)[(√(1-T/μ+T-p_1^2))-(√(μ-p_1^2/δ))]. Hence, the integral in (<ref>) is bounded below by tanh(1/2)^2 ∫_√(μ+T-δ)^√(μ)1/μ-p_1^2[(√(1-T/μ+T-p_1^2))-(√(μ-p_1^2/δ))]^2 p_1 Assume that T<δ/2. For a lower bound, we further restrict the p_1-integration to the interval (√(μ-δ/2), √(μ-μ^1/2T^1/2)). For these values of p_1, we have (√(μ-p_1^2/δ)) ≤(1/√(2)) ≤(√(1-T^1/2/μ^1/2)) ≤(√(1-T/μ+T-p_1^2)). Furthermore, ∫_√(μ-δ/2)^√(μ-μ^1/2T^1/2)1/μ-p_1^2 p_1 = 1/√(μ)(1-(√(μ)/a+1)(1-b/√(μ))/√(μ)/a-b/√(μ)), where a=√(μ-δ/2) and b=√(μ-μ^1/2T^1/2)≤√(μ). This is bounded below by 1/√(μ)(1-(√(μ)/a+1)(1-b/√(μ))/√(μ)/a-1). In total, (<ref>) is bounded from below by 1/√(μ)tanh(1/2)^2 ( (√(1-T^1/2/μ^1/2)) -(1/√(2)) )^2× (1-(√(μ)/a+1)(1-√(1-(T/μ)^1/2))/√(μ)/a-1) With (1-x) = 1/2ln 2/x+o(1) as x→ 0, we obtain that for T→0 (√(1-T^1/2/μ^1/2)) =1/4ln(16μ/T) +o(1) and (1-(√(μ)/a+1)(1-√(1-(T/μ)^1/2))/√(μ)/a-1)=1/4ln(16(√(μ)/a-1/√(μ)/a+1)^2μ/T) +o(1) In particular, we obtain ⟨ V^1/2 j_2, D_T V^1/2 j_2 ⟩≥C /√(μ)ln(μ/T)^3+O(ln(μ/T)^2) for some C>0 which implies the claim. Dimension three: Using Lemma <ref> and that lnμ/T_c^0(λ)∼ 1/λ by Lemma <ref>, lim_λ→ 0λ^2⟨ V^1/2j_3, D_T_c^0(λ) V^1/2j_3 ⟩ = lim_λ→ 0λ^2⟨ V^1/2j_3, D_T_c^0(λ)^< V^1/2j_3 ⟩ . By integrating out the angular variables ∫_^3 V(r) j_3(r;μ) e^i √(μ)r · p/| p |/(2π)^3/2 r = 1/|^2|∫_^3 V(r) j_3(r;μ)^2=e_μ. Therefore, we can write ⟨ V^1/2j_3, D_T_c^0(λ)^< V^1/2j_3 ⟩ =1/(2π)^3∫_^11; p̃^2, q̃^2<2μ-p_1^2, p_1^2<μ( V j_3(r;μ)(e^i r · p-e^i √(μ)r · p/| p |)× B_T_c^0(λ) (p,0) V(0,p̃-q̃) B_T_c^0(λ)((p_1,q̃),0)e^-i p · r' V j_3(r';μ) + V j_3(r;μ)e^i √(μ)r · p/| p | B_T_c^0(λ) (p,0) V(0,p̃-q̃) B_T_c^0(λ)((p_1,q̃),0)(e^-i p · r'-e^-i √(μ)r' · p/| p |) V j_3(r';μ) ) p q̃ r r' +e_μ^2 ∫_^8; p̃^2, q̃^2<2μ-p_1^2, p_1^2<μ B_T_c^0(λ) (p,0) e^i (p̃-q̃)r̃/(2π)^3/2V(r) B_T_c^0(λ)((p_1,q̃),0) p q̃ r By <cit.> |∫_^2 e^i | r | w · p-e^i √(μ) |r| w · p/| p | w |≤ C |p|-√(μ)/|p|+√(μ). Furthermore, note that B_T (p,0) |p|-√(μ)/|p|+√(μ)≤1/μ. Hence, the first integral in (<ref>) is bounded by C/μ‖ V j_3 ‖_1^2 ‖V‖_∞∫_p_1^2+q̃^2<2μ, p̃^2<2μ B_T_c^0(λ)((p_1,q̃),0) p_1 p̃q̃≤ C‖ V j_3 ‖_1^2 ‖V‖_∞ m_μ(T_c^0(λ)), which is of order 1/λ by Lemma <ref>. Changing to angular coordinates for the p̃ and q̃ integration, the integral on the last line of (<ref>) can be rewritten as 2 ∫_^3 r∫_0^√(μ) p_1 ∫_0^√(2μ-p_1^2) t ∫_0^√(2μ-p_1^2) s ∫_^1 w ∫_^1 w' B_T_c^0(λ) (√(p_1^2+t^2),0) t e^i (t w-s w')·r̃/(2π)^3/2× V(r)B_T_c^0(λ)(√(p_1^2+s^2),0) s =2 ∫_^3 r ∫_0^√(μ) p_1 ∫_p_1^√(2μ) x ∫_p_1^√(2μ) y ∫_^1 w ∫_^1 w' B_T_c^0(λ) (x,0) x e^i (√(x^2-p_1^2) w-√(y^2-p_1^2) w')·r̃/(2π)^3/2× V(r) B_T_c^0(λ)(y,0) y where we substituted x=√(p_1^2+t^2), y=√(p_1^2+s^2). Next, we want to replace the x^2 and y^2 in the exponent by μ. 
We rewrite (<ref>) as 2∫ B_T_c^0(λ) (x,0) x (e^i √(x^2-p_1^2) w·r̃-e^i √(μ-p_1^2) w·r̃)/(2π)^3/2V(r)e^-i √(y^2-p_1^2) w'·r̃ B_T_c^0(λ)(y,0) y p_1 r x y w w' +2 ∫ B_T_c^0(λ) (x,0) x e^i √(μ-p_1^2) w·r̃V(r)(e^-i √(y^2-p_1^2) w'·r̃-e^i √(μ-p_1^2) w'·r̃)/(2π)^3/2 B_T_c^0(λ)(y,0) y p_1 r x y w w' +2 ∫ B_T_c^0(λ) (x,0) x e^i √(μ-p_1^2) (w-w')·r̃/(2π)^3/2V(r) B_T_c^0(λ)(y,0) y p_1 r x y w w' By <cit.> |∫_^1e^i √(x-p_1^2) w ·r̃-e^i √(μ-p_1^2) w ·r̃/(2π)^2 w |≤ C |√(x^2-p_1^2)-√(μ-p_1^2)|^1/3 |(x^2-p_1^2)^-1/6+(μ-p_1^2)^-1/6| We bound this further by C |x^2-μ|^1/3 ((x^2-p_1^2)^-1/3+(μ-p_1^2)^-1/3). Using that B_T_c^0(λ)(x,0)≤ 1/|x^2-μ| by (<ref>) and recalling the definition of m_μ in (<ref>) we bound the first two lines in (<ref>) by C ‖ V ‖_1 m_μ^d=2(T_c^0(λ)) ∫_0^√(μ) p_1 ∫_p_1^√(2μ) x 1/| x-√(μ)|^2/3 (x+√(μ))^2/3 (1/(x^2-p_1^2)^1/3+1/(μ-p_1^2)^1/3) The integral is bounded by √(μ)∫_0^√(2) x ∫_0^x p_1 1/| x-1 |^2/3 (1/x^1/3(x-p_1)^1/3+1/(1-p_1)^1/3)<∞ Hence, the first two lines in (<ref>) are of order O(1/λ) by Lemma <ref>. For the third line we carry out the r-integration and obtain 2 ∫_0^√(μ)( ∫_p_1^√(2μ) B_T_c^0(λ) (x,0) x x )^2 (∫_^1∫_^1V(0,√(μ-p_1^2) (w-w')) w w') p_1. Note that ∫_p_1^√(2μ) B_T_c^0(λ) (x,0) x x =m_μ^d=2(T_c^0(λ)) -∫_0^p_1 B_T_c^0(λ) (x,0) x x and ∫_0^p_1 B_T_c^0(λ) (x,0) x x= 1/2∫_(μ-p_1^2)/T_c^0(λ)^μ/T_c^0(λ)tanh s/s s ≤1/2lnμ/μ-p_1^2 where we substituted s=(μ-x^2)/T_c^0(λ). In particular, | 2 ∫_0^√(μ)[( ∫_p_1^√(2μ) B_T_c^0(λ) (x,0) x x )^2 -m_μ^d=2(T_c^0(λ)) ^2]× (∫_^1∫_^1V(0,√(μ-p_1^2) (w-w')) w w') p_1 | ≤ 2|^1|^2 ‖V‖_∞∫_0^√(μ)( 1/4(lnμ/μ-p_1^2)^2+lnμ/μ-p_1^2 m_μ^d=2(T_c^0(λ))) p_1 ≤ C(1+m_μ^d=2(T_c^0(λ))) which is of order O(1/λ) by Lemma <ref>. In total, we thus obtain lim_λ→ 0λ^2⟨ V^1/2j_3, D_T_c^0(λ) V^1/2j_3 ⟩ =lim_λ→ 02λ^2 m_μ^d=2(T_c^0) ^2 √(μ)e_μ^2 ∫_0^1(∫_^1∫_^1V(0,√(μ)√(1-p_1^2) (w-w')) w w') p_1 By writing out the definition of j_3 and then switching to spherical coordinates and carrying out the r integration, we have ∫_^4 V(r) j_3(z_1,r̃;μ)^2 r z_1=∫_^2 u ∫_^2 v ∫_^7 p r z_1e^i p · rV(p)/(2π)^3/2e^i√(μ) (z_1,r̃)·(u-v)/(2π)^3 = 1/(2π)^3/2× ∫_( ∫_0^πsinθθ∫_0^πsinθ' θ' ∫_^1 w ∫_^1 w' V(0,√(μ) (sinθ w-sinθ' w') e^i √(μ) z_1(cosθ-cosθ')) z_1 =1/√(μ)(2π)^1/2∫_-1^1 t ∫_-1^1 s ∫_^1 w ∫_^1 w' V(0,√(μ) (√(1-t^2) w-√(1-s^2) w')δ(s-t), where in the last step we substituted t=cosθ, s=cosθ' and carried out the z_1 integration. Furthermore, according to Lemma <ref>, lim_λ→ 0λ m_μ^d=2(T_c^0) e_μ = 1/√(μ). This gives the desired lim_λ→ 0λ^2⟨ V^1/2j_3, D_T_c^0(λ) V^1/2j_3 ⟩ = (2π)^1/2∫_^4 V(r) j_3(z_1,r̃;μ)^2 r z_1 § BOUNDARY SUPERCONDUCTIVITY IN 3D In this section we shall prove Theorem <ref>, which provides sufficient conditions for (<ref>) to hold. Due to rotation invariance, we consider the spherical average of m̃^D/N_3 (defined in (<ref>)). With m^D/N_3(| r|;μ):=1/4π∫_^2m̃^D/N_3(| r|ω;μ) ω we have ∫_^3 V(r) m̃^D/N_3(r;μ) r =∫_^3 V(r) m^D/N_3(| r|;μ) r. Furthermore, we have the scaling property m_3^D/N(| r| ;μ)=1/√(μ) m_3^D/N(√(μ)| r| ;1). We shall derive the following, more explicit, expression for m^D/N_3 in Section <ref>. For x≥0 we can write m^D_3(x;1)=∑_j=1^4 t_j(x) and m^N_3(x;1)=∑_j=1^2 t_j(x)-∑_j=3^4t_j(x), where t_1(x) =4/π x∫_1^∞sin^2(x k)/k(k) k t_2(x) =- 2/πsin^2(x)/x t_3(x) =- 2sin^2( x)/x^2 t_4(x) = 4sin x/π x^2(sin x 2 x-cos x 2 x) = sin x/2π^3 x∫_^2∫_^2sin(x ω_1 |ω'_1 |) e^-i x ω̃·ω̃' /ω_1ωω' where (x)=∫_0^x 1-cos t/t t and (x)=∫_0^x sin t/t t. To determine for which interactions ∫_^3 V(r) m^D/N_3(| r|;μ) r >0 holds, we need to understand m^D/N_3(| r|;μ). 
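For later reference we spell out the special functions entering these closed forms (cf. their derivation in Section <ref>): writing Si(x)=∫_0^x (sin t)/t dt and Cin(x)=∫_0^x (1-cos t)/t dt for the sine integral and the entire cosine integral, and artanh for the inverse hyperbolic tangent, one has
t_1(x) = (4/(π x)) ∫_1^∞ sin^2(x k) artanh(1/k) dk/k ,   t_4(x) = (4 sin x/(π x^2)) ( sin x Si(2x) - cos x Cin(2x) ),
the latter up to the overall ± sign appearing in its derivation. As a quick consistency check: for small x, sin x Si(2x) - cos x Cin(2x) = x^2 + O(x^4), so t_4(x) = (4/π)x + O(x^3), in agreement with t_4(0)=0 and t_4'(0)=4/π obtained in the proof of Lemma <ref> below.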
In Figures <ref> and <ref> we plot m_3^D and m_3^N for μ=1, respectively. The function m_3^D seems to be nonnegative. If one could prove that m_3^D≥0, then Theorem <ref> would apply to all V≥ 0 satisfying <ref>. Unfortunately, this is beyond our reach. On the other hand, the function m_3^N changes sign, but is positive in a neighborhood of zero. To create the plots, it is computationally more efficient to use the first expression for t_4, whereas for the following analytic computations the second expression is more convenient. Intuitively, if we let μ→ 0, due to the scaling (<ref>) the sign of ∫_^3 V(r) m^D/N_3(| r|;μ) r is determined by the values of m_3^D/N(| r|;1) for r in the vicinity of zero. To obtain Theorem <ref>, we prove that both functions m_3^D/N(| r|;1) are non-negative in a neighborhood of zero. The following is proved in Section <ref>. The functions t_j for j=1,2,3,4 are bounded and twice continuously differentiable. The values of the functions and their derivatives at zero are listed in Table <ref>. We start with the case of Neumann boundary condition. By (<ref>), it suffices to prove that lim_μ→ 0∫_^3 V(r) m_3^N(√(μ)| r| ;1) r>0. With V∈ L^1 and Lemma <ref> it follows by dominated convergence that lim_μ→ 0∫_^3 V(r) m_3^N(√(μ)| r| ;1) r=m_3^N(0 ;1) ∫_^3 V(r) r=4 ∫_^3 V(r) r. Since V(0)>0 by assumption, this is positive. For Dirichlet boundary conditions, according to Lemma <ref>, m_3^D(0 ;1) and its first derivative vanish. Thus, we consider I(√(μ)):=1/μ∫_^3 m_3^D(√(μ)| r|;1) V(r) r. Since m_3^D(·;1) is bounded, I is continuous away from 0. It suffices to prove that lim_μ→0 I(√(μ))>0. According to Lemma <ref> and Taylor's theorem, we have m_3^D(x;1) =1/2 (m_3^D)”(0;1) x^2+R(x), where R is continuous with lim_x→ 0| R(x)|/x^2 =0. Let >0 and c:=sup_0≤ x< | R(x)|/x^2<∞. One can bound |1/μm_3^D(√(μ)| r|;1) V(r) |≤χ_√(μ)| r |<(1/2 (m_3^D)”(0;1)+c) | r^2 V(r) | + χ_√(μ)| r |>‖ m_3^D ‖_∞/^2| r^2 V(r) | ≤(1/2 (m_3^D) ”(0;1)+c+‖ m_3^D ‖_∞/^2) | r^2 V(r) |, which is integrable by the assumptions on V. By dominated convergence lim_μ→0 I(√(μ))=∫_^3lim_μ→0m_3^D(√(μ)| r|;1) /μ| r|^2 V(r) | r |^2 r= 1/2∫_^3 (m_3^D)”(0;1) V(r) | r |^2 r=2/9∫_^3 V(r) | r |^2 r, which is positive by assumption. §.§ Proof of Lemma <ref> With t̃_1(r) =∫_ j_3(z_1,r_2,r_3;1)^2 χ_|z_1|>|r_1| z_1 t̃_2(r) =- j_3(r;1)^2 ∫_χ_|z_1|<|r_1| z_1 t̃_3(r) = ∓π j_3(r;1)^2 t̃_4(r) =± 2 j_3(r;1) ∫_ j_3(z_1,r_2,r_3;1)χ_|z_1|<|r_1| z_1 one can write m^D_3(r;1)=∑_j=1^4 t̃_j(r) and m^N_3(r;1)=∑_j=1^2 t̃_j(r)-∑_j=3^4 t̃_j(r). Let t_j(| r|)=1/4π∫_^2t̃_j^D/N(| r|ω;μ) ω. The following explicit computations show that the t_j agree with the claimed expressions. Recall that j_3(r;1)=√(2/π)sin| r |/| r |. For t_1 we write out the integral in spherical coordinates and substitute z_1=x y and s=cosθ t_1(x) = 1/π2π/4π∫_0^π∫_sin^2 √(z_1^2+(x sinθ)^2)/ z_1^2+(x sinθ)^2 χ_|z_1|>x| cosθ|sinθ z_1 θ = 1/π x∫_-1^1 ∫_sin^2 x √(y^2+1-s^2)/y^2+1-s^2χ_|y|>| s| y s Next, we use the reflection symmetry of the integrand in s and y, substitute y by k=√(y^2+1-s^2) and then carry out the s integration to obtain t_1(x)= 4/π x∫_0^1 ∫_1^∞sin^2 x k/k √(k^2+s^2-1) k s = 4/π x∫_1^∞sin^2 x k/k (k) k. For t_2, we have t_2(x)=- 2/πsin^2 x / x^21/4π∫_^2 2 x |ω_1 |ω=- 2/πsin^2 x/ x. Since t̃_3 is radial, we have t_3=t̃_3. For t_4 we want to derive two expressions. 
For the first, we perform the same substitutions as for t_1 t_4(x)= 4/πsin x/ x2π/4π∫_0^π∫_sin√(z_1^2+(x sinθ)^2)/√( z_1^2+(x sinθ)^2 )χ_|z_1|<x| cosθ|sinθ z_1 θ = 2/πsin x/ x∫_-1^1 ∫_sin x √(y^2+1-s^2)/√(y^2+1-s^2)χ_|y|<| s| y s = 8/πsin x/ x∫_0^1 ∫_0^1 sin x k/√(k^2+s^2-1)χ_k^2+s^2>1 k s = 8/πsin x/ x∫_0^1sin x k k k = ±4sin x/π x^2(sin x 2 x-cos x 2 x ) To obtain the second expression for t_4, note that ∫_ e^-i ω_1 z_1χ_| z_1 |<| r_1 | z_1=2 sinω_1 |r_1|/ω_1. Therefore, t_4(x)=2 √(2/π)sin x/ x1/4π∫_^2∫_∫_^2e^-i ω· (z_1,x ω̃' )/(2π)^3/2χ_| z_1 |<x |ω_1' |ω z_1 ω' = 1/2π^3sin x/ x∫_^2∫_^2sin x ω_1 |ω_1'|/ω_1 e^-i x ω̃·ω̃' ωω' §.§ Proof of Lemma <ref> Since sin(x)/x is a bounded and smooth function, also t_2 and t_3 are bounded and smooth. Elementary computations give the entries in Table <ref>. For t_4 use the second expression in Lemma <ref>. Since the integrand is bounded and smooth and the domain of integration is compact, the integral is bounded and we can exchange integration and taking limits and derivatives. In particular, t_4 is bounded and smooth and it is then an elementary computation to verify the entries in Table <ref>. For instance, t_4'(0)=1/2 π^3∫_^2∫_^2|ω_1' |ωω'= 4/π. To study t_1 we define auxiliary functions f(x)=4/π x(x) and g(x)=sin(x)^2/x^2. Note that f(x) diverges logarithmically for x→1 and is continuous otherwise with f(0)=4/π. Furthermore, f(x) is increasing on [0,1) and for every 0<<1, sup_0≤ x<f'(x)/x=f'()/<∞ since all coefficients in the Taylor series of (x) are positive. We can write t_1(x)=∫_1^∞ x g(x k) f(1/k) k =∫_1^c x g(x k) f(1/k) k +∫_c x^∞ g(k) f(x/k) k for any constant c>1. The first integrand is bounded by Cx (k), the second one by C1/k^2 (since f is bounded on the integration domain). By dominated convergence we obtain that t_1 is continuous and t_1(0)=4/π∫_0^∞ g(k) k =2. For x>0 we compute the derivative t_1'(x)=∫_1^c (g(x k)+x k g'(x k)) f(1/k) k -cg(cx)f(1/c)+∫_c x^∞ g(k) f'(x/k) 1/k k =∫_1^c (g(x k) +x k g'(xk))f(1/k) k -cg(cx)f(1/c)+∫_c^∞ g(k x) f'(1/k) 1/k k, where we could apply the Leibnitz integral rule since f'(1/k) decays like 1/k for k→∞. By dominated convergence, t_1' is continuous for x>0. By continuity of t_1 and the mean value theorem, t_1'(0)=lim_x→0t_1(x)-t_1(0)/x=lim_x→0lim_y→ 0t_1(x)-t_1(y)/x-y=lim_x→0t_1'(x). We evaluate t_1'(0)=∫_1^c f(1/k) k -c f(1/c)+∫_c^∞ f'(1/k) 1/k k =∫_1^c (f(1/k) -f(1/c)) k-f(1/c)+∫_c^∞ f'(1/k) 1/k k This is a number independent of c. To compute the number, we let c→∞, and by monotone convergence t_1'(0)=∫_1^∞(f(1/k) -f(0)) k-f(0)=2/π-4/π=-2/π. Note that g'(k)=2(cos k -sin k/k)sin k/k^2 has a zero of order one at k=0. Therefore, | g'(k x) f'(1/k)|<C/x^2 k^3 and for x>0 the second derivative is t_1”(x)=∫_1^c (2x g'(x k)+x k^2 g”(x k)) f(1/k) k -c^2 g'(cx)f(1/c)+∫_c^∞ g'(k x) f'(1/k) k = ∫_1^c (2 x g'(x k)+x k^2 g”(x k)) f(1/k) k -c^2 g'(cx)f(1/c)+∫_cx^∞g'(y)/yf'(x/y)/x/y y We can bound g'(y)/y≤C/1+y^3 and sup_y |f'(x/y)/x/yχ_y>cx|=c f'(1/c)<∞. By dominated convergence, the function above is continuous (also at zero). We have t_1”(0)= ∫_0^∞g'(y)/y y lim_x→0f'(x)/x Since ∫_0^∞g'(y)/y y =-π/3 and lim_x→0f'(x)/x=8/3π we obtain t_1”(0)=-8/9. § RELATIVE TEMPERATURE SHIFT In this section we shall prove Theorem <ref>, which states that the relative temperature shift vanishes in the weak coupling limit. We proceed similarly to the δ-interaction case in one dimension analyzed in <cit.>. For this, we switch to the Birman-Schwinger formulation. 
Recall the Birman-Schwinger operator A_T^0 corresponding to H_T^0 from (<ref>). Let Ω̃_1={(r,z)∈^2d|| r_1 | < z_1 }. Define the operator A_T^1 on ψ∈ L_ s^2(Ω̃_1)={ψ∈ L^2(Ω̃_1) |ψ(r,z)=ψ(-r,z)} via ⟨ψ, A_T^1 ψ⟩ =∫_^4d+2(d-1) r r' p q z̃z̃' ∫_| r_1 |<z_1 z_1 ∫_| r_1 '|<z_1' z'_1 1/(2π)^2dψ(r,z) V(r)^1/2 e^i(p · z+q · r)× B_T(p,q) (e^-i(p_1 z'_1+q_1 r'_1)+e^i(p_1 z'_1+q_1 r'_1)∓ e^-i(q_1 z'_1+p_1 r'_1)∓ e^i(q_1 z'_1+p_1 r'_1)) e^-i(p̃·z̃'+q̃·r̃')| V(r')|^1/2ψ(r',z'), where the upper signs correspond to Dirichlet and the lower signs to Neumann boundary conditions, respectively. It follows from a computation analogous to <cit.> that the operator A_T^1 is the Birman-Schwinger operator corresponding to H_T^Ω_1 in relative and center of mass variables. The Birman-Schwinger principle implies that infσ(H_T^Ω_1) = (1/λ-infσ(A_T^1)), where we use the convention that 0 =0. One can reformulate the claim of Theorem <ref> in terms of the Birman-Schwinger operators. For j=1,2 let a_T^j=supσ (A_T^j). Then lim_λ→ 0T_c^1(λ)-T_c^0(λ)/T_c^0(λ) =0 ⇔lim_T→ 0(a_T^0-a_T^1)=0. This is a straightforward generalization of <cit.> and we refer to <cit.> for its proof. First we will argue that a_T^0≤ a_T^1 for all T>0. If infσ(K_T^0-λ V)<2T, then infσ(K_T^0-λ' V)<infσ(K_T^0-λ V) for all λ'>λ. Furthermore, infσ(K_T^0-(a_T^0)^-1V)=0=infσ(K_T^Ω_1-(a_T^1)^-1V)≤infσ(K_T^Ω_0-(a_T^1)^-1V), where we used Lemma <ref> in the last step. In particular, a_T^0≤ a_T^1. It remains to show that lim_T→ 0(a_T^0-a_T^1)≥ 0. Let ι: L^2(Ω̃_1)→ L^2(^2d) be the isometry ιψ(r_1,r̃,z_1,z̃) = 1/√(2) (ψ(r_1,r̃,z_1,z̃) χ_Ω̃_1(r,z) +ψ(-r_1,r̃,-z_1,z̃) χ_Ω̃_1(-r_1,r̃,-z_1,z̃)). Let F_2 denote the Fourier transform in the second variable F_2 ψ (r, q)=1/(2π)^d/2∫_^d e^-iq· zψ(r,z) z and F_1 the Fourier transform in the first variable F_1 ψ (p,q)=1/(2π)^d/2∫_^d e^-ip· rψ(r,q) r. Recall that by assumption V≥ 0 and for functions ψ∈ L^2(^d×^d) we have V^1/2ψ(r,q)=V^1/2(r)ψ(r,q). We define self-adjoint operators Ẽ_T and G_T on L^2(^2d) through ⟨ψ, Ẽ_Tψ⟩= a_T^0 ‖ψ‖_2^2 -∫_^2d B_T(p,q) |F_1 V^1/2ψ(p,q)|^2 p q and ⟨ψ, G_Tψ⟩=∫_^2dF_1 V^1/2ψ((q_1,p̃),(p_1,q̃)) B_T(p,q) F_1 V^1/2ψ(p,q) p q. With this notation, we have a_T^0 -A_T^1=ι^† F_2^† (Ẽ_T± G_T) F_2 ι, where denotes the identity operator on L_ s^2(Ω̃_1). In particular, a_T^0-a_T^1=inf_ψ∈ L^2_ s(Ω̃_1), ‖ψ‖_2=1⟨ F_2ιψ,(Ẽ_T± G_T )F_2ιψ⟩≥inf_ψ∈ L^2_ s(^2d), ‖ψ‖_2=1⟨ψ,(Ẽ_T± G_T) ψ⟩, where we used that ‖ F_2 ιψ‖_2= ‖ψ‖_2. Define the function E_T(q)= a_T^0-‖ V^1/2B_T(· ,q) V^1/2‖. We claim that ‖ V^1/2B_T(· ,q) V^1/2‖≤ a_T^0. The Birman Schwinger operator Ã_T corresponding to H_T^^d satisfies supσ(Ã_T)=sup_q ‖ V^1/2B_T(· ,q) V^1/2‖. Pick λ=supσ(Ã_T)^-1. According to the Birman Schwinger principle and Lemma <ref>, 0=infσ(H_T^Ω_0)=infσ(H_T^0). Using the Birman Schwinger principle for H_T^0, we obtain a_T^0=supσ(Ã_T)≥‖ V^1/2B_T(· ,q) V^1/2‖. Hence, E_T(q)≥ 0. Let E_T act on L^2(^2d) as E_Tψ (r,q)= E_T(q)ψ (r,q). Then a_T,μ^0-a_T,μ^1≥inf_ψ∈ L^2_s(^2d), ‖ψ‖_2=1⟨ψ,( E_T± G_T ) ψ⟩. It thus suffices to prove that lim_T→ 0infσ ( E_T± G_T)≥ 0. With the following three Lemmas, which are proved in the next sections, the claim follows completely analogously to the proof of <cit.>. For completeness, we provide a sketch of the argument in <cit.> after the statement of the Lemmas. Let μ>0, d∈{1,2,3} and let V≥ 0 satisfy <ref>. Then sup_T>0‖ G_T‖ <∞. Let μ>0, d∈{1,2,3} and let V≥ 0 satisfy <ref>. Let _≤ϵ act on L^2(^2d) as _≤ϵψ(r,p)=ψ(r,p)χ_| p|≤ϵ. Then lim_ϵ→ 0sup_T>0‖_≤ϵ G_T_≤ϵ‖ =0. Let μ>0, d∈{1,2,3} and let V≥ 0 satisfy <ref>. 
Let 0<ϵ<√(μ). There are constants c_1,c_2,T_0>0 such that for 0<T<T_0 and |q|> we have E_T(q)>c_1 |ln(c_2/T)|. Since E_T(q)≥ 0, we can write E_T± G_T+δ=√(E_T+δ)(±1/√(E_T+δ)G_T1/√(E_T+δ))√(E_T+δ) for any δ>0. It suffices to prove that for all δ>0 lim_T→ 0‖1/√(E_T+δ)G_T1/√(E_T+δ)‖=0 . To prove (<ref>), with the notation introduced in Lemma <ref> we have for all 0<ϵ<√(μ) ‖1/√(E_T+δ)G_T1/√(E_T+δ)‖≤‖_≤ϵ1/√(E_T+δ)G_T1/√(E_T+δ)_≤ϵ‖ +‖_≤ϵ1/√(E_T+δ)G_T1/√(E_T+δ)_>ϵ‖ +‖_>ϵ1/√(E_T+δ)G_T1/√(E_T+δ)‖ . With E_T≥ 0 and Lemma <ref> we obtain lim_T→ 0‖1/√(E_T+δ)G_T1/√(E_T+δ)‖≤sup_T>01/δ‖_≤ϵ G_T_≤ϵ‖+lim_T→ 02/(δ c_1 |ln(c_2/T)|)^1/2‖ G_T‖. The second term vanishes by Lemma <ref> and the first term can be made arbitrarily small by Lemma <ref>. Hence, (<ref>) follows. The variational argument above relies on A_T^1 being self-adjoint. This is why we assume V≥ 0 in Theorem <ref>. §.§ Proof of Lemma <ref> We have ‖ G_T ‖≤‖ G_T^< ‖+‖ G_T^> ‖, where for d∈{2,3} ⟨ψ, G_T^< ψ⟩=∫_^2dF_1 V^1/2ψ((q_1,p̃),(p_1,q̃)) B_T(p,q)χ_|p̃|<2√(μ) F_1 V^1/2ψ(p,q) p q, and for G_T^> change χ_|p̃|<2√(μ) to χ_|p̃|>2√(μ). For d=1 set G_T^<=G_T and G_T^>=0. We will prove that G_T^< and G_T^> are bounded uniformly in T. To bound G_T^> in d=2,3 we use the Schwarz inequality in p_1,q_1 to obtain ‖ G_T^>‖≤sup_ψ∈ L^2(^2d), ‖ψ‖=1∫_^2d B_T(p,q)χ_|p̃| >2 √(μ)| F_1 V^1/2ψ(p,q) |^2 q p The right hand side defines a multiplication operator in q. By (<ref>) there is a constant C>0 independent of T such that ‖ G_T^>‖≤ C ‖ M‖, where M:= V^1/21/1-Δ V^1/2 on L^2(^d). It follows from the Hardy-Littlewood-Sobolev and the Hölder inequalities that M is a bounded operator <cit.>. To bound G_T^< note that for fixed q, ‖ F_1 V^1/2ψ (·, q )‖_∞≤ C ‖ V ‖_1^1/2‖ψ(·, q ) ‖_2 by Lemma <ref>(<ref>). Therefore, we estimate ‖ G_T^< ‖≤ C^2 ‖ V ‖_1 sup_ψ∈ L^2(^2d), ‖ψ‖=1∫_^2d‖ψ(·, (p_1,q̃) ) ‖_2 B_T(p,q)χ_p̃^2<2μ‖ψ(·, q ) ‖_2 p q Since the right hand side defines a multiplication operator in q̃, we obtain ‖ G_T^< ‖≤ C^2 ‖ V ‖_1 sup_q̃∈^d-1sup_ψ∈ L^2(), ‖ψ‖=1∫_^d+1ψ(p_1) B_T(p,q)χ_p̃^2<2μψ(q_1 ) p q_1, where for d=1 the supremum over q̃ is absent. For d=1, the operator with integral kernel B_T(p,q) is bounded uniformly in T according to <cit.>, and thus the claim follows. For d∈{2,3} we need to prove that the operators with integral kernel ∫_^d-1 B_T(p,q) χ_|p̃| <2 √(μ)p̃ are bounded uniformly in q̃ and T. We apply the bound <cit.> B_T(p,q)≤2/| (p+q)^2-μ|+ | (p-q)^2-μ| Then, we scale out μ and estimate the expression by pulling the supremum over ψ into the p̃-integral sup_q̃∈^d-1sup_ψ∈ L^2(), ‖ψ‖=1∫_^d+12χ_|p̃|<2 √(μ)ψ(p_1)ψ(q_1)/| (p+q)^2-μ|+ | (p-q)^2 -μ| p q_1 =μ^d/2-1sup_q̃∈^d-1sup_ψ∈ L^2(), ‖ψ‖=1∫_^d+12χ_|p̃|<2 ψ(p_1)ψ(q_1)/| (p+q)^2-1|+ | (p-q)^2 -1 | p q_1 ≤μ^d/2-1sup_q̃∈^d-1∫_^d-1χ_|p̃|<2 [ sup_ψ∈ L^2(), ‖ψ‖=1∫_^22ψ(p_1)ψ(q_1)/| (p+q)^2-1|+ | (p-q)^2 -1 | p_1 q_1] p̃ Let μ_1=1-(p̃+q̃)^2 and μ_2=1-(p̃-q̃)^2. For fixed μ_1,μ_2 we need to bound the operator with integral kernel D_μ_1,μ_2(p_1,q_1)=2/| (p_1+q_1)^2-μ_1|+ | (p_1-q_1)^2 -μ_2|. Let μ_1,μ_2≤ 1 with min{μ_1,μ_2}≠ 0. The operator D_μ_1,μ_2 on L^2() with integral kernel given by (<ref>) satisfies ‖ D_μ_1,μ_2‖≤ C(1+d(μ_1,μ_2)|min{μ_1,μ_2}|^-1/2) for some finite C independent of μ_1,μ_2, where d(μ_1,μ_2)={ 1+ln(1+max{μ_1,μ_2}/|min{μ_1,μ_2}|) if min{μ_1,μ_2}<0≤max{μ_1,μ_2}, 1 otherwise.. This is a generalization of <cit.>. The proof of Lemma <ref> is based on the Schur test and can be found in Section <ref>. 
Since max{μ_1,μ_2}≤ 1, it follows from Lemma <ref> that for any α>1/2 one has ‖ D_μ_1,μ_2‖≤ C( 1+ |min{μ_1,μ_2}|^-α) for a constant C independent of μ_1,μ_2. The following Lemma concludes the proof of sup_T>0‖ G_T^<‖<∞. Let d∈{2,3} and 0≤α<1. Let μ_1=1-(p̃+q̃)^2 and μ_2=1-(p̃-q̃)^2. Then sup_q̃∈^d-1∫_^d-1χ_|p̃|<2 /|min{μ_1,μ_2}|^αp̃<∞. Lemma <ref> follows from elementary computations which are carried out in Section <ref>. §.§ Proof of Lemma <ref> With the notation introduced in the proof of Lemma <ref> we have ‖_≤ G_T _≤‖≤‖_≤ G_T^< _≤‖ +‖_≤ G_T^> _≤‖. For d=2,3 we have analogously to (<ref>) ‖_≤ G_T^> _≤‖≤sup_ψ∈ L^2(^2d), ‖ψ‖=1∫_^2dχ_|q|<χ_|(p_1,q̃)|<B_T(p,q)χ_|p̃| >2 √(μ)| F_1 V^1/2ψ(p,q) |^2 q p . Let 1<t<∞ such that V∈ L^t(^d). According to Lemma <ref>(<ref>), for fixed q we have ‖ F_1 V^1/2ψ(·, q) ‖_L^s(^d)≤ C ‖ V‖_t^1/2‖ψ(·, q) ‖_L^2(^d), where 2≤ s=2t/(t-1)<∞. By (<ref>) and Hölder's inequality in p, there is a constant C independent of T such that ‖_≤ G_T^> _≤‖≤ Csup_ψ∈ L^2(^2d), ‖ψ‖=1∫_^2dχ_|p_1|</1+p̃^2| F_1 V^1/2ψ(p,q) |^2 p q ≤ C ‖ V‖_t (∫_^dχ_|p_1|</(1+p̃^2)^t p)^1/t. In particular, the remaining integral is of order O(^1/t) and vanishes as →0. To estimate ‖_≤ G_T^< _≤‖ we proceed as in the derivation of the bound on ‖ G_T^<‖ from (<ref>) until the first line of (<ref>) and obtain ‖_≤ G_T^< _≤‖≤ C ‖ V ‖_1 sup_|q̃|<sup_ψ∈ L^2(), ‖ψ‖=1∫_^d+12χ_|p_1|, |q_1|<χ_|p̃|<2 √(μ)ψ(p_1)ψ(q_1)/| (p+q)^2-μ|+ | (p-q)^2 -μ| p q_1 Hence, we need that the norm of the operator on L^2() with integral kernel ∫_^d-12χ_|p_1|, |q_1|<χ_|p̃|<2 √(μ)/| (p+q)^2-μ|+ | (p-q)^2 -μ|p̃ vanishes uniformly in q̃ as →0. In d=1, the Hilbert-Schmidt norm clearly vanishes as →0. Similarly for d=2,3 the following Lemma implies that the Hilbert-Schmidt norm vanishes uniformly in q̃ as →0. Let d∈{2,3}. Then lim_→0sup_|q̃|<∫_^2χ_|p_1|, |q_1|<[∫_^d-12χ_p̃^2<2/| (p+q)^2-1 |+ | (p-q)^2-1 |p̃]^2 p_1 q_1=0 The proof can be found in Section <ref>. We give the proof for d=2 only; the one for d=3 works analogously and is left to the reader. §.§ Proof of Lemma <ref> Since a_T^0 diverges like e_μμ^d/2-1ln(μ/T) as T→0, the claim follows if we prove that sup_T>0sup_|q|>‖ V^1/2 B_T(·, q) V^1/2‖<∞ for |q|>. For d=1 we have ‖ V^1/2 B_T(·, q) V^1/2‖^2 ≤‖ V^1/2 B_T(·, q) V^1/2‖_HS^2 = ∫_^2 V(r) V(r') (∫_ B_T(p,q) e^i p(r- r')/2π p)^2 r r' ≤1/(2π)^2‖ V‖_1^2 (∫_ B_T(p,q) p)^2 It was shown in the proof of <cit.> that sup_T>0,|q|>ϵ∫_ B_T(p,q) p<∞ . For d∈{2,3}, the claim follows from the following Lemma which is proved below. Let d∈{2,3} and μ>0. Let V satisfy Assumption <ref> and V≥ 0. Recall that O_μ=V^1/2^† V^1/2 (defined above (<ref>)). Let f(x)=χ_(0,1/2)(x)ln(1/x). There is a constant C(d,μ,V) such that for all T>0, q∈^d, and ψ∈ L^2(^d) with ‖ψ‖_2=1 ⟨ψ, V^1/2 B_T(·, q) V^1/2ψ⟩≤μ^d/2-1⟨ψ, O_μψ⟩ f(max{T/μ, | q |/√(μ)}) + C(d,μ,V). This concludes the proof. Note that if we set q=0, and optimize over ψ, the left hand side would have the asymptotics a_T,μ^0∼ e_μμ^d/2-1ln(1/T) as T → 0. Intuitively, keeping q away from 0 on a scale larger than T will slow down the divergence. In the case q=0, divergence comes from the singularity on the set | p | =√(μ). For | q| >0, there will be two relevant sets, (p+q)^2=μ and (p-q)^2=μ. These sets are circles or spheres in 2d and 3d, respectively. The function B_T is very small on the region which lies inside exactly one of the disks or balls (see the shaded area in Figure <ref>). The part lying inside or outside both disks (the white area in Figure <ref>) will be relevant for the asymptotics. 
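In formulas: for q=(| q|,0̃), the shaded region of Figure <ref> is the set of p with ((p+q)^2-μ)((p-q)^2-μ)<0, which is precisely (| p_1|-| q|)^2+p̃^2 < μ < (| p_1|+| q|)^2+p̃^2, while the white region is the set where this product is positive. This sign splitting is the one entering the characteristic function in the definition of Q_T below and the decomposition used in the proof of Lemma <ref>.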
Define the family of operators Q_T(q): L^1(^d)→ L^∞(^d) for q∈^d through ⟨ψ, Q_T(q) ψ⟩ =χ_max{T/μ,| q |/√(μ)} < 1/2∫_^d|ψ(√(μ)p/| p |) |^2 B_T(p,q)χ_((p+q)^2-μ)((p-q)^2-μ)>0χ_p^2 < 3μ p. We claim that Q_T captures the divergence of B_T. Let d∈{2,3} and μ>0. Let V satisfy Assumption <ref>. Then sup_T>0sup_q ∈^d‖ V^1/2 B_T(·, q) |V|^1/2- V^1/2 Q_T(q) |V|^1/2‖ <∞. The proof of Lemma <ref> can be found in Section <ref>. It now suffices to prove that there is a constant C such that for all T> 0 and q∈^d ⟨ψ, Q_T(q) ψ⟩≤⟨ψ, ℱ^†ℱψ⟩ f(max{T/μ, | q |/√(μ)}) + C ‖ψ‖_1^2. Then for all ψ∈ L^2(^d) with ‖ψ‖_2=1 ⟨ψ, V^1/2 Q_T(q) V^1/2ψ⟩≤⟨ψ, O_μψ⟩ f(max{T/μ, | q |/√(μ)}) + C ‖ V ‖_1 and the claim follows with Lemma <ref>. We are left with proving (<ref>). By the definition of Q_T, it suffices to restrict to | q | <√(μ)/2, T<μ/2. Let R be the rotation in ^d around the origin such that q=R(| q |, 0̃). For d=2 the condition ((p+(|q|,0))^2-μ)((p-(|q|,0))^2-μ)>0 holds exactly in the white region sketched in Figure <ref>. The inner white region is characterized by (| p_1| +| q|)^2+p̃^2<μ, and the outer region by (| p_1| -| q|)^2+p̃^2>μ. Thus, ⟨ψ, Q_T(q) ψ⟩ = ∫_^d|ψ(√(μ)R p/| p |) |^2 [ χ_(| p_1| +| q|)^2+p̃^2<μ+χ_(| p_1| -| q|)^2+p̃^2>μ] B_T(p,(| q|,0̃))χ_p^2 < 3μ p, where we substituted p by R p. Let us use the notation r_±(e)=±| e_1 || q | +√(μ-e_2^2 | q|^2) and e_φ=(cosφ,sinφ), where the choice of r_± is motivated in Figure <ref>. For d=2 rewriting the integral (<ref>) in angular coordinates gives ∫_0^2π|ψ(√(μ)R e_φ|) |^2 [ ∫_0^r_-(e_φ) B_T(r e_φ ,(| q|,0)) r r +∫_r_+(e_φ)^√(3μ) B_T,μ(r e_φ ,(| q|,0)) r r ] φ. For d=3 with the notation e_φ,θ =(cosφ, sinφcosθ, sinφsinθ) and using that B_T(r e_φ,θ ,(| q|,0,0))= B_T(r e_φ ,(| q|,0)), (<ref>) equals ∫_0^π(∫_0^2π|ψ(√(μ)r e_φ,θ|) |^2 θ) [ ∫_0^r_-(e_φ) B_T(r e_φ ,(| q|,0)) r^2 r +∫_r_+(e_φ)^√(3μ) B_T(r e_φ ,(| q|,0)) r^2 r ] sinφφ. We distinguish two cases depending on whether r is within distance T/√(μ) to r_± or not. Note that r_-(e) ≥ -| q|+√(μ)≥√(μ)/2≥T/√(μ) and r_+(e)+T/√(μ)≤| q|+√(μ)+T≤ 2√(μ). If r is close to r_± we use that B_T(p,q)≤ 1/2T. Otherwise we use (<ref>). The expressions in the square brackets in (<ref>) and (<ref>) are thus bounded by ∫_0^r_-(e_φ)-T/√(μ)r^d-1/μ-r^2-q^2 r + ∫_r_-(e_φ)-T/√(μ)^r_-(e_φ)r^d-1/2T r + ∫_r_+(e_φ)^r_+(e_φ)+T/√(μ)r^d-1/2T r+∫_r_+(e_φ)+T/√(μ)^√(3μ)r^d-1/r^2+q^2-μ r The second and third term are clearly bounded for T<μ/2. Since ‖ψ‖_∞≤ (2π)^-d/2‖ψ‖_1, they contribute C ‖ψ‖_1 to the upper bound on ⟨ψ, Q_T(q) ψ⟩. To bound the contributions of the first and the last term in (<ref>) we treat d=2 and d=3 separately. Case d=2: The sum of the two integrals equals ln(√((μ-q^2)(2μ +q^2)/(μ-q^2-(r_-(e_φ)-T/√(μ))^2)((r_+(e_φ)+T/√(μ))^2+q^2-μ))) To bound this expression, we first make a few observations. Note that μ-q^2-(r_-(e_φ)-T/√(μ))^2=2 | e_1 || q | (√(μ-e_2^2 | q |^2)-| e_1|| q|) + T/√(μ) (2r_-(e_φ) -T/√(μ)) ≥ (√(3)-1) √(μ)| e_1 || q | + T/2, where we used that r_-(e_φ)≥√(μ)-| q| and | q|, T/√(μ)≤√(μ)/2. Similarly, (r_+(e_φ)+T/√(μ))^2+q^2-μ=2 | e_1 || q | (√(μ-e_2^2 | q |^2)+| e_1|| q|) + T/√(μ) (2r_+(e_φ) +T/√(μ)) ≥√(3)√(μ)| e_1 || q | + √(3) T Furthermore, note that 2μ +q^2 ≤5μ/4. The expression under the square root in (<ref>) is therefore bounded above by 5μ^2/4( (√(3)-1) √(μ)| e_1 || q |+ T/2)(√(3)√(μ)| e_1 || q | + √(3) T) We now bound this from above in two ways. 
First we drop the T terms in the denominator, and second we drop the other terms in the denominator, which gives 5μ/4√(3) (√(3)-1) | e_1 |^2 | q |^2 and 5μ^2 /2√(3)T^2, respectively. Thus, (<ref>) is bounded above by f(max{T/μ, | q |/√(μ)}) + ln(1/| e_1 |)+C. The contribution to the upper bound on ⟨ψ, Q_T(q) ψ⟩ is ∫_0^2π|ψ(√(μ) e_φ|) |^2 f(max{T/μ, | q |/√(μ)}) φ +(2π)^-2‖ψ‖_1^2 ∫_0^2π( ln(1/|cosφ|)+C )φ, where for the second term we used that |ψ(√(μ) e_φ|) |^2 ≤ (2π)^-2‖ψ‖_1^2. Note that the first summand equals ⟨ψ, ℱ^†ℱψ⟩ f(max{T/μ, | q |/√(μ)}) and that the integral in the second summand is finite. In total, we have obtained (<ref>) for d=2. Case d=3: Note that / r (-r + a (r/a))= r^2/(a^2-r^2) and / r (r - a (r/a))= r^2/(r^2-a^2). The sum of the first and the last integral in (<ref>) hence equals √(3μ)-r_+(e_φ)-r_-(e_φ)-√(μ-q^2)/2ln((√(μ-q^2)+√(3μ))/(√(3μ)-√(μ-q^2))) +√(μ-q^2)/2ln((√(μ-q^2)+r_-(e_φ)-T/√(μ))/(√(μ-q^2)-r_-(e_φ)+T/√(μ))(√(μ-q^2)+r_+(e_φ)+T/√(μ))/(r_+(e_φ)+T/√(μ)-√(μ-q^2))) The terms in the first line are bounded. The argument of the logarithm in the second line equals (√(μ-q^2)+r_-(e_φ)-T/√(μ))^2/(μ-q^2-(r_-(e_φ)-T/√(μ))^2)(√(μ-q^2)+r_+(e_φ)+T/√(μ))^2/((r_+(e_φ)+T/√(μ))^2-μ+q^2) ≤Cμ^2/( (√(3)-1) √(μ)| e_1 || q |+ T/2)(√(3)√(μ)| e_1 || q | + √(3) T)) where we used (<ref>) and (<ref>). Analogously to the case d=2 the contribution to the upper bound on ⟨ψ, Q_T(q) ψ⟩ is ∫_0^π( ∫_0^2π|ψ(√(μ) e_φ,θ|) |^2θ) f(max{T/μ, | q |/√(μ)})sinφφ +(2π)^-2‖ψ‖_1^2 ∫_0^π( ln(1/|cosφ|)+C ) sinφφ. and (<ref>) follows. § PROOFS OF AUXILIARY LEMMAS §.§ Proof of Lemma <ref> If we write D_μ_1,μ_2 as a sum D_μ_1,μ_2=∑_j=1^n D_μ_1,μ_2^j a.e. for some integral kernels D_μ_1,μ_2^j, then ‖ D_μ_1,μ_2‖≤∑_j=1^n ‖ D_μ_1,μ_2^j ‖. We will choose the D_μ_1,μ_2^j as localized versions of D_μ_1,μ_2 in different regions (by multiplying D_μ_1,μ_2 by characteristic functions). Let D_μ_1,μ_2^1=D_μ_1,μ_2χ_max{| p_1 |, | q_1|}>2 and D_μ_1,μ_2^2=D_μ_1,μ_2χ_max{| p_1 |, | q_1|}<2. We first prove that the Hilbert-Schmidt norm of ‖ D_μ_1,μ_2^1 ‖ is bounded uniformly in μ_1, μ_2. Note that if max{| p_1 |, | q_1|}>2, we have max{(p_1± q_1)^2}=(| p_1 |+| q_1 |)^2>4 and μ_1,μ_2≤1. Hence, D_μ_1,μ_2^1(p_1,q_1)≤2χ_max{| p_1 |, | q_1|}>2/(| p_1 |+| q_1 |)^2-1≤2χ_max{| p_1 |, | q_1|}>2/p_1^2+q_1^2-1. For the Hilbert-Schmidt norm we obtain ‖ D_μ_1,μ_2^1 ‖_ HS^2≤ 4 ∫_^2χ_max{| p_1 |, | q_1|}>2/(p_1^2+q_1^2-1)^2 p_1 q_1≤ 8 π∫_2^∞r/(r^2-1)^2 r =4π/3, and therefore ‖ D_μ_1,μ_2^1 ‖ is indeed bounded uniformly in μ_1,μ_2. For D_μ_1,μ_2^2 we first observe that ‖ D_μ_2,μ_1^2 ‖ =‖ D_μ_1,μ_2^2 ‖ since D_μ_1,μ_2^2(p_1,q_1)=D_μ_2,μ_1^2(p_1,-q_1). Hence, without loss of generality we may assume μ_1≤μ_2 from now on. To bound the norm of D_μ_1,μ_2^2 we distinguish the cases μ_1<0 and μ_1>0 and continue localizing. Case μ_1<0: We localize in the regions | p_1-q_1 |^2<μ_2 and | p_1-q_1 |^2>μ_2, where the first one only occurs if μ_2>0. Let D_μ_1,μ_2^3=D_μ_1,μ_2^2 χ_| p_1-q_1 |^2<μ_2 and D_μ_1,μ_2^4=D_μ_1,μ_2^2 χ_| p_1-q_1 |^2>μ_2. For D_μ_1,μ_2^3 we do a Schur test with test function h(p_1)=| p_1|^1/2. Using the symmetry of the integrand under (p_1,q_1)→ -(p_1,q_1), we have ‖ D_μ_1,μ_2^3 ‖≤sup_-2<p_1<2| p_1|^1/2∫_-2^2 1/2χ_| p_1-q_1 |^2<μ_2/p_1q_1+(μ_2-μ_1)/41/| q_1|^1/2 q_1 = χ_0<μ_2sup_0≤ p_1<2| p_1|^1/2∫_p_1-√(μ_2)^p_1+√(μ_2)1/21/p_1q_1+(μ_2-μ_1)/41/| q_1|^1/2 q_1. For μ_2>0, carrying out the integration we obtain ‖ D_μ_1,μ_2^3 ‖≤sup_0≤ p_1<22/√(μ_2-μ_1)[arctan(√(4p_1 (p_1+√(μ_2))/μ_2-μ_1)). 
.-χ_p_1>√(μ_2)arctan(√(4p_1 (p_1-√(μ_2))/μ_2-μ_1))+χ_p_1<√(μ_2)(√(4p_1 (√(μ_2)-p_1)/μ_2-μ_1))] ≤2/√(μ_2-μ_1)[π/2+(√(μ_2/μ_2-μ_1))], where we used the monotonicity of . Note that for x≥ 0, (√(1/1+x))=ln(√(1/x+1)+√(1/x))≤ln(2 √(1/x+1))= ln(2)+1/2ln(1+1/x). In total, we obtain ‖ D_μ_1,μ_2^3‖≤C/√(-μ_1)(1+ln(1+μ_2/-μ_1)) for some constant C. We bound the Hilbert-Schmidt norm of D_μ_1,μ_2^4 as ‖ D_μ_1,μ_2^4 ‖_ HS= (∫_(-2,2)^2χ_| p_1-q_1 |^2>μ_2/(p_1^2+q_1^2-μ_1+μ_2/2)^2 p_1 q_1)^1/2 For μ_2<0, we clearly have ‖ D_μ_1,μ_2^4 ‖_ HS≤‖ D_μ_1,0^4 ‖_ HS. For μ_2≥ 0 observe that the constraint | p_1-q_1 |^2>μ_2 implies p_1^2+q_1^2 >μ_2/2. Hence, ‖ D_μ_1,μ_2^4 ‖_ HS≤(2π∫_√(μ_2/2)^∞r/(r^2-μ_1+μ_2/2)^2 r)^1/2= (2π/-μ_1)^1/2. Case μ_1>0: We are left with estimating D_μ_1,μ_2^2 in the case that μ_1>0. First we sketch the location of the singularities of D_μ_1,μ_2^2(p_1,q_1). On each of the diagonal lines in Figure <ref>, one of the two terms |(p_1+q_1)^2-μ_1|,|(p_1-q_1)^2-μ_2| in the denominator of D_μ_1,μ_2^2(p_1,q_1) vanishes. The function D_μ_1,μ_2^2(p_1,q_1) thus has four singularities located at the crossings of the diagonal lines Figure <ref>. The coordinates of the singularities are (p_1,q_1)∈{(s_1,-s_2),(s_2,-s_1),(-s_1,s_2),(-s_2,s_1)}, where s_1=√(μ_1)+√(μ_2)/2, s_2=√(μ_2)-√(μ_1)/2. Note that s_1^2+s_2^2=μ_1+μ_2/2 and s_1 s_2=μ_2-μ_1/4. To bound ‖ D_μ_1,μ_2^2‖, the idea is to perform a Schur test with test function h(p_1)=min{|| p_1 | -s_1 |^1/2, || p_1 | -s_2 |^1/2}. Since the behavior of D_μ_1,μ_2^2(p_1,q_1) strongly depends on whether |p_1+q_1|≷√(μ_1),|p_1-q_1|≷√(μ_2) and which singularity of D_μ_1,μ_2^2 is close to p_1,q_1, we distinguish the ten different regions sketched in Figure <ref>. For 5≤ j ≤ 14, we define the operator D_μ_1,μ_2^j to be localized in region j, D_μ_1,μ_2^j=D_μ_1,μ_2^2 χ_j. According to the Schur test, ‖ D_μ_1,μ_2^j‖≤sup_|p_1|<2h(p_1)^-1∫_-2^2 D^j_μ_1,μ_2(p_1,q_1) h(q_1) q_1. The bounds on ‖ D_μ_1,μ_2^j‖ we obtain from the Schur test are listed in Table <ref>. In the following we prove all the bounds. Region 5: By symmetry of the integrand under (p_1,q_1)→ -(p_1,q_1) we have ‖ D_μ_1,μ_2^5‖ ≤sup_-2<p_1<2 h(p_1)∫_-2^2 χ_5/p_1^2+q_1^2-s_1^2-s_2^21/h(q_1) q_1 = sup_√(μ_1)+√(μ_2)/2<p_1<2| p_1-s_1 |^1/2[∫_√(μ_1)-p_1^-√(μ_2)/21/p_1^2+q_1^2-s_1^2-s_2^21/| q_1+s_1 |^1/2 q_1 +∫_√(μ_2)/2^p_1-√(μ_2)1/p_1^2+q_1^2-s_1^2-s_2^21/| q_1-s_1 |^1/2 q_1 ] ≤ 2 sup_√(μ_1)+√(μ_2)/2<p_1<2| p_1-s_1 |^1/2∫_√(μ_2)/2^p_1-√(μ_1)1/p_1^2+q_1^2-s_1^2-s_2^21/| q_1-s_1 |^1/2 q_1 ≤ 2sup_√(μ_1)+√(μ_2)/2<p_1<2| p_1-s_1 |^1/2/p_1^2+μ_2/4-s_1^2-s_2^2∫_√(μ_2)/2^p_1-√(μ_1)1/| q_1-s_1 |^1/2 q_1 Note that p_1^2+μ_2/4-s_1^2-s_2^2=p_1^2-μ_1/2-μ_2/4≥√(μ_2)/2 (p_1-√(μ_1/2+μ_2/4)). Carrying out the integration, (<ref>) is bounded above by 8/√(μ_2)sup_√(μ_1)+√(μ_2)/2<p_1<2| p_1-s_1 |^1/2/p_1-√(μ_1/2+μ_2/4)(( √(μ_1)/2)^1/2+χ_p_1>s_1+√(μ_1)| p_1-s_1-√(μ_1)|^1/2) Note that s_1>√(μ_1/2+μ_2/4). Using that for x≥ a≥ b, (x-a)/(x-b)≤ 1 we bound (<ref>) above by 8/√(μ_2)(( √(μ_1)/2)^1/2/|√(μ_1)+√(μ_2)/2-√(μ_1/2+μ_2/4)|^1/2 +1) ≤8/√(μ_2)(√(μ_1)+√(μ_2)/2+√(μ_1/2+μ_2/4)/√(μ_1)+2√(μ_2))^1/2 +8/√(μ_1)≤16/√(μ_1). Region 6: By symmetry under (p_1,q_1)→ -(p_1,q_1), we obtain ‖ D_μ_1,μ_2^6‖ ≤sup_-2<p_1<2 h(p_1)∫_-2^2 1/2χ_6/-p_1q_1-s_1s_21/h(q_1) q_1 ≤sup_-2<p_1<-s_2 h(p_1) ∫_max{-√(μ_1)-p_1,√(μ_2)/2,√(μ_2)+p_1}^min{-p_1+√(μ_1),2}1/-p_1q_1-s_1s_21/| q_1-s_1 |^1/2 q_1 We split the integral into the sum of the integral over q_1>s_1 and q_1<s_1. For p_1<-s_2 and q_1>s_1 we have -p_1q_1-s_1s_2>-(p_1+s_2)s_1. 
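To verify the last inequality directly: since p_1<-s_2≤ 0 and q_1>s_1,
(-p_1q_1-s_1s_2)-(-(p_1+s_2)s_1) = -p_1q_1+p_1s_1 = | p_1| (q_1-s_1)>0.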
Hence, sup_-2<p_1<-s_2 h(p_1) ∫_s_1^min{-p_1+√(μ_1),2}1/-p_1q_1-s_1s_21/| q_1-s_1 |^1/2 q_1 ≤sup_-2<p_1<-s_21/| p_1+s_2 |^1/2s_1∫_s_1^-p_1+√(μ_1)1/| q_1-s_1 |^1/2 q_1= 2/s_1≤2/√(μ_1) The case q_1<s_1 only occurs for p_1>-s_1-√(μ_1). For -√(μ_2)/2<p_1<-s_2 and √(μ_2)+p_1<q_1<s_1 note that -p_1q_1-s_1s_2≥ -p_1 (√(μ_2)+p_1)-s_1s_2=| p_1+s_2|(p_1+s_1)≥| p_1+s_2|√(μ_1)/2. Hence, sup_-√(μ_2)/2<p_1<-s_2 h(p_1) ∫_√(μ_2)+p_1^s_11/-p_1q_1-s_1s_21/| q_1-s_1 |^1/2 q_1 ≤sup_-√(μ_2)/2<p_1<-s_22/√(μ_1)| p_1+s_2|^1/2∫_√(μ_2)+p_1^s_11/| q_1-s_1 |^1/2 q_1= 4/√(μ_1) For -s_1-√(μ_1)/2<p_1< -√(μ_2)/2 and √(μ_2)/2<q_1<s_1, we have -p_1q_1-s_1s_2≥μ_2/4-s_1 s_2 =μ_1/4. Therefore, sup_-s_1-√(μ_1)/2<p_1<-√(μ_2)/2 h(p_1) ∫_√(μ_2)/2^s_11/-p_1q_1-s_1s_21/| q_1-s_1 |^1/2 q_1 ≤sup_-s_1-√(μ_1)/2<p_1<-√(μ_2)/24| p_1+s_1 |^1/2/μ_1∫_√(μ_2)/2^s_11/| q_1-s_1 |^1/2 q_1 ≤8 (√(μ_1)/2)^1/2/μ_1(√(μ_1)/2)^1/2=4/μ_1^1/2 For -s_1-√(μ_1)<p_1<-s_1-√(μ_1)/2 and -p_1-√(μ_1)<q_1<s_1, we have -p_1q_1-s_1s_2≥ p_1(p_1+√(μ_1))-s_1 s_2 =-(p_1+s_1)(s_2-p_1). Hence, sup_-s_1-√(μ_1)<p_1<-s_1-√(μ_1)/2 h(p_1) ∫_-p_1-√(μ_1)^s_11/-p_1q_1-s_1s_21/| q_1-s_1 |^1/2 q_1 ≤sup_-s_1-√(μ_1)<p_1<-s_1-√(μ_1)/22| p_1+√(μ_1)+s_1 |^1/2/| p_1+s_1 |^1/2(s_2-p_1) = 2/s_2+s_1+√(μ_1)/2≤4/√(μ_1) In total, summing the contributions from q_1>s_1 and q_1<s_1 gives ‖ D_μ_1,μ_2^6‖≤6/√(μ_1) Region 7: By symmetry of the two components of region 7 we have ‖ D_μ_1,μ_2^7 ‖ ≤sup_-2<p_1<2 h(p_1)∫_-2^2 χ_7/p_1^2+q_1^2-s_1^2-s_2^21/h(q_1) q_1 ≤ 2 sup_-2<p_1<2|| p_1|-s_2 |^1/2∫_max{√(μ_1)-p_1, √(μ_2)+p_1}^2 1/p_1^2+q_1^2-s_1^2-s_2^21/| q_1-s_1 |^1/2 q_1 For | p_1 |>s_2, q_1>s_1 we observe p_1^2+q_1^2-s_1^2-s_2^2 ≥ (q_1+s_1)(q_1-s_1) ≥ 2s_1 (q_1-s_1). Therefore, sup_s_2<| p_1 |<2|| p_1|-s_2 |^1/2∫_max{√(μ_1)-p_1, √(μ_2)+p_1}^2 1/p_1^2+q_1^2-s_1^2-s_2^21/| q_1-s_1 |^1/2 q_1 ≤sup_s_2<| p_1 |<2|| p_1|-s_2 |^1/2/2s_1∫_max{√(μ_1)-p_1, √(μ_2)+p_1}^∞1/(q_1-s_1)^3/2 q_1 =sup_s_2<| p_1 |<2|| p_1|-s_2 |^1/2/s_1 (max{√(μ_1)-p_1, √(μ_2)+p_1}-s_1)^1/2=1/s_1≤1/√(μ_1). For | p_1 |<s_2, q_1>s_1 we have (p_1^2+q_1^2-s_1^2-s_2^2)(q_1-s_1)^1/2≥ (q_1+√(s_1^2+s_2^2-p_1^2))(q_1-√(s_1^2+s_2^2-p_1^2))^3/2≥ 2s_1 (q_1-√(s_1^2+s_2^2-p_1^2))^3/2. Hence, sup_| p_1 |<s_2 || p_1|-s_2 |^1/2∫_max{√(μ_1)-p_1, √(μ_2)+p_1}^2 1/p_1^2+q_1^2-s_1^2-s_2^21/| q_1-s_1 |^1/2 q_1 ≤sup_| p_1 |<s_2| p_1+s_2 |^1/2/2s_1∫_√(μ_2)+p_1^∞1/(q_1-√(s_1^2+s_2^2-p_1^2))^3/2 q_1 = sup_| p_1 |<s_2| p_1+s_2 |^1/2/s_11/(√(μ_2)+p_1-√(s_1^2+s_2^2-p_1^2))^1/2 =sup_| p_1 |<s_21/s_1| p_1+s_2 |^1/2(√(μ_2)+p_1+√(s_1^2+s_2^2-p_1^2))^1/2/(p_1+s_1)^1/2(p_1+s_2)^1/2 =sup_| p_1 |<s_21/s_1(√(μ_2)+p_1+√(s_1^2+s_2^2-p_1^2))^1/2/(p_1+s_1)^1/2≤(3/2+√(2))^1/2 s_1^1/2/s_1 μ_1^1/4≤(3/2+√(2))^1/2/μ_1^1/2 In total, we obtain ‖ D^7 ‖≤(6+2√(2))^1/2/√(μ_1). Region 8: Taking the supremum separately over the two symmetric components of region 8, we have ‖ D_μ_1,μ_2^8‖ ≤sup_-2<p_1<2 h(p_1)∫_-2^2 χ_8/s_1^2+s_2^2-p_1^2-q_1^21/h(q_1) q_1 ≤ 2 sup_- √(μ_2)/2<p_1<√(μ_1)-√(μ_2)/2 h(p_1) ∫_√(μ_2)/2^min{√(μ_2)+p_1,√(μ_1)-p_1}1/s_1^2+s_2^2-p_1^2-q_1^21/| s_1-q_1 |^1/2 q_1 ≤ 2sup_- √(μ_2)/2<p_1<√(μ_1)-√(μ_2)/2 h(p_1)/√(μ_2)∫_√(μ_2)/2^min{√(μ_2)+p_1,√(μ_1)-p_1}1/√(s_1^2+s_2^2-p_1^2)-q_11/| s_1-q_1 |^1/2 q_1, since √(s_1^2+s_2^2-p_1^2)+q_1> √(μ_1/2+μ_2/2-μ_2/4)+√(μ_2)/2≥√(μ_2). For | p_1 | >s_2 we have s_1>√(s_1^2+s_2^2-p_1^2), whereas for | p_1 |<s_2, s_1<√(s_1^2+s_2^2-p_1^2). 
For p_1<-s_2 we obtain sup_- √(μ_2)/2<p_1<-s_2 2 h(p_1)/√(μ_2)∫_√(μ_2)/2^min{√(μ_2)+p_1,√(μ_1)-p_1}1/√(s_1^2+s_2^2-p_1^2)-q_11/| s_1-q_1 |^1/2 q_1 ≤sup_- √(μ_2)/2<p_1<-s_22 | p_1+s_2 |^1/2/√(μ_2)∫_-∞^√(μ_2)+p_11/(√(s_1^2+s_2^2-p_1^2)-q_1)^3/2 q_1 =sup_- √(μ_2)/2<p_1<-s_24 | p_1+s_2 |^1/2/√(μ_2)(√(s_1^2+s_2^2-p_1^2)-√(μ_2)-p_1)^1/2 ≤sup_- √(μ_2)/2<p_1<-s_24 (√(s_1^2+s_2^2-p_1^2)+√(μ_2)+p_1)^1/2/2^1/2√(μ_2)(p_1+s_1)^1/2≤2^1/2 4 s_1^1/2/√(μ_2)μ_1^1/4≤2^1/2 4/μ_1^1/2 Similarly, for p_1>s_2 (which only occurs if 2√(μ_2)<3√(μ_1)), sup_s_2<p_1<√(μ_1)-√(μ_2)/22 h(p_1)/√(μ_2)∫_√(μ_2)/2^min{√(μ_2)+p_1,√(μ_1)-p_1}1/√(s_1^2+s_2^2-p_1^2)-q_11/| s_1-q_1 |^1/2 q_1 ≤sup_s_2<p_1<√(μ_2)/22 | p_1-s_2 |^1/2/√(μ_2)∫_-∞^√(μ_2)-p_11/(√(s_1^2+s_2^2-p_1^2)-q_1)^3/2 q_1 ≤2^1/2 4/μ_1^1/2, by (<ref>). For | p_1 |<s_2, sup_- s_2<p_1<s_22 h(p_1)/√(μ_2)∫_√(μ_2)/2^√(μ_1)-p_11/√(s_1^2+s_2^2-p_1^2)-q_11/| s_1-q_1 |^1/2 q_1 ≤sup_- s_2<p_1<s_22 || p_1|-s_2 |^1/2/√(μ_2)∫_-∞^√(μ_1)-p_11/| s_1-q_1 |^3/2 q_1 = sup_- s_2<p_1<s_24 || p_1|-s_2 |^1/2/√(μ_2)| s_2+p_1 |^1/2=4/√(μ_2) In total, we have ‖ D_μ_1,μ_2^8‖≤2^1/2 4/μ_1^1/2. Region 9: By taking the supremum separately over the two components of region 9 and using the symmetry in (p_1,q_1)→ -(p_1,q_1), we obtain ‖ D_μ_1,μ_2^9‖≤sup_-2<p_1<2 h(p_1)∫_-2^2 1/2χ_9/p_1q_1+s_1s_21/h(q_1) q_1 ≤sup_-s_2<p_1<2 h(p_1) ∫_max{√(μ_1)-p_1,√(μ_2)/2,p_1-√(μ_2)}^min{p_1+√(μ_2),2}1/p_1q_1+s_1s_21/| q_1-s_1 |^1/2 q_1 For p_1>-s_2 and max{√(μ_1)-p_1, √(μ_2)/2}<q_1<√(μ_2)+p_1 note that p_1 q_1+s_1 s_2 ≥{ p_1(√(μ_2)+p_1)+s_1 s_2 = (p_1+s_2)(p_1+s_1) if p_1≤ 0 p_1(√(μ_1)-p_1)+s_1 s_2 = (p_1+s_2)(s_1-p_1) if √(μ_1)-√(μ_2)/2≥ p_1≥ 0 p_1√(μ_2)/2+s_1s_2 if p_1≥max{√(μ_1)-√(μ_2)/2,0}} ≥√(μ_1)/2(p_1+s_2) Hence, ‖ D_μ_1,μ_2^9‖≤sup_-s_2<p_1<22/√(μ_1)(p_1+s_2)^1/2∫_√(μ_1)-p_1^p_1+√(μ_2)1/| q_1-s_1 |^1/2 q_1=8/√(μ_1) Region 10: By symmetry in p_1, we have ‖ D_μ_1,μ_2^10‖≤sup_-2<p_1<2 h(p_1)∫_-2^2 χ_10/p_1^2+q_1^2-s_1^2-s_2^21/h(q_1) q_1 = sup_s_1<p_1<2| p_1-s_1 |^1/2∫_max{√(μ_1)-p_1,-√(μ_2)/2}^min{p_1-√(μ_2),√(μ_2)/2}1/p_1^2+q_1^2-s_1^2-s_2^21/|| q_1|-s_2 |^1/2 q_1 If we mirror the part of region 10 with p_1>0,q_1<0 along q_1=0, its image contains the part of region 10 with p_1>0,q_1>0. Since the integrand is symmetric in q_1, we can thus bound ‖ D_μ_1,μ_2^10‖≤sup_s_1<p_1<2 2| p_1-s_1 |^1/2∫_max{√(μ_2)-p_1,0}^min{p_1-√(μ_1),√(μ_2)/2}1/p_1^2+q_1^2-s_1^2-s_2^21/| q_1-s_2 |^1/2 q_1 Note that for q_1≥√(μ_2)-p_1, p_1>s_1 we have p_1^2+q_1^2-s_1^2-s_2^2=(p_1-s_1)^2+(q_1-s_2)^2+2s_1 (p_1-s_1)+2s_2(q_1-s_2) ≥ 2s_1 (p_1-s_1)+2s_2(s_1-p_1) = 2√(μ_1)(p_1-s_1). Therefore, ‖ D_μ_1,μ_2^10‖≤sup_s_1<p_1<21/√(μ_1)| p_1-s_1 |^1/2∫_√(μ_2)-p_1^p_1-√(μ_1)1/| q_1-s_2 |^1/2 q_1=4/√(μ_1). Region 11: By symmetry in p_1, we obtain ‖ D_μ_1,μ_2^11‖≤sup_-2<p_1<2 h(p_1)∫_-2^2 1/2χ_11/-p_1q_1-s_1s_21/h(q_1) q_1 = sup_-μ_1-√(μ_2)/2<p_1<-√(μ_2)/21/2| p_1+s_1 |^1/2∫_max{-√(μ_1)-p_1,√(μ_2)+p_1}^√(μ_2)/21/-p_1q_1-s_1s_21/| q_1-s_2 |^1/2 q_1 For p_1<-s_1 we have -p_1q_1-s_1s_2>s_1(q_1-s_2). 
Hence, sup_-μ_1-√(μ_2)/2<p_1<-s_11/2| p_1+s_1 |^1/2∫_-√(μ_1)-p_1^√(μ_2)/21/-p_1q_1-s_1s_21/| q_1-s_2 |^1/2 q_1 ≤sup_-μ_1-√(μ_2)/2<p_1<-s_1| p_1+s_1 |^1/2/2s_1∫_-√(μ_1)-p_1^∞1/| q_1-s_2 |^3/2 q_1 =1/s_1≤1/√(μ_1) For p_1>-s_1, we carry out the integration sup_-s_1<p_1<-√(μ_2)/21/2| p_1+s_1 |^1/2∫_√(μ_2)+p_1^√(μ_2)/21/-p_1q_1-s_1s_21/| q_1-s_2 |^1/2 q_1 ≤sup_-s_1<p_1<-√(μ_2)/21/| p_1 |^1/2 s_2^1/2( s_2^1/2/| p_1 |^1/2)=2^1/2/μ_2^1/4s_2^1/2(2^1/2 s_2^1/2/μ_2^1/4) With (x)≤x/1-x, we obtain 2^1/2/μ_2^1/4s_2^1/2(2^1/2 s_2^1/2/μ_2^1/4)≤2^1/2/μ_2^1/4s_2^1/2s_2^1/2/μ_2^1/4/2^1/2-s_2^1/2 =2^1/2/μ_2^1/4μ_2^1/4/2^1/2+s_2^1/2/μ_1^1/2/2≤4/μ_1^1/2 Therefore, ‖ D_μ_1,μ_2^11‖≤4/μ_1^1/2. Region 12: By symmetry in p_1, we obtain ‖ D_μ_1,μ_2^12‖≤sup_-2<p_1<2 h(p_1)∫_-2^2 1/2χ_12/p_1q_1+s_1s_21/h(q_1) q_1 = sup_-√(μ_2)<p_1<-√(μ_1)1/2 h(p_1) ∫_0^min{p_1+√(μ_2),-√(μ_1)-p_1}1/p_1q_1+s_1s_21/| s_2-q_1 |^1/2 q_1 For p_1≥ -s_1 note that p_1 q_1+s_1 s_2 ≥ s_1(s_2-q_1) ≥√(μ_1)/2( s_2-q_2). For p_1≤ -s_1 and q_1<p_1+√(μ_2) observe that p_1 q_1+s_1 s_2 = (-p_1-s_1)(s_2-q_1)+s_1(s_2-q_1)+s_2(p_1+s_1) ≥√(μ_1)/2(s_2-q_1)+√(μ_2)/2(s_2-q_1)+s_2(q_1-√(μ_2)+s_1)=√(μ_1)/2(s_2-q_1)+√(μ_2)/2(s_2-q_1)-s_2(s_2-q_1) ≥√(μ_1)/2(s_2-q_1) Therefore, ‖ D_μ_1,μ_2^12‖≤sup_-√(μ_2)<p_1<-√(μ_1)| p_1+s_1 |^1/2/√(μ_1)∫_-∞^min{p_1+√(μ_2),-√(μ_1)-p_1}1/| s_2-q_1 |^3/2 q_1 =2/√(μ_1) Region 13: By symmetry under (p_1,q_1)→ -(p_1,q_1), we obtain ‖ D_μ_1,μ_2^13‖≤sup_-2<p_1<2 h(p_1)∫_-2^2 1/2χ_13/p_1q_1+s_1s_21/h(q_1) q_1 = sup_-√(μ_2)/2+√(μ_1)<p_1<3√(μ_2)/2 h(p_1) ∫_max{√(μ_1)-p_1,0, -√(μ_2)+p_1}^√(μ_2)/21/p_1q_1+s_1s_21/| s_2-q_1 |^1/2 q_1 For p_1>√(μ_1), q_1>0, we have p_1q_1+s_1 s_2 ≥√(μ_1)(q_1+s_2). Therefore, sup_√(μ_1)<p_1<√(μ_2)+s_2 h(p_1) ∫_max{√(μ_1)-p_1,0, -√(μ_2)+p_1}^√(μ_2)/21/p_1q_1+s_1s_21/| s_2-q_1 |^1/2 q_1 ≤sup_√(μ_1)<p_1<√(μ_2)+s_2| p_1-s_1 |^1/2/√(μ_1)∫_0^∞1/q_1+s_21/| s_2-q_1 |^1/2 q_1 =2^1/2/√(μ_1)∫_0^∞1/q_1+11/| 1-q_1 |^1/2 q_1 ≤2^1/2/√(μ_1)[ ∫_0^21/| 1-q_1 |^1/2 q_1+∫_2^∞1/| q_1-1 |^3/2 q_1 ] =2^1/26/√(μ_1) and sup_√(μ_2)+s_2<p_1<3√(μ_2)/2 h(p_1) ∫_max{√(μ_1)-p_1,0, -√(μ_2)+p_1}^√(μ_2)/21/p_1q_1+s_1s_21/| s_2-q_1 |^1/2 q_1 ≤sup_√(μ_2)+s_2<p_1<3√(μ_2)/2| p_1-s_1 |^1/2/√(μ_1)∫_-√(μ_2)+p_1^∞1/q_1+s_21/| q_1-s_2 |^1/2 q_1 =sup_2s_2<x<μ_2-√(μ_1)/2| x |^1/2/√(μ_1)∫_x^∞1/y1/| y-2s_2 |^1/2 y =sup_2s_2<x<μ_2-√(μ_1)/21/√(μ_1)∫_1^∞1/y1/| y-2s_2/x|^1/2 y =1/√(μ_1)∫_1^∞1/y1/| y-1 |^1/2 y≤1/√(μ_1)[∫_1^21/| y-1 |^1/2 y+ ∫_2^∞1/| y-1 |^3/2 y]=4/√(μ_1), where we substituted x=p_1-s_1 and y=q_1+s_2. Next, we consider the case p_1<√(μ_1)/2. For √(μ_2)/2≥ q_1 ≥√(μ_1)-p_1 and -s_2<p_1<√(μ_1)/2 we have p_1 q_1+s_1 s_2 ≥{√(μ_1)/2(p_1+s_2) if p_1>0 (s_1-q_1)(p_1+s_2)-p_1(s_1-q_1)+q_1(p_1+s_2) if p_1<0 . ≥{√(μ_1)/2(p_1+s_2) if p_1>0 (s_1-q_1)(p_1+s_2) if p_1<0 . ≥√(μ_1)/2(p_1+s_2) Therefore, sup_-√(μ_2)/2+√(μ_1)<p_1<√(μ_1)/2 h(p_1) ∫_max{√(μ_1)-p_1,0, -√(μ_2)+p_1}^√(μ_2)/21/p_1q_1+s_1s_21/| s_2-q_1 |^1/2 q_1 ≤sup_-√(μ_2)/2+√(μ_1)<p_1<√(μ_1)/22 h(p_1)/√(μ_1)(p_1+s_2)∫_√(μ_1)-p_1^√(μ_2)/21/| s_2-q_1 |^1/2 q_1 ≤sup_-√(μ_2)/2+√(μ_1)<p_1<√(μ_1)/24/√(μ_1)(p_1+s_2)^1/2{(√(μ_1)/2)^1/2 if √(μ_1)-p_1>s_2 (√(μ_1)/2)^1/2+ (s_2-√(μ_1)+p_1)^1/2 if √(μ_1)-p_1<s_2. Note that sup_-√(μ_2)/2+√(μ_1)<p_1<√(μ_1)/2 (p_1+s_2)^-1/2(√(μ_1)/2)^1/2 =1 and that for p_1>√(μ_1)-s_2 we have |s_2-√(μ_1)+p_1/p_1+s_2|≤ 1. One can hence bound (<ref>) above by 8/√(μ_1). For q_1 ≥ 0 and p_1>√(μ_1)/2 we have p_1 q_1+s_1 s_2 ≥√(μ_1)/2(q_1+s_2). 
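To verify the last inequality: p_1q_1 ≥ (√(μ_1)/2) q_1 because q_1≥ 0 and p_1>√(μ_1)/2, and s_1s_2 ≥ (√(μ_1)/2) s_2 because s_1=(√(μ_1)+√(μ_2))/2 ≥√(μ_1); adding the two bounds gives the claim.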
Therefore, sup_√(μ_1)/2<p_1<√(μ_1) h(p_1) ∫_max{√(μ_1)-p_1,0, -√(μ_2)+p_1}^√(μ_2)/21/p_1q_1+s_1s_21/| s_2-q_1 |^1/2 q_1 ≤sup_√(μ_1)/2<p_1<√(μ_1)2h(p_1)/√(μ_1)∫_√(μ_1)-p_1^∞1/(q_1+s_2)| s_2-q_1 |^1/2 q_1 =sup_√(μ_1)/2<p_1<√(μ_1)4h(p_1)/√(μ_1)√(2s_2){(√(s_2-√(μ_1)+p_1/2 s_2))+π if s_2>√(μ_1)-p_1 arctan(√(2 s_2/√(μ_1)-p_1-s_2)) if s_2<√(μ_1)-p_1. We estimate the two cases separately: sup_√(μ_1)-s_2<p_1<√(μ_1)4h(p_1)/√(μ_1)√(2s_2)[(√(s_2-√(μ_1)+p_1/2 s_2))+π] ≤4 | s_1-√(μ_1)+s_2 |^1/2/√(μ_1)√(2s_2)[(1/√(2))+π]=4(1/√(2))+π/√(μ_1) and sup_√(μ_1)/2<p_1<√(μ_1)-s_2 4h(p_1)/√(μ_1)√(2s_2)arctan(√(2 s_2/√(μ_1)-p_1-s_2)) ≤4/√(μ_1)sup_√(μ_1)/2<p_1<√(μ_1)-s_2[ | s_1-p_1 |^1/2-|√(μ_1)-p_1-s_2 |^1/2/√(2s_2)π/2 + |√(μ_1)-p_1-s_2 |^1/2/√(2s_2)arctan(√(2 s_2/√(μ_1)-p_1-s_2))] ≤4/√(μ_1)sup_√(μ_1)/2<p_1<√(μ_1)-s_2√(2s_2)/| s_1-p_1 |^1/2+|√(μ_1)-p_1-s_2 |^1/2π/2 +1 ≤4 (π/2+1)/√(μ_1) In total, we obtain ‖ D_μ_1,μ_2^13‖≤max{2^1/2 6/μ_1^1/2,4/μ_1^1/2,8/μ_1^1/2,4((1/√(2))+π)/μ_1^1/2,4 (π/2+1)/μ_1^1/2} = 4((1/√(2))+π)/μ_1^1/2 Region 14: By symmetry in p_1, we have ‖ D_μ_1,μ_2^14‖ ≤sup_-2<p_1<2 h(p_1)∫_-2^2 χ_14/s_1^2+s_2^2-p_1^2-q_1^21/h(q_1) q_1 = sup_0<p_1<s_1h(p_1) ∫_max{-√(μ_1)-p_1,-√(μ_2)/2,-√(μ_2)+p_1}^min{√(μ_1)-p_1,√(μ_2)/2}1/s_1^2+s_2^2-p_1^2-q_1^21/|| q_1|-s_2 |^1/2 q_1 ≤sup_0<p_1<s_12 h(p_1) ∫_max{0,p_1-√(μ_1)}^min{√(μ_1)+p_1,√(μ_2)/2,√(μ_2)-p_1}1/s_1^2+s_2^2-p_1^2-q_1^21/|| q_1|-s_2 |^1/2 q_1, where in the last inequality we increased the domain to be symmetric in q_1 and used the symmetry of the integrand. For p_1 ≤ s_2 and √(μ_1)+p_1>q_1 we have s_1^2+s_2^2-p_1^2-q_1^2 ≥ s_1^2+s_2^2-p_1^2-(√(μ_1)+p_1)^2=2 (s_2-p_1)(p_1+s_1). Hence, sup_0<p_1<√(μ_2)/2-√(μ_1)2 h(p_1) ∫_0^√(μ_1)+p_11/s_1^2+s_2^2-p_1^2-q_1^21/|| q_1|-s_2 |^1/2 q_1 ≤sup_0<p_1<√(μ_2)/2-√(μ_1)1/(s_2-p_1)^1/2(p_1+s_1)∫_0^√(μ_1)+p_11/|| q_1|-s_2 |^1/2 q_1 =sup_0<p_1<√(μ_2)/2-√(μ_1)2(s_2^1/2+(p_1+√(μ_1)-s_2)^1/2)/(s_2-p_1)^1/2(p_1+s_1)≤2(s_2^1/2+(√(μ_1)/2)^1/2)/(√(μ_1)/2)^1/2s_1≤4(√(μ_2)/2)^1/2/(√(μ_1)/2)^1/2√(μ_2)/2≤8/√(μ_1) Similarly, for p_1 ≥ s_2 and √(μ_2)-p_1>q_1 we have s_1^2+s_2^2-p_1^2-q_1^2 ≥ s_1^2+s_2^2-p_1^2-(√(μ_2)-p_1)^2=2 (s_1-p_1)(p_1-s_2). Therefore, sup_√(μ_2)/2<p_1<s_12 h(p_1) ∫_p_1-√(μ_1)^√(μ_2)-p_11/s_1^2+s_2^2-p_1^2-q_1^21/| q_1-s_2 |^1/2 q_1 ≤sup_√(μ_2)/2<p_1<s_11/(s_1-p_1)^1/2(p_1-s_2)∫_p_1-√(μ_1)^√(μ_2)-p_11/| q_1-s_2 |^1/2 q_1 =sup_√(μ_2)/2<p_1<s_14/p_1-s_2 =8/√(μ_1). For √(μ_2)/2-√(μ_1)≤ p_1≤√(μ_2)/2 and q_1<√(μ_2)/2, we have s_1^2+s_2^2-p_1^2-q_1^2≥μ_1/2. Thus, sup_√(μ_2)/2-√(μ_1)< p_1<√(μ_2)/22 h(p_1) ∫_max{0,p_1-√(μ_1)}^√(μ_2)/21/s_1^2+s_2^2-p_1^2-q_1^21/| q_1-s_2 |^1/2 q_1 ≤4/μ_1(√(μ_1)/2)^1/2∫_√(μ_2)/2-2√(μ_1)^√(μ_2)/21/| q_1-s_2 |^1/2 q_1 =8 ((3√(μ_1)/2)^1/2+(√(μ_1)/2)^1/2)/2^1/2μ_1^3/4 ≤4/μ_1^1/2(√(3)+1) In total, we have ‖ D_μ_1,μ_2^14‖≤4/μ_1^1/2(√(3)+1) §.§ Proof of Lemma <ref> The integral in (<ref>) is invariant under rotations of q̃. Therefore, it suffices to take the supremum over q̃=q_2≥ 0 for d=2 and q̃ =(q_2,0) with q_2≥ 0 for d=3. Furthermore, it suffices to restrict to p_2≥ 0 since the integrand is invariant under p̃→ -p̃. Note that under these conditions μ_1 ≤μ_2. We split the domain of integration in (<ref>) into two regions according to μ_1=min{μ_1,μ_2}≶ 0. Dimension three: We first consider the case μ_1<0, i.e. | p_2+q_2|^2>1-p_3^2. 
In this case, sup_q̃=(q_2,0), q_2≥ 0∫_^2χ_|p̃|<2 χ_p_2≥ 0χ_min{μ_1,μ_2}<0/(-min{μ_1,μ_2})^αp̃ =sup_q_2≥ 0[∫_-1^1 p_3 ∫_max{√(1-p_3^2)-q_2,0}^√(4-p_3^2)1/((p_2+q_2)^2+p_3^2-1)^α p_2 +∫_1<|p_3|<2 p_3 ∫_0^√(4-p_3^2)1/((p_2+q_2)^2+p_3^2-1)^α p_2] Let q_2 and |p_3|≤ 1 be fixed. By substituting x=p_2+q_2-√(1-p_3^2) if q_2≤√(1-p_3^2) one obtains ∫_max{√(1-p_3^2)-q_2,0}^21/((p_2+q_2)^2+p_3^2-1)^α p_2 ≤∫_0^2χ_q_2≤√(1-p_3^2)/(x+√(1-p_3^2))^2+p_3^2-1)^α x +∫_0^2χ_q_2>√(1-p_3^2)/((p_2+√(1-p_3^2))^2+p_3^2-1)^α p_2 ≤∫_0^21/(2p_2√(1-p_3^2))^α p_2 ≤C/(1-p_3^2)^α/2 for some finite constant C. Since ∫_-1^1(1-p_3^2)^-α/2 p_3<∞, the first term in (<ref>) is bounded. The second term is bounded by ∫_1<|p_3|<2 p_3 ∫_0^21/(p_3^2-1)^α p_2<∞. For the case μ_1>0 we have | p_2+q_2|^2<1-p_3^2. Hence, sup_q̃=(q_2,0), q_2≥ 0∫_^2χ_|p̃|<2 χ_p_2≥ 0χ_0<min{μ_1,μ_2}/min{μ_1,μ_2}^αp̃ =sup_q_2≥ 0∫_-1^1 p_3 χ_q_2≤√(1-p_3^2)∫_0^√(1-p_3^2)-q_21/(1-(p_2+q_2)^2-p_3^2)^α p_2 For fixed |p_3|<1 and q_2≤√(1-p_3^2) substituting x=√(1-p_3^2)-q_2-p_2 gives ∫_0^√(1-p_3^2)-q_21/(1-(p_2+q_2)^2-p_3^2)^α p_2 =∫_0^√(1-p_3^2)-q_21/(1-(√(1-p_3^2)-x)^2-p_3^2)^α x =∫_0^√(1-p_3^2)-q_21/x^α (2√(1-p_3^2)-x)^α x. Thus the expression in (<ref>) is bounded by sup_q_2≥ 0∫_-1^1 p_3 χ_q_2≤√(1-p_3^2)∫_0^√(1-p_3^2)-q_21/x^α (√(1-p_3^2)+q_2)^α x ≤∫_-1^1 p_3 ∫_0^11/x^α (√(1-p_3^2))^α x<∞ Dimension two: For the case μ_1<0 we have | p_2+q_2|>1. Hence, sup_q_2≥ 0∫_0^2χ_min{μ_1,μ_2}<0/(-min{μ_1,μ_2})^α p_2=sup_q_2≥ 0∫_max{1-q_2,0}^21/((p_2+q_2)^2-1)^α p_2. This is finite according to (<ref>). For the case μ_1>0, sup_q_2≥ 0∫_0^2χ_0<min{μ_1,μ_2}/min{μ_1,μ_2}^α p_2=sup_0 ≤ q_2≤ 1∫_0^1-q_21/(1-(p_2+q_2)^2)^α p_2 =∫_0^11/x^α(2-x)^α x <∞, where we used (<ref>) in the second equality. §.§ Proof of Lemma <ref> The proof follows from elementary computations. We carry out the case d=2 and leave the case d=3, where one additional integration over q_3 needs to be performed, to the reader. By symmetry, we may restrict to p_1,q_1,p_2≥ 0. Furthermore, we will partition the remaining domain of p_2,q_2 into nine subdomains. Let χ_j be the characteristic function of domain j. Since (a+b)^2≤ 2(a^2+b^2), there is a constant C such that the expression in (<ref>) is bounded above by C ∑_j=1^9 lim_→0 I_j, where I_j= sup_0≤ p_2<∫_^2χ_0<p_1,q_1<[∫_-√(2)^√(2)2χ_j/| (p+q)^2-1 |+ | (p-q)^2-1 | q_2 ]^2 p_1 q_1. Hence, we can consider the domains case by case and prove that lim_→0 I_j=0 for each of them. We use the notation μ_1=1-(p_1+q_1)^2 and μ_2=1-(p_1-q_1)^2. (Note that this differs from the notation in Lemma <ref>). Since p_1,q_1≥ 0 we have μ_2≥μ_1. We assume that <1/4, and thus for p_1,q_1< we have μ_1,μ_2>1-4^2>3/4. For fixed 0<p_1,q_1<, we choose the subdomains for p_2,q_2 as sketched in Figure <ref>. The subdomains are chosen according to the signs of (p_2+q_2)^2-μ_1 and (p_2-q_2)^2-μ_2, and to distinguish which of -√(μ_1)-p_2,-√(μ_2)+p_2 is larger. We start with domains 1 to 4, where (p+q)^2-1=(p_2+q_2)^2-μ_1>0 and (p-q)^2-1=(p_2-q_2)^2-μ_2>0. Note that in domain 4, p_2≥√(μ_1)+√(μ_2)/2≥√(1-4^2), which is larger than . Hence χ_4=0 for p_2<√(1-4^2), giving I_4=0. For domains 2 and 3, we have I_2 =sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2<√(μ_2)-√(μ_1)[∫_a_2^21/p_1^2+q_1^2+p_2^2+q_2^2-1 q_2 ]^2 p_1 q_1, I_3 =sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2>√(μ_2)-√(μ_1)[∫_a_3^21/p_1^2+q_1^2+p_2^2+q_2^2-1 q_2 ]^2 p_1 q_1, where a_2=√(μ_2)-p_2 and a_3=√(μ_1)+p_2. 
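Here we used that on domains 2 and 3 both (p_2+q_2)^2-μ_1 and (p_2-q_2)^2-μ_2 are positive, so that | (p+q)^2-1 |+| (p-q)^2-1 | = (p_2+q_2)^2-μ_1+(p_2-q_2)^2-μ_2 = 2(p_1^2+q_1^2+p_2^2+q_2^2-1); after cancelling the factor 2 in the numerator this produces the integrands in I_2 and I_3.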
Since 0≤ p_1,q_1,p_2< we have 1-p_1^2-q_1^2-p_2^2>3/4 and thus ∫_a_j^21/p_1^2+q_1^2+p_2^2+q_2^2-1 = √(1-p_1^2-q_1^2-p_2^2)/a_j-√(1-p_1^2-q_1^2-p_2^2)/2/√(1-p_1^2-q_1^2-p_2^2) ≤ C √(1-p_1^2-q_1^2-p_2^2)/a_j. Since (x/y)=ln((y+x)^2/(y^2-x^2))/2 and √(1-p_1^2-q_1^2-p_2^2)+a_j≤ 3 we get I_2≤C/2sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2<√(μ_2)-√(μ_1)[ ln9/2(p_1 q_1-p_2(√(μ_2)-p_2))]^2 p_1 q_1 and I_3≤C/2sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2>√(μ_2)-√(μ_1)[ ln9/2(p_1 q_1+p_2(√(μ_1)+p_2))]^2 p_1 q_1. For domain 2, we substitute z=p_1+q_1 and r=p_1-q_1 and obtain the bound I_2≤C/4sup_0≤ p_2<∫_^2χ_| r| <z<2χ_2p_2<√(μ_2)-√(μ_1)[ ln18/z^2-r^2-4p_2(√(1-r^2)-p_2)]^2 r z The condition 2p_2<√(μ_2)-√(μ_1) implies that x:=z^2-r^2-4p_2(√(1-r^2)-p_2)≥ 0. Substituting z by x gives I_2≤C/4sup_0≤ p_2<∫_-2^2 r ∫_0^^2 x [ ln18/x]^2 1/2√(x+r^2+4p_2(√(1-r^2)-p_2)) ≤C/2∫_0^^2[ ln18/x]^2 1/√(x) x This vanishes as → 0. For domain 3 we bound (<ref>) by I_3≤ C ∫_^2χ_0<p_1,q_1<[ ln9/2 p_1 q_1]^2 p_1 q_1, which vanishes in the limit → 0. For domain 1 note that since √(μ_2)+p_2≥ a_2,a_3, we have I_1≤ I_2+I_3. Now consider domain 5, where (p+q)^2-1=(p_2+q_2)^2-μ_1>0 and (p-q)^2-1=(p_2-q_2)^2-μ_2<0. We have I_5= sup_0≤ p_2<∫_^2χ_0<p_1,q_1<[∫_√(μ_1)-p_2^√(μ_2)+p_21/2(p_1q_1+p_2 q_2) q_2 ]^2 p_1 q_1. Integration over q_2 gives ∫_√(μ_1)-p_2^√(μ_2)+p_21/2(p_1q_1+p_2 q_2) q_2 =1/2p_2ln(1+ (√(μ_2)-√(μ_1))p_2+2p_2^2/p_1 q_1+(√(μ_1)-p_2)p_2). Note that √(μ_2)-√(μ_1)= 4p_1q_1/(√(μ_2)+√(μ_1))≤ 2 p_1 q_1 /√(1-4^2) and √(μ_1)-p_2≥√(1-4^2)-. We can therefore bound the previous expression from above by 1/2p_2ln(1+ 2 p_2/√(1-4^2)+2p_2/√(1-4^2)-) ≤1 /√(1-4^2)+1/√(1-4^2)-<C, where we used that ln(1+ x)/x ≤ 1 for x≥ 0. Therefore I_5≤ C^2 ^2 vanishes as →0. For region 6 we have I_6= sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2≤√(μ_2)-√(μ_1)[∫_-√(μ_2)+p_2^-√(μ_1)-p_21/2(p_1 q_1+p_2 q_2) q_2 ]^2 p_1 q_1. Integration over q_2 gives ∫_-√(μ_2)+p_2^-√(μ_1)-p_21/2(p_1q_1+p_2 q_2) q_2 =1/2p_2ln(1+ p_2(√(μ_2)-√(μ_1)-2p_2)/p_1 q_1-(√(μ_2)-p_2)p_2). One can compute that / p_2√(μ_2)-√(μ_1)-2p_2/p_1 q_1-(√(μ_2)-p_2)p_2=8/(√(μ_2)+√(μ_1)-2p_2)^2>0. Thus, for χ_2p_2≤√(μ_2)-√(μ_1) we have √(μ_2)-√(μ_1)-2p_2/p_1 q_1-(√(μ_2)-p_2)p_2≤lim_p_2→ (√(μ_2)-√(μ_1))/2√(μ_2)-√(μ_1)-2p_2/p_1 q_1-(√(μ_2)-p_2)p_2=2/√(μ_1). The expression in (<ref>) is thus bounded above by 1/2p_2ln(1+ p_2 2/√(μ_1))≤1/√(μ_1)≤1/√(1-4^2), which is bounded as →0. In total, we have I_6 ≤ C ^2, which vanishes in the limit →0. For region 7, I_7= sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2≥√(μ_2)-√(μ_1)[∫_-√(μ_1)-p_2^-√(μ_2)+p_21/-2(p_1 q_1+p_2 q_2) q_2 ]^2 p_1 q_1. Integration over q_2 gives ∫_-√(μ_1)-p_2^-√(μ_2)+p_21/-2(p_1 q_1+p_2 q_2) q_2 =1/2p_2ln(1+ p_2(√(μ_2)-√(μ_1)-2p_2)/p_1 q_1-(√(μ_2)-p_2)p_2). According to (<ref>), for (√(μ_2)-√(μ_1))/2≤ p_2< this is bounded by 1/2p_2ln(1+ p_2(√(μ_2)-√(μ_1)-2)/p_1 q_1-(√(μ_2)-))≤1/22-(√(μ_2)-√(μ_1))/(√(μ_2)-)-p_1 q_1. For p_1,q_1< this can be further estimated by 1/22/(√(μ_2)-)-^2≤1/√(1-4^2)-2, which is bounded for → 0. Hence, I_7 ≤ C ^2 vanishes for →0. For domains 8 and 9, we have I_8 =sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2<√(μ_2)-√(μ_1)[∫_-√(μ_1)-p_2^√(μ_1)-p_21/1-p_1^2-q_1^2-p_2^2-q_2^2 q_2 ]^2 p_1 q_1, I_9 =sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2>√(μ_2)-√(μ_1)[∫_-√(μ_2)+p_2^√(μ_1)-p_21/1-p_1^2-q_1^2-p_2^2-q_2^2 q_2 ]^2 p_1 q_1. 
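Similarly to domains 2 and 3, on domains 8 and 9 both (p_2+q_2)^2-μ_1 and (p_2-q_2)^2-μ_2 are negative, so | (p+q)^2-1 |+| (p-q)^2-1 | = 2(1-p_1^2-q_1^2-p_2^2-q_2^2), which explains the integrands in I_8 and I_9.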
We bound ∫_-√(μ_1)-p_2^√(μ_1)-p_21/1-p_1^2-q_1^2-p_2^2-q_2^2 q_2≤ 2∫_0^√(μ_1)+p_21/1-p_1^2-q_1^2-p_2^2-q_2^2 q_2 =1/√(1-p_1^2-q_1^2-p_2^2)ln( √(1-p_1^2-q_1^2-p_2^2)+√(μ_1)+p_2/√(1-p_1^2-q_1^2-p_2^2)-√(μ_1)-p_2) =1/√(1-p_1^2-q_1^2-p_2^2)ln( (√(1-p_1^2-q_1^2-p_2^2)+√(μ_1)+p_2)^2/2(p_1 q_1-p_2(√(μ_1)+p_2))) ≤1/√(1-3^2)ln( 9/2(p_1 q_1-p_2(√(μ_1)+p_2))) Substituting z=p_1+q_1 and r=p_1-q_1 we obtain I_8≤ C sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2<√(μ_2)-√(μ_1)ln( 9/2(p_1 q_1-p_2(√(μ_1)+p_2)))^2 p_1 q_1 ≤C/2sup_0≤ p_2<∫_0^2 z ∫_-^ r χ_|r|<zχ_2p_2<√(1-r^2)-√(1-z^2)ln( 18/z^2-r^2-4p_2(√(1-z^2)+p_2))^2. Substituting r by x=z^2-r^2-4p_2(√(1-z^2)+p_2) and using Hölder's inequality we obtain I_8≤C/4sup_0≤ p_2<∫_4p_2-4p_2^2^2 z ∫_0^z^2-4p_2(p_2+√(1-z^2))ln( 18/x)^2 1/√(z^2-4p_2(√(1-z^2)+p_2)-x) x ≤C/4sup_0≤ p_2<∫_4p_2-4p_2^2^2 z [ (∫_0^z^2-4p_2(p_2+√(1-z^2))ln( 18/x)^6 x )^1/3× (∫_0^z^2-4p_2(p_2+√(1-z^2))1/(z^2-4p_2(√(1-z^2)+p_2)-x)^3/4 x)^2/3]. In the last line we substitute y=z^2-4p_2(√(1-z^2)+p_2)-x, and then we use z^2-4p_2(√(1-z^2)+p_2)-x≤ 4^2 to arrive at the bound I_8≤C/4sup_0≤ p_2<∫_4p_2-4p_2^2^2 z [ (∫_0^4^2ln( 18/x)^6 x )^1/3 (∫_0^4^21/y^3/4 y)^2/3] ≤C/2 (∫_0^4^2ln( 18/x)^6 x )^1/3 (∫_0^4^21/y^3/4 y)^2/3 , which vanishes as → 0. For I_9 we bound (analogously to (<ref>)) ∫_-√(μ_2)+p_2^√(μ_1)-p_21/1-p_1^2-q_1^2-p_2^2-q_2^2 q_2 ≤ 2 ∫_0^√(μ_2)-p_21/1-p_1^2-q_1^2-p_2^2-q_2^2 q_2 =1/√(1-p_1^2-q_1^2-p_2^2)ln( (√(1-p_1^2-q_1^2-p_2^2)+√(μ_2)-p_2)^2/2(p_2(√(μ_2)-p_2)-p_1 q_1)) ≤1/√(1-3^2)ln( 4/2(p_2(√(μ_2)-p_2)-p_1 q_1)) Substituting z=p_1+q_1 and r=p_1-q_1 we obtain I_9≤ C sup_0≤ p_2<∫_^2χ_0<p_1,q_1<χ_2p_2>√(μ_2)-√(μ_1)ln( 4/2(p_2(√(μ_2)-p_2)-p_1 q_1))^2 p_1 q_1 ≤C/2sup_0≤ p_2<∫_-^ r ∫_0^2 z χ_|r|<zχ_2p_2>√(1-r^2)-√(1-z^2)ln( 8/4p_2(√(1-r^2)-p_2) -z^2+r^2)^2. Substituting z by x=4p_2(√(1-r^2)-p_2) -z^2+r^2 and using Hölder's inequality we obtain I_9≤C/4sup_0≤ p_2<∫_-^ r ∫_0^4p_2(√(1-r^2)-p_2)+r^2ln( 8/x)^2 1/√(4p_2(√(1-r^2)-p_2) +r^2-x) x ≤C/4sup_0≤ p_2<∫_-^ r (∫_0^4p_2(√(1-r^2)-p_2)+r^2ln( 8/x)^6 x)^1/3× (∫_0^4p_2(√(1-r^2)-p_2)+r^21/(4p_2(√(1-r^2)-p_2) +r^2-x)^3/4 x)^2/3 ≤C/2(∫_0^4+^2ln( 8/x)^6 x)^1/3(∫_0^4+^21/y^3/4 y)^2/3, which vanishes for →0. §.§ Proof of Lemma <ref> To prove Lemma <ref> we show that the following expressions are finite. * sup_T>μ/2sup_ q ∈^d‖ V^1/2 B_T(· , q) |V |^1/2‖ * sup_T sup_ q ∈^d‖ V^1/2 B_T(· , q) χ_|·|^2>3μ |V |^1/2‖ * sup_T sup_q∈^d‖ V^1/2 B_T(·, q) χ_((·+q)^2-μ)((·-q)^2-μ)<0 |V |^1/2‖ * sup_T sup_| q | >√(μ)/2‖ V^1/2 B_T(·, q) χ_p^2<3μχ_((·+q)^2-μ)((·-q)^2-μ)>0 |V |^1/2‖ * sup_T sup_| q | <√(μ)/2‖ V^1/2[B_T(·, q) χ_|·|^2<3μχ_((·+q)^2-μ)((·-q)^2-μ)>0-Q_T(q) ] |V |^1/2‖ In combination, they prove (<ref>). For (<ref>), note that ‖ V^1/2 B_T(·, q) |V |^1/2‖ =‖ |V|^1/2 B_T(·, q) |V |^1/2‖ and by Lemma <ref> this is maximal for q=0, i.e. sup_T>μ/2sup_ q ∈^d‖ V^1/2 B_T(·, q) |V |^1/2‖ = sup_T>μ/2‖ |V|^1/2 B_T(·, 0) |V |^1/2‖. By Lemma <ref>, there is a constant C depending only on μ and V such that sup_T>μ/2‖ |V|^1/2 B_T(·, 0) |V|^1/2‖≤sup_T>μ/2 e_μ(|V|) m_μ(T) + C<∞, where e_μ(|V|)= supσ( |V|^1/2^† |V |^1/2). Part (<ref>) follows using (<ref>) and that ‖ |V|^1/21/1-Δ |V |^1/2‖ is bounded <cit.>. For (<ref>), it suffices to prove that Y= sup_Tsup_q∈^d∫_^d B_T(p, q)χ_((p+q)^2-μ)((p-q)^2-μ)<0 p <∞ since (<ref>) is bounded by ‖ V ‖_1 Y. The integrand is invariant under rotation of (p,q)→ (R p, Rq) around the origin. Hence, the integral only depends on the absolute value of q and we may take the supremum over q of the form q=(| q |,0) only. 
For p,(q_1,0) satisfying ((p+(q_1,0))^2-μ)((p-( q_1,0))^2-μ)<0, we can estimate by <cit.> B_T(p,(q_1,0))≤2/Texp(-1/Tmin{(| p_1| + | q_1 |)^2+p̃^2-μ, μ-(| p_1| - | q_1 |)^2-p̃^2 }) Note that (| p_1| + | q_1 |)^2+p̃^2-μ<μ-(| p_1| - | q_1 |)^2-p̃^2 ↔ p^2+q_1^2<μ. We can therefore further estimate B_T(p,(q_1,0)) χ_(| p_1| + | q_1 |)^2+p̃^2>μ>(| p_1| - | q_1 |)^2+p̃^2 ≤2/Texp(-1/T((| p_1| + | q_1 |)^2+p̃^2-μ)) χ_(| p_1| + | q_1 |)^2+p̃^2>μχ_p^2+q_1^2<μ +2/Texp(-1/T(μ-(| p_1| - | q_1 |)^2-p̃^2)) χ_μ>(| p_1| - | q_1 |)^2+p̃^2 We now integrate the bound over p and use the symmetry in p_1 to restrict to p_1>0, replace | p_1| by p_1 and then extend the domain to p_1∈. We obtain Y ≤sup_Tsup_q_1∈4/T[∫_^dexp(-1/T(( p_1 + | q_1 |)^2+p̃^2-μ)) χ_( p_1 + | q_1 |)^2+p̃^2>μχ_p^2+q_1^2<μ p . +. ∫_^dexp(-1/T(μ-( p_1 - | q_1 |)^2-p̃^2)) χ_μ>( p_1 - | q_1 |)^2+p̃^2 p]. Now we substitute p_1 ±| q_1 | by p_1 and obtain Y ≤sup_Tsup_| q_1|<√(μ)4/T∫_^dexp(-1/T(p_1^2+p̃^2-μ)) χ_p_1^2+p̃^2>μχ_(p_1-| q_1 |)^2+p̃^2+q_1^2<μ p +sup_T4/T∫_^dexp(-1/T(μ- p_1^2-p̃^2)) χ_μ>p_1^2+p̃^2 p ≤sup_T4 |^d-1| (2√(μ))^d-1e^μ/T/T∫_√(μ)^∞ e^-r^2/T r+sup_T4 |^d-1|√(μ)^d-1e^-μ/T/T∫_0^√(μ) e^r^2/T r, where we used that (p_1-| q_1 |)^2+p̃^2+q_1^2<μ⇒ p^2<2μ. Note that √(μ)e^μ/T/T∫_√(μ)^∞ e^-r^2/T r= π^1/2/2√(μ/T)e^μ/Terfc(√(μ/T)) and √(μ)e^-μ/T/T∫_0^√(μ) e^r^2/T r= π^1/2/2√(μ/T)e^-μ/Terfi(√(μ/T)) As in the proof of <cit.>, we conclude that Y<∞ since the functions xe^x^2erfc(x) and xe^-x^2erfi(x) are bounded for x≥0. For (<ref>), it again suffices to prove that X=sup_T sup_| q |> √(μ)/2∫_^d B_T(p,q) χ_p^2<3μχ_((p+q)^2-μ)((p-q)^2-μ)>0 p <∞, since (<ref>) is bounded by ‖ V ‖_1 X. Again we can restrict to q of the form q=(| q |,0). The idea is to split the integrand in X into four terms localized in different regions. The integrand is supported on the intersection and the complement of the two disks/balls with radius √(μ) centered at (± q_1,0). (For d=2 this is the white region in Figure <ref>). * The first term covers the domain with |p̃| > √(μ) outside the disks/balls: X_1=sup_T sup_| q_1 |> √(μ)/2∫_^d B_T(p,(q_1,0)) χ_p^2<3μχ_p̃^2 >μ p * The second term covers the remaining domain with | p_1 | > | q_1 | outside of the two disks/balls: X_2=sup_T sup_| q_1 |> √(μ)/2∫_p̃^2 <μp̃∫_| p_1 |>√(μ-p̃^2)+| q_1 | p_1 B_T(p,(q_1,0)) χ_p^2<3μ * The third term covers the remaining domain with | p_1 | < | q_1 | outside of the two disks/balls: X_3=sup_T sup_| q_1 |> √(μ)/2∫_μ-q_1^2<p̃^2 <μp̃∫_| p_1 |<-√(μ-p̃^2)+| q_1 | p_1 B_T(p,(q_1,0)) χ_p^2<3μ * The fourth term covers the domain in the intersection of the two disks/balls: X_4=sup_T sup_| q_1 |> √(μ)/2∫_p̃^2<μ-q_1^2p̃∫_| p_1 |<√(μ-p̃^2)-| q_1 | p_1 B_T(p,(q_1,0)) χ_p^2<3μ We prove that each X_j is finite. It then follows that X ≤ X_1+X_2+X_3+X_4 is finite. We use the bounds B_T(p,(q_1,0))≤{1/p^2+q_1^2-μ if (| p_1| - | q_1 |)^2+p̃^2 >μ, 1/μ- p^2-q_1^2 if (| p_1| +| q_1 |)^2+p̃^2<μ, . which follow from (<ref>). The first line applies to X_1,X_2, X_3, the second line to X_4. For X_1, we have p^2+q_1^2-μ>q_1^2>μ/4 and thus X_1<∞. Similarly, for X_2, we have p^2+q_1^2-μ= (√( q_1^2+p_1^2) +√(μ-p̃^2))(√( q_1^2+p_1^2) -√(μ-p̃^2)) ≥| q_1|(| p_1| -√(μ-p̃^2))≥ q_1^2≥μ/4 and thus X_2<∞. For X_3, we have p^2+q_1^2-μ≥| q_1|(| q_1| -√(μ-p̃^2))≥√(μ)/2(| q_1| -√(μ-p̃^2)). Hence, X_3 ≤sup_| q_1 |> √(μ)/24/√(μ)∫_μ-q_1^2<p̃^2 <μp̃ <∞. For X_4 we have μ-p^2-q_1^2≥μ-(√(μ-p̃^2)-| q_1 |)^2- p̃^2-q_1^2 =2 | q_1 |√(μ-p̃^2)≥√(μ)√(μ-p̃^2). Thus, X_4≤sup_| q_1 |> √(μ)/22/√(μ)∫_p̃^2 <μ-q_1^2√(μ-p̃^2)-| q_1 |/√(μ-p̃^2)p̃<∞. 
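As a quick numerical sanity check (not part of the proof), the two Gaussian-integral identities above and the boundedness of the functions x e^x^2 erfc(x) and x e^-x^2 erfi(x) can be verified with a few lines of Python. The sketch below uses the scaled functions erfcx(y) = e^y^2 erfc(y) and the Dawson function dawsn(y) = (√π/2) e^-y^2 erfi(y) from scipy.special to avoid overflow; the sample values of μ and T are arbitrary and not taken from the paper.

import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx, dawsn   # erfcx(y) = e^{y^2} erfc(y); dawsn(y) = (sqrt(pi)/2) e^{-y^2} erfi(y)

mu, T = 1.7, 0.9                         # arbitrary sample values
y = np.sqrt(mu / T)

# first identity: sqrt(mu) e^{mu/T}/T * int_{sqrt(mu)}^inf e^{-r^2/T} dr = (sqrt(pi)/2) y e^{mu/T} erfc(y)
lhs1 = np.sqrt(mu) * np.exp(mu / T) / T * quad(lambda r: np.exp(-r**2 / T), np.sqrt(mu), np.inf)[0]
rhs1 = 0.5 * np.sqrt(np.pi) * y * erfcx(y)
# second identity: sqrt(mu) e^{-mu/T}/T * int_0^{sqrt(mu)} e^{r^2/T} dr = (sqrt(pi)/2) y e^{-mu/T} erfi(y)
lhs2 = np.sqrt(mu) * np.exp(-mu / T) / T * quad(lambda r: np.exp(r**2 / T), 0.0, np.sqrt(mu))[0]
rhs2 = y * dawsn(y)
assert np.isclose(lhs1, rhs1) and np.isclose(lhs2, rhs2)

# boundedness of x e^{x^2} erfc(x) and x e^{-x^2} erfi(x) on [0, infinity)
x = np.linspace(0.0, 50.0, 20001)
f1 = x * erfcx(x)                              # increases towards 1/sqrt(pi) ~ 0.564
f2 = (2.0 / np.sqrt(np.pi)) * x * dawsn(x)     # peaks near x ~ 1.5 at roughly 0.72, then decreases to 1/sqrt(pi)
print(f1.max(), f2.max())                      # both remain bounded on the half-line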
To prove that (<ref>) is finite, let S_T,d(q):L^1(^d)→ L^∞(^d) be the operator with integral kernel S_T,d(q)(x,y)=1/(2π)^d∫_^d[e^i (x-y)· p -e^i √(μ) (x-y)· p/| p |]B_T(p,q) χ_((p+q)^2-μ)((p-q)^2-μ)>0χ_p^2<3μ p Then (<ref>) equals sup_T sup_| q | <√(μ)/2‖ V^1/2 S_T,d(q) |V |^1/2‖. With (<ref>) and | e^i x- e^i y|≤min{| x-y|,2} we obtain | S_T,d(q)(x,y) |≤1/(2π)^d∫_^dmin{| (| p |-√(μ)) (x-y) · p/| p ||,2}/| p^2+q^2-μ|χ_((p+q)^2-μ)((p-q)^2-μ)>0χ_p^2<3μ p ≤1/(2π)^d∫_^dmin{|| p| -√(μ)|| x-y |,2}/| p^2+q^2-μ|χ_((p+q)^2-μ)((p-q)^2-μ)>0χ_p^2<3μ p Again, the integral only depends on | q |, so we may restrict to q=(| q |,0). We now switch to angular coordinates. Recall the notation r_± and e_φ introduced before (<ref>) and that (| r cosφ|∓| q_1 |)^2+r^2 sinφ^2≷μ↔ r≷ r_±(e_φ). For d=2 we have | S_T,2((q_1,0))(x,y) |≤1/(2π)^2∫_0^2π[∫_r_+(e_φ)^√(3μ)min{| r-√(μ)|| x-y |,2}/r^2+q_1^2-μr r + ∫_0^r_-(e_φ)min{(√(μ)-r) | x-y |,2}/μ-r^2-q_1^2 r r] φ =: g(x,y,q_1) and for d=3 | S_T,3((q_1,0))(x,y) |≤1/(2π)^2∫_0^π[∫_r_+(e_θ)^√(3μ)min{| r-√(μ)|| x-y |,2}/r^2+q_1^2-μsinθ r^2 r + ∫_0^r_-(e_θ)min{(√(μ)-r) | x-y |,2}/μ-r^2-q_1^2sinθ r^2 r] θ≤√(3μ)/2g(x,y,q_1). We bound g by | g(x,y,q_1) | ≤1/(2π)^2∫_0^2π[∫_r_+(e_φ)^√(3μ)min{ (r-r_+(e_φ) ) | x-y |,2}+min{|√(μ)- r_+(e_φ)|| x-y |,2}/r^2+q_1^2-μ r r . . + ∫_0^r_-(e_φ)min{(r_-(e_φ)-r) | x-y |,2} +min{(√(μ)-r_-(e_φ))| x-y |,2}/μ-r^2-q_1^2 r r ] φ Note that r_+(e_φ) attains the minimal value √(μ-q_1^2) at |φ| =π/2 and the maximal value √(μ)+| q_1 | at |φ| =0. Similarly, r_-(e_φ) attain the maximal value √(μ-q_1^2) at |φ| =π/2 and the minimal value √(μ)-| q_1 | at |φ| =0. For the first summand in both integrals we take the supremum over the angular variable. For the second summand in both integrals, we carry out the integration over r and use that |√(μ)-r_-(e_φ)|, |√(μ)- r_+(e_φ)|≤| q_1 |. We obtain the bound | g(x,y,q_1) |≤1/2π∫_0^√(3μ)min{| r-√(μ-q_1^2)|| x-y |,2} r/| r^2+q_1^2-μ| r +min{| q_1 || x-y |,2}/2(2π)^2∫_0^2π[ ln(2μ+q_1^2/r_+(e_φ)^2+q_1^2-μ) +ln(μ-q_1^2/μ-q_1^2 -r_-(e_φ)^2) ] φ Recall that we are only interested in | q_1 |<√(μ)/2. For the first term, we use that r≤√(3μ) and | r^2+q_1^2-μ| = | r-√(μ-q_1^2)|| r+√(μ-q_1^2)|≥| r-√(μ-q_1^2)|√(μ-q_1^2). This gives the following bound, where we first carry out the r-integration and then use that √(μ-q_1^2)≥√(3μ)/2: √(3μ)/π√(μ-q_1^2)∫_0^√(3μ)min{| x-y |/2, 1/| r-√(μ-q_1^2)|} r ≤√(3μ)/π√(μ-q_1^2)[ln(max{1, √(μ-q_1^2)| x-y|/2})+2+ln(max{1, (√(3μ)-√(μ-q_1^2))| x-y |/2})] ≤ C [1+ln(1+ √(3μ)| x-y |/2)] For the second term, we use that 2μ+q_1^2/r_+(e_φ)^2+q_1^2-μμ-q_1^2/μ-q_1^2 -r_-(e_φ)^2= 2μ+q_1^2/4 | e_φ,1|^2 | q_1 |^2 and | q_1 | <√(μ)/2 as well as | e_φ,1|=|cosφ|≥1/2min{|π/2-φ|,|3π/2-φ|} to arrive at the bound min{| q_1 || x-y |,2}/(2π)^2∫_0^2πln(√(3μ)/2| e_φ,1 q_1|) φ≤4 min{| q_1 || x-y |,2}/(2π)^2∫_0^π/2ln(√(3μ)/|φ q_1|) φ =min{| q_1 || x-y |,2}/2π(1+ln(2√(3μ)/π| q_1|) ) =min{| q_1 || x-y |,2}/2π(1+ln(√(3μ)| x-y |)+ln(2π/| x- y || q_1|)), where we used ∫ln(1/x) x =x+ x ln(1/x). Since x ln(1/x) ≤ C, this is bounded above by 1/π(1+max{ln(√(3μ)| x-y |),0})+C. In total, we obtain the bound sup_| q_1 |<√(μ)/2| g(x,y,q_1) |≤ C[1+ln(1+ √(μ)| x-y |)]. Let M:L^2(^d)→ L^2(^d) be the operator with integral kernel M(x,y)=|V|^1/2 (x) (1+ln( 1+√(μ)| x-y |)) |V|^1/2(y). We have sup_T sup_| q | <√(μ)/2‖ V^1/2 S_T,d(q) |V|^1/2‖≤ C(μ,d) ‖ M ‖ for some constant C(μ,d)<∞. The operator M is Hilbert-Schmidt since the function x↦ (1+ln(1+| x|)^2)|V(x)| is in L^1(^d). 
§ ACKNOWLEDGMENTS Financial support by the Austrian Science Fund (FWF) through project number I 6427-N (as part of the SFB/TRR 352) is gratefully acknowledged. abbrv 10 barkman_elevated_2022 M. Barkman, A. Samoilenka, A. Benfenati, and E. Babaev. Elevated critical temperature at BCS superconductor-band insulator interfaces. Physical Review B, 105(22):224518, Jun. 2022. benfenati_boundary_2021 A. Benfenati, A. Samoilenka, and E. Babaev. Boundary effects in two-band superconductors. Physical Review B, 103(14):144512, Jun. 2021. frank_bcs_2019 R. L. Frank, C. Hainzl, and E. Langmann. The BCS critical temperature in a weak homogeneous magnetic field. Journal of Spectral Theory, 9(3):1005–1062, Mar. 2019. frank_critical_2007 R. L. Frank, C. Hainzl, S. Naboko, and R. Seiringer. The critical temperature for the BCS equation at weak coupling. Journal of Geometric Analysis, 17(4):559–567, Dec. 2007. hainzl_bcs_2008 C. Hainzl, E. Hamza, R. Seiringer, and J. P. Solovej. The BCS Functional for General Pair Interactions. Communications in Mathematical Physics, 281(2):349–367, Jul. 2008. hainzl_boundary_nodate C. Hainzl, B. Roos, and R. Seiringer. Boundary Superconductivity in the BCS Model. Journal of Spectral Theory, 12:1507–1540, 2022. hainzl_critical_2008 C. Hainzl and R. Seiringer. Critical temperature and energy gap for the BCS equation. Physical Review B, 77(18):184517, May 2008. hainzl_bardeencooperschrieffer_2016 C. Hainzl and R. Seiringer. The Bardeen–Cooper–Schrieffer functional of superconductivity and its mathematical properties. Journal of Mathematical Physics, 57(2):021101, Feb. 2016. henheik_universality_2023 J. Henheik, A. Lauritsen, and B. Roos. Universality in low-dimensional BCS theory. arXiv:2301.05621 [cond-mat, physics:math-ph], Jan. 2023. lauritsen_masters_2020 A. Lauritsen. Master’s thesis in mathematics. University of Copenhagen, Jun 2020. <https://drive.google.com/file/d/1pUePzhCyweJAePyJrKPdmA6OCsLPUkMr/view> lieb_analysis_2001 E. Lieb and M. Loss. Analysis. volume 14 of Graduate Studies in Mathematics. American Mathematical Society, 2001. roos_two-particle_2022 B. Roos and R. Seiringer. Two-Particle Bound States at Interfaces and Corners. Journal of Functional Analysis, 282(2):109455, Jun. 2022. samoilenka_boundary_2020 A. Samoilenka and E. Babaev. Boundary states with elevated critical temperatures in Bardeen-Cooper-Schrieffer superconductors. Physical Review B, 101(13):134512, Apr. 2020. samoilenka_microscopic_2021 A. Samoilenka and E. Babaev. Microscopic derivation of superconductor-insulator boundary conditions for Ginzburg-Landau theory revisited: Enhanced superconductivity at boundaries with and without magnetic field. Physical Review B, 103(22):224516, Jun. 2021. talkachov_microscopic_2022 A. Talkachov, A. Samoilenka, and E. Babaev. A microscopic study of boundary superconducting states on a honeycomb lattice. arXiv:2212.14711 [cond-mat], Dec. 2022. inc_mathematica_2022 Wolfram Research, Inc. Mathematica, Version 13.1. Champaign, IL, 2022.
http://arxiv.org/abs/2306.08069v1
20230613183326
An update on $(n,m)$-chromatic numbers
[ "Sandip Das", "Abhiruk Lahiri", "Soumen Nandi", "Sagnik Sen", "S Taruni" ]
math.CO
[ "math.CO", "cs.DM" ]
An update on (n,m)-chromatic numbers Sandip Das^a, Abhiruk Lahiri^b, Soumen Nandi^c, Sagnik Sen^d, S Taruni^d (a) Indian Statistical Institute Kolkata, India (b) Charles University, Czech Republic (c) Institute of Engineering & Management, Kolkata, India (d) Indian Institute of Technology Dharwad, India ============================================================================================================================================================================================================================================================================================================ An (n,m)-graph is a graph with n types of arcs and m types of edges. A homomorphism of an (n,m)-graph G to another (n,m)-graph H is a vertex mapping that preserves adjacency, its direction, and its type. The minimum value of |V(H)| such that G admits a homomorphism to H is the (n,m)-chromatic number of G, denoted by _n,m(G). This parameter was introduced by Nešetřil and Raspaud (J. Comb. Theory. Ser. B 2000). In this article, we show that the arboricity of G is bounded by a function of _n,m(G), but not the other way round. We also show that acyclic chromatic number of G is bounded by a function of _n,m(G), while the other way round bound was known beforehand. Moreover, we show that (n,m)-chromatic number for the family of graphs having maximum average degree less than 2+ 2/4(2n+m)-1, which contains the family of planar graphs having girth at least 8(2n+m) as a subfamily, is equal to 2(2n+m)+1. This improves the previously known result which proved that the (n,m)-chromatic number for the family planar graphs having girth at least 10(2n+m)-4 is equal to 2(2n+m)+1. It is known that the (n,m)-chromatic number for the family of partial 2-trees bounded below and above by quadratic functions of (2n+m) and that the lower bound is tight when (2n+m)=2. We show that the lower bound is not tight when (2n+m)=3 by improving the corresponding lower bounds by one. We manage to improve some of the the upper bounds in these cases as well. Keywords: colored mixed graph, graph homomorphism, chromatic number, maximum degree, sparse graphs, arboricity. § INTRODUCTION In 2000, Nešetřil and Raspaud <cit.> generalized the notion of graph homomorphisms by introducing colored homomorphisms of colored mixed graphs[The same notion is studied in literature under slightly different names. We choose the one most suitable for this article.]. The (n,m)-graphs: An (n,m)-graph is a graph G with set of vertices V(G), set of arcs A(G), and set of edges E(G). Moreover, each arc is colored with one of the n colors from {2,4, ⋯, 2n} and each edge is colored with one of the m colors from {2n+1, 2n+2, ⋯, 2n+m}. The underlying undirected graph of G is denoted by und(G). In this article, we restrict ourselves to (n,m)-graphs G where und(G) are simple, unless otherwise stated. Observe that, for (n,m) = (0,1), (1,0), (0,2), and (0,m), the (n,m)-graphs are the same as undirected graphs, oriented graphs <cit.>, 2-edge-colored graphs <cit.>, and m-edge-colored graphs <cit.>, respectively. These types of graphs and their homomorphisms are well-studied. It is worth mentioning that, a variation of homomorphisms of (0,2)-graphs, called homomorphisms of signed graphs <cit.>, have gained popularity <cit.> in recent times due to its strong connection with classical graph theory (especially, coloring and graph minor theory). 
It is known that, homomorphisms of signed graphs are in one-to-one correspondence with a specific restriction of homomorphisms of (0,2)-graphs <cit.>[In the language of category theory, there is a bijective covariant functor from the category induced by homomorphisms of signed graphs to a subcategory of the category induced by homomorphisms of (0,2)-graphs.]. Thus, the notion of colored homomorphism truly manages to unify a lot of important theories related to graph homomorphisms. Homomorphisms: A homomorphism of an (n,m)-graph G to another (n,m)-graph H is a function f : V(G) → V(H) such that if uv is an arc (resp., edge) of G, then f(u)f(v) is also an arc (resp., edge) of H having the same color as uv. The notation G → H is used to denote that G admits a homomorphism to H. Using the notion of homomorphism, one can define the chromatic number of colored mixed graphs that generalizes <cit.> the chromatic numbers defined for simple graphs, oriented graphs, m-edge-colored graphs, etc. The (n,m)-chromatic number of an (n,m)-graph G is given by _n,m(G) = min{|V(H)| : G → H}. For a simple graph S, the (n,m)-chromatic number is given by _n,m(S) = max{_n,m(G) : und(G) = S}. For a family ℱ of graphs, the (n,m)-chromatic number is given by _n,m(ℱ) = max{_n,m(G) : G ∈ℱ}. Notice that, the family ℱ may contain simple or (n,m)-graphs. State of the art, contributions, and organization of the paper: The (n,m)-chromatic number for (n,m) = (0,1) is simply the ordinary chromatic number which is, arguably, the most popularly studied topic of graph theory. However, due to some basic differences in the nature of the homomorphisms, the general study of the (n,m)-chromatic number tends to address the parameter for all values of (n,m) ≠ (0,1). Naturally, the (n,m)-chromatic number has been studied for various graph families in the past <cit.> and, in particular, for (n,m) = (1,0) and (0,2), it has been extensively studied under the names oriented chromatic number <cit.> and 2-edge-colored chromatic number <cit.>. It is noteworthy that any result proved for general (n,m)-chromatic number also, in particular, imply results for oriented and 2-edge-colored chromatic numbers. In the following, we summarize the contributions of this work along with relevant state of the art. We also enumerate them in a way so that it indicates the organization of the article.[Section <ref> was part of a CALDAM 2017 Conference proceedings in Lecture Notes in Computer Science. Section <ref> was part of EuroComb 2021 conference proceedings in Lecture notes in Computer Science.] * Section <ref>: We recall some basic results and preliminaries useful for the proofs in the following sections. * Section <ref>: This section is dedicated to establishing relations between the parameters arboricity, acyclic chromatic number, and (n,m)-chromatic number. To begin with, we show that the (n,m)-chromatic number of a graph G is not bounded above by a function of its arboricity, yet its arboricity is bounded by ⌈log_pk +k/2 ⌉, where k is its (n,m)-chromatic number. In contrast to that, Nešetřil and Raspaud <cit.>, generalizing a result due to Raspaud and Sopena <cit.>, showed that the (n,m)-chromatic number of a graph with acyclic chromatic number t is bounded above by t(2n+m)^t-1 while Fabila-Monroy, Flores, Heumer, and Montejano <cit.>, generalizing a result of Ochem <cit.>, proved the tighness of the bound. 
We prove that the reverse statement is also true, that is, the acyclic chromatic number of a graph G is bounded above by k^2 +k^2+ ⌈log_p log_pk ⌉, where k is its (n,m)-chromatic number. These upper bounds are generalizations of results due to Kostochka, Sopena and Zhu <cit.> proved for (n,m) = (1,0). We managed to slightly improve the second bound while generalizing it. * Section <ref>: The (n,m)-chromatic number for the family of planar graphs is bounded above by 5(2n+m)^4 due to Nešetřil and Raspaud <cit.> while there is a lower bound of the same which is a cubic function of (2n+m) by Fabila-Monroy, Flores, Heumer, and Montejano <cit.>. These two bounds together are the closest analogue of the Four-Color Theorem for (n,m)-graphs. On the other hand, the best (and the only) known possible analogue of the Grötzsch's theorem for (n,m)-graphs is a result due to Montejano, Pinlou, Raspaud, and Sopena <cit.> which shows that the (n,m)-chromatic number for the family of planar graphs having girth at least 10(2n+m)-4 is equal to 2(2n+m)+1. We improve this result by showing that the (n,m)-chromatic number for the family of graphs having maximum average degree less than 2+ 2/4(2n+m)-1, which contains the family of planar graphs having girth at least 8(2n+m) as a subfamily, is equal to 2(2n+m)+1. * Section <ref>: It is known that the (n,m)-chromatic number for the family of partial 2-trees bounded below and above by quadratic functions of (2n+m) due to Fabila-Monroy, Flores, Heumer, and Montejano <cit.> and Nešetřil and Raspaud <cit.>, respectively. The lower bounds, when restricted to the cases of (2n+m)=2, are known to be tight <cit.>. The next natural question to ask is whether the lower bounds are tight for (2n+m)=3 as well. Observe that (2n+m)=3 only when (n,m) = (0,3) or (1,1). The best known bounds, in these two cases tell us that the (0,3)-chromatic number <cit.> of partial 2-trees lie in the interval [13,27], and the (1,1)-chromatic number <cit.> of partial 2-trees lie in the interval [13,21]. We improve both these intervals to [14,15] and [14,21], respectively. In particular this shows that the general lower bound of the (n,m)-chromatic number for the family of partial 2-trees proved in <cit.> is not tight for (2n+m)=3. * Section <ref> : We share our concluding remarks. § PRELIMINARIES For standard graph theoretic notation and terminologies we will follow the book “Introduction to Graph Theory” by West <cit.> and we will recall all non-standard definitions here for convenience. Moreover, we will use a slightly modified article specific notations for improving the readability and uniformity of this article. Also, the standard notions relevant for simple graphs, if used for an (n,m)-graph G, will be understood as a property of und(G). If uv is an arc of color α, then we say that v is an α-neighbor of u, or equivalently, we say that u is an (α-1)-neighbor of v. On the other hand, if uv is an edge of color α, then we say that u and v are α-neighbors of each other. Furthermore, the set of all α-neighbors of u is denoted by N^α(u). Also, we will use the Greek smaller case letters such as α, β, γ etc. as variables belonging to the set {1,2, ⋯, 2n+m} denoting the color and the direction of an adjacency. A special 2-path is 2-path uvw of (n,m)-graph G where v ∈ N^α(u) ∩ N^β(w) such that α≠β. The following useful property comes in handy in proving some of the results of this paper. 
Two vertices u and v cannot have the same image under any homomorphism of G to any H, if and only if they are adjacent or connected by a special 2-path in G. § ACYCLIC CHROMATIC NUMBER AND ARBORICITY The arboricity arb(G) of a graph G is the minimum r such that the edges of G can be decomposed into r forests. First we show that an (n,m)-graph having bounded arboricity can have arbitrarily large (n,m)-chromatic number. For every positive integer k ≥ 2, there exists an (n,m)-graph G_k having arboricity at most r that satisfies _n,m(G_k) ≥ k. Consider the complete graph K_k. For all (n,m) ≠ (0,1), it is possible to replace all the edges of K_k by a special 2-path to obtain an (n,m)-colored mixed graph G_k'. We know that, the end points of the special 2-path must have different image under any homomorphism of G_k' <cit.>, thus χ_(n,m)(G_k') ≥ k whereas, it is easy to note that G_k' has arboricity 2. Thus, for r = 2 take G_k = G_k'. For r > 2, simply take the disjoint union of the above G_k' with a finite graph H having arboricity r. Next we show that it is possible to bound the arboricity of an (n,m)-graph by a function of (n,m)-chromatic number. Let G be a graph with χ_n,m(G) = k where p = 2n+m ≥ 2. Then arb(G) ≤⌈log_pk +k/2 ⌉. Let G' be an arbitrary labeled subgraph of G consisting v_G' vertices and e_G' edges. We know from Nash-Williams' Theorem <cit.> that the arboricity arb(G) of any graph G is equal to the maximum of ⌈ e_G'/(v_G' - 1) ⌉ over all subgraphs G' of G. It is sufficient to prove that for any subgraph G' of G, e_G'/(v_G' - 1) ≤log_p k + k/2. As G' is a labeled graph, so there are p^e_G' different (n,m)-graphs with underlying graph G'. As χ_n,m(G) = k, there exits a homomorphism from G' to a (n,m)-colored mixed graph G_k which has the complete graph on k vertices as its underlying graph. Observe that, even though it is not a necessary condition for G_k to have the complete graph as its underlying graph, we can always add some extra edges/arcs to make G_k have that property. Note that the number of possible homomorphisms of G' to G_k is at most k^v_G'. For each such homomorphism of G' to G_k there are at most p^k 2 different (n,m)-colored mixed graphs with underlying labeled graph G' as there are p^k 2 choices of G_k. Therefore, p^k 2. k^v_G'≥ p^e_G' which implies log_p k ≥ (e_G'/v_G') - k 2 / v_G'. Suppose if v_G'≤ k, then e_G'/(v_G' - 1) ≤ v_G'/2 ≤ k/2. Now let v_G' > k. We know that χ_n,m(G') ≤χ_n,m(G) = k. So log_p k ≥e_G'/v_G' - k(k - 1)/2 v_G' ≥e_G'/(v_G' -1) - e_G'/v_G'(v_G' - 1) - k - 1/2 ≥e_G'/(v_G' -1) - 1/2 - k/2 + 1/2 ≥e_G'/(v_G' -1) - k/2. Therefore, e_G'/(v_G' -1)≤log_pk +k/2. A graph G is acyclic t-colorable if we can color its vertices with t colors such that each color class induces an independent set and any two color classes induce a forest. The acyclic chromatic number _a(G) of a simple graph G is the minimum t such that G is acyclic t-colorable. One of the major results proved for (n,m)-chromatic numbers is due to Nešetřil and Raspaud <cit.> which shows that _n,m(𝒜_t) = t (2n+m)^t-1 where 𝒜_t denotes the family of graphs having acyclic chromatic number at most t. Moreover, Fabila-Monroy, Flores, Heumer, and Montejano <cit.> proved the tighness of the same. In particular, this shows that the (n,m)-chromatic number of a graph is bounded by a function of its acyclic chromatic number. It is natural to ask whether the converse is also true. 
It turns out to be true, however, before presenting its proof, we will show that the acyclic chromatic number of a graph is finite if its arboricity and (n,m)-chromatic number is finite. Let G be a graph with arb(G) = r and χ_n,m(G) = k where p = 2n+m ≥ 2. Then χ_a(G) ≤ k^⌈log_pr ⌉ +1. Let G be a graph with arb(G) = r and _n,m(G) = k where 2n+m = p. Let v_1, v_2, ..., v_t be some ordering of the vertices of G. Now consider the (n,m)-colored mixed graph G_0 with underlying graph G such that for any i < j we have v_j ∈ N^1(v_i) whenever v_iv_j is an edge of G. Note that the edges of G can be covered by r edge disjoint forests F_1, F_2, ..., F_r as arb(G) = r. Let s_i be the number i expressed in base p for all i ∈{1,2,...,r}. Note that s_i can have at most s = ⌈log_pr ⌉ digits. Now we will construct a sequence of (n,m)-colored mixed graphs G_1, G_2, ..., G_s each having underlying graph G. For l ∈{1,2,...,s} we are going to describe the construction of the (n,m)-graph G_l. Consider any edge v_iv_j of G where i <j. Then v_iv_j is an edge of the forest F_t for some t ∈{1,2,...,r}. Let the t^th digit of s_t be t̂. Then G_l is constructed in such a way that we have v_j ∈ N^t̂+1(v_i) in G_l. Recall that _n,m(G) ≤ k and the underlying graph of G_l is G. Thus, for each l ∈{1, 2, ⋯ , s } there exists an H_l on k vertices and a homomorphism f_l : G_l → H_l. Now we claim that f(v) = (f_0(v), f_1(v), ..., f_s(v)) for each v ∈ V(G) is an acyclic coloring of G. For adjacent vertices u,v in G clearly we have f(v) ≠ f(u) as f_0(v) ≠ f_0(u). Let C be a cycle in G. We have to show that at least three colors have been used to color this cycle with respect to the coloring given by f. Note that in C there must be two incident edges uv and vw such that they belong to different forests, say, F_t and F_t', respectively. Now suppose that C received two colors with respect to f, that is, the contrary of what we wish to prove. Then we must have f(u) = f(w) ≠ f(v). In particular we must have f_0(u) = f_0(w) ≠ f_0(v). To have that we must also have u,w ∈ N^α(v) for some α∈{1,2,...,p} in G_0 (even though α can only take the value 1 or 2 in this case). Let s_t and s_t' differ in their j^th digit. Then in G_j we have u ∈ N^α(v) and w ∈ N^α'(v) for some α≠α'. Then we must have f_j(u) ≠ f_j(w) as uvw is a special 2-path in G_j. Therefore, we also have f(u) ≠ f(w). Thus, the cycle C cannot be colored with two colors under the coloring f. Thus, combining Theorem <ref> and <ref>, χ_a(G) ≤ k^⌈log_p ⌈log_pk + k/2 ⌉⌉ +1 for χ_n,m(G) = k where p = 2n+m ≥ 2. However, we managed to obtain the following bound which is better in all cases except for some small values of k, p. Let G be an (n,m)-colored mixed graph with χ_(n,m)(G) = k ≥ 4 where p = 2n+m ≥ 2. Then _a(G) ≤ k^2 +k^2+ ⌈log_p log_pk ⌉. Let t be the maximum real number such that there exists a subgraph G' of G with v_G'≥ k^2 and e_G'≥ t.v_G'. Let G” be the biggest subgraph of G with e_G” > t.v_G”. Thus, by maximality of t, v_G” < k^2. Let G_0 = G - G”. Hence χ_a(G) ≤χ_a(G_0) + k^2. By maximality of G”, for each subgraph H of G_0, we have e_H≤ t.v_H. If t ≤v_H - 1/2, then e_H≤ (t + 1/2)(v_H -1). If t > v_H - 1/2, then v_H/2 < t + 1/2. So e_H≤(v_H -1).v_H/2≤ (t + 1/2)(v_H -1). Therefore, e_H≤ (t + 1/2)(v_H -1) for each subgraph H of G_0. By Nash-Williams' Theorem <cit.>, there exists r = ⌈ t + 1/2 ⌉ forests F_1, F_2, ⋯, F_r which covers all the edges of G_0. We know from Theorem <ref>, χ_a(G_0) ≤ k^s +1 where s = ⌈log_pr ⌉. Using inequality (<ref>) we get log_p k ≥ t - 1/2. 
Therefore s = ⌈log_p(⌈ t + 1/2 ⌉)⌉≤⌈log_p(1 + ⌈log_pk ⌉)⌉≤ 1 + ⌈log_p log_pk ⌉. Hence χ_a(G) ≤ k^2 +k^2+ ⌈log_p log_pk ⌉. Our bound, when restricted to the case of (n,m) = (1,0), slightly improves the existing bound due to Kostochka, Sopena and Zhu <cit.>. § SPARSE GRAPHS The maximum average degree of a graph is given by mad(G) = max{2|E(H)|/|V(H)| : H is a subgraph of G} . We present a tight bound for the (n,m)-chromatic number of sparse graphs. For mad(G) < 2 + 2/4(2n+m)-1 and 2n+m ≥ 2, we have _n,m(G) ≤ 2(2n+m)+1. We begin the proof of Theorem <ref> by describing a complete (n,m)-graph on (2p+1) vertices where p = 2n+m. We know that there exists a Hamiltonian decomposition of K_2p+1 due to Walecki <cit.> which we are going to describe in the following. To do so, first we will label the vertices of K_2p+1 is a certain way. Let one specific vertex of it be labeled by the symbol ∞ while the other vertices are labeled by the elements of the cyclic group ℤ/2pℤ. Let C_0, C_1, ⋯, C_p-1 be the edge disjoint Hamiltonian cycles of the decomposition where C_j is the cycle ∞ (2p+j) (1+j) (2p-1+j) ⋯ (p-1+j) (2p - (p-1)+j) (p+j) ∞ For each α∈{2,4, 6 ⋯ 2n} convert the cycles C_α - 2 and C_α - 1 to directed cycles having arcs of color α. For each α∈{2n+1, n+2, ⋯, 2n+m}, convert the cycle C_α-1 into a cycle having all edges of color α. Thus what we obtain is a complete (n,m)-mixed graph on 2p+1 vertices. We call this so-obtained complete (n,m)-graph as T. We now prove a useful property of T. For every S ⊊ V(T) we have |S| < |N^α(S)| for all α∈{1, 2, ⋯ 2n+m }. We divide the proof into three parts depending on the value of α. Case 1: If α∈{ 2n+1, 2n+2, ⋯ 2n+m }, then assume that C_α-1 is the cycle v_1v_2 ⋯ v_2p+1v_1 and the set S consist of vertices v_i_1, v_i_2, ⋯, v_i_l where i_1 < i_2 < ⋯ < i_l. Now notice that the set of vertices A = {v_i_1+1, v_i_2+1, ⋯, v_i_l+1} are distinct and are contained in N^α(S). On the other hand, note that the set of vertices B= {v_i_1-1, v_i_2-1, ⋯, v_i_l-1} are distinct and are contained in N^α(S). As |A| = |B| = |S|, we are done unless A= B. If A = B, then note that v_t∈ S implies v_t+2∈ S where the + operation on indices of v is taken modulo 2p+1. Hence, if there exists some index t for which we have v_t, v_t+1∈ S, then S must be the whole vertex set, which is not possible. Thus, we must have v_i_j+1 = v_i_j+2 for all j ∈{1, 2, ⋯, l} where the + operation on indices of v is taken modulo 2p+1. However as C_α is an odd cycle on 2p+1 vertices, it is impossible to satisfy the above condition. Hence A ≠ B, and we are done in this case. Case 2: If α∈{2, 4, ⋯, 2n}, then observe that S has exactly |S| many α-neighbors in C_α-2 and exactly |S| many α-neighbors in C_α-1. Furthermore, assume that A and B are the sets of α-neighbors of S in C_α-2 and C_α-1, respectively. As |A| = |B| = |S| and A ∪ B ⊆ N^α(S), we are done unless A= B. Due to the structure of the cycles C_α-2 and C_α-1, without loss of generality we may assume that α = 2. Thus let us try to obtain a contradiction assuming A=B. In such a scenario, fix a k ∈ℤ/2pℤ. Note that p ≠ k ∈ S if and only if x ∈ A = B if and only if k+2 ∈ S, where x = (2p-k) (resp., (2p-k+1)) Moreover, p ∈ S if and only if ∞∈ A = B if and only if (p+1) ∈ S. Also, 0 ∈ S if and only if 1 ∈ A = B if and only if ∞∈ S. Hence, for any non-empty S, A=B forces S = V(T), a contradiction. Case 3: If α∈{1, 3, ⋯, 2n-1}, then one can handle it in a way similar to Case 2. Therefore, |S| < |N^α(S)| for all α∈{1, 2, ⋯ 2n+m} and for every S ⊊ V(K_2p+1). 
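For small p, the construction of T and the expansion property of the lemma above can be confirmed by brute force. The following Python sketch (an illustration, not part of the proof) builds the Walecki cycles C_0, ..., C_{p-1}, colors them into the (n,m)-graph T exactly as described, and then checks that every vertex has exactly two α-neighbors of every type and that |S| < |N^α(S)| for every nonempty proper subset S. The instance shown is (n,m) = (1,1); the vertex label 'inf' stands for the vertex denoted by the symbol ∞.

from itertools import combinations

def walecki_cycles(p):
    # Hamiltonian decomposition of K_{2p+1} on the vertices {'inf', 0, 1, ..., 2p-1}
    zigzag, lo, hi = [0], 1, 2 * p - 1
    while lo < hi:
        zigzag += [lo, hi]
        lo, hi = lo + 1, hi - 1
    zigzag.append(p)                                   # 0, 1, 2p-1, 2, 2p-2, ..., p
    return [['inf'] + [(v + j) % (2 * p) for v in zigzag] for j in range(p)]

def cycle_edges(c):
    return [(c[i], c[(i + 1) % len(c)]) for i in range(len(c))]

def build_T(n, m):
    # returns the neighbour sets N[alpha][v] of the (n,m)-graph T described above
    p = 2 * n + m
    cycles = walecki_cycles(p)
    und = [frozenset(e) for c in cycles for e in cycle_edges(c)]
    assert len(set(und)) == p * (2 * p + 1)            # cycles are edge-disjoint and cover K_{2p+1}
    V = ['inf'] + list(range(2 * p))
    N = {a: {v: set() for v in V} for a in range(1, p + 1)}
    for alpha in range(2, 2 * n + 1, 2):               # arc colours 2, 4, ..., 2n
        for c in (cycles[alpha - 2], cycles[alpha - 1]):
            for u, v in cycle_edges(c):                # arc u -> v of colour alpha
                N[alpha][u].add(v)                     # v is an alpha-neighbour of u
                N[alpha - 1][v].add(u)                 # u is an (alpha-1)-neighbour of v
    for alpha in range(2 * n + 1, 2 * n + m + 1):      # edge colours 2n+1, ..., 2n+m
        for u, v in cycle_edges(cycles[alpha - 1]):
            N[alpha][u].add(v)
            N[alpha][v].add(u)
    return V, N

n, m = 1, 1                                            # p = 2n+m = 3, so |V(T)| = 7
V, N = build_T(n, m)
assert all(len(N[a][v]) == 2 for a in N for v in V)    # two alpha-neighbours of every type
for size in range(1, len(V)):                          # expansion property of the lemma
    for S in combinations(V, size):
        for a in N:
            assert size < len(set().union(*(N[a][v] for v in S)))
print('all checks passed for (n, m) =', (n, m))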
Let T be a complete (n,m)-graph on 2p+1 vertices satisfying the condition of Lemma <ref>. We want to show that G → T whenever mad(G) < 2 + 2/4p-1. That is, it is enough to prove the following lemma. If mad(G) < 2 + 2/4p-1, then G → T. We will prove the above lemma by contradiction. Hence we assume a minimal (with respect to number of vertices) (n,m)-graph M having mad(M) < 2 + 2/4p-1 which does not admit a homomorphism to T. We now give some forbidden configurations for M stated as lemmas. The graph M does not contain a vertex having degree one. Suppose M contains a vertex u having degree one. Observe that the graph M', obtained by deleting the vertex u from M, admits a homomorphism to T due to the minimality of M. It is possible to extend the homomorphism M' → T to a homomorphism of M → T as any vertex of T has exactly two α-neighbors for all α∈{1,2, ⋯ 2n+m}. A path with all internal vertices of degree two is called a chain, and in particular a k-chain is a chain having k internal vertices. The endpoints (assume them to always have degree at least 3) of a (k-)chain are called (k-)chain adjacent. The graph M does not contain a k-chain with k ≥ 2p-1. Suppose M contains a k-chain with endpoints u,v. Observe that the graph M', obtained by deleting the internal degree two vertices from the above mentioned chain from M, admits a homomorphism to T due to the minimality of M. It is possible to extend the homomorphism M' → T to a homomorphism of M → T by Lemma <ref>. Let us describe another configuration. Suppose v is chain adjacent to exactly l vertices v_1, v_2, ⋯, v_l, each having degree at least three. Let the chains between v and v_i has k_i internal vertices. Let us refer to such a configuration as configuration C_l for convenience. The graph M does not contain the configuration C_l as a induced subgraph if ∑_i=1^l k_i > (2p-1)l - 2p where p = (2n+m). Suppose M contains the configuration C_l. Let M' be the graph obtained by deleting all vertices of the configuration except v_1, v_2, ⋯, v_l. Thus there exists a homomorphism f: M' → T due to minimality of M. We are going to extend f to a homomorphism f_ext: M → T, which will lead to a contradiction and complete the proof. As T has exactly two α-neighbors for all α∈{1, 2, ⋯ 2n+m}, Lemma <ref> implies that it is possible to partially extend f to the chain between v and v_i in such different ways that will allow us to choose the value of f_ext(v_i) from a set of k_i+2 vertices of T. In other words, the value of f(v_i) will forbid at most 2p-k_i-1 values at v_i. Thus, considering the effects all the chains incident to v, at most 2lp-∑_i=1^lk_i-l values are forbidden at v. Notice that if this value is less than or equal to 2p, then it will be possible to extend f to a homomorphism f_ext : M → T. That implies the relation ∑_i=1^lk_i≤ (2p-1)l - 2p. Now we are ready to start the discharging procedure. First we define a charge function on the vertices of M. ch(x) = deg(x) - (2 + 2/4p-1), for all x ∈ V(M). Observe that, ∑_x ∈ V(M)ch(x) < 0 as mad(M) < 2 + 2/4p-1. Now after the completion of the discharging procedure, all updated charge will become non-negative implying a contradiction. The discharging rule is the following: (R1): Every vertex having degree three or more donates 1/4p-1 to the degree two vertices which are part of its incident chains. Let ch^*(x) be the updated charge. Now we are going to calculate this values of this updated charge for vertices of different degrees in M. For any degree two vertex x ∈ V(M), we have ch^*(x) = 0. 
As M does not have any degree one vertex due to Lemma <ref>, every degree two vertex x must be internal vertex of a chain. Thus, by rule (R1) the vertex x must receive 1/4p-1 charge from each side of the chain. Hence the updated charge is ch^*(x) = ch(x) + 2/4p-1 = deg(x)-2 - 2/4p-1 + 2/4p-1 = 0. Thus we are done. For any vertex x having degree three or more, we have ch^*(x) ≥ 0. Let x be a degree d vertex of M. Thus by Lemma <ref> ch^*(x) ≥ ch(x) - (2p-1)d-2p/4p-1 = d-2 - 2/4p-1 -2pd-d-2p/4p-1 = 4pd -8p -d +2 - 2 - 2pd+d+2p/4p-1=2p(d-3)/4p-1≥ 0 for d ≥ 3. This implies 0 > ∑_x ∈ V(M) ch(x) = ∑_x ∈ V(M) ch^*(x) ≥ 0, a contradiction. Thus, the proof of Lemma <ref> is completed which proves the Theorem <ref>. Proof of Theorem <ref>. Observe that, due to the above lemmas, we have 0 > ∑_x ∈ V(M) ch(x) = ∑_x ∈ V(M) ch^*(x) ≥ 0, a contradiction. Thus, the proof of Lemma <ref> is completed which proves the Theorem <ref>. As a corollary to this result, we obtain _n,m(𝒫_g) = 2 (2n+m) + 1 for all g ≥ 8(2n+m). For g ≥ 8(2n+m) and 2n+m ≥ 2, we have, _n,m(𝒫_g) = 2 (2n+m) + 1. From the Theorem <ref> along with the result by Borodin<cit.> that shows that a planar graph G having girth at least g has mad(G) < 2g/g-2, the following result is obtained as a corollary of Theorem <ref>. § PARTIAL 2-TREES The only solutions of (2n+m)=2 are (n,m)=(1,0) and (0,2). Similarly, the only two solutions of (2n+m)=3 are (n,m)=(1,1) and (0,3). Therefore, studying the (n,m)-chromatic number for these values of (n,m) can help understand the trend for the general values of (n,m). The best known general bounds of (n,m)-chromatic number for the family 𝒯^2 of partial 2-trees, or equivalently, K_4-minor free graphs are the following. For all non-negative integers n and m where (2n+m) ≥ 2, we have (2n+m)^2 + 2(2n+m) + 1 ≤_n,m(𝒯^2) ≤ 3(2n+m)^2, for m>0 even (2n+m)^2 + (2n+m) + 1 ≤_n,m(𝒯^2) ≤ 3(2n+m)^2, otherwise. Recall that, for (2n+m) = 2 case, it is known that the lower bounds are tight <cit.>. The best known bounds when 2n+m=3 are 13 ≤_0,3(𝒯^2) ≤ 27 and 13 ≤_1,1(𝒯^2) ≤ 21. So, if the trend of lower bound in Theorem <ref> being tight were true, then in particular it would be true for the cases when 2n+m=3. However, we show the contrary via the following result where we improve the lower and the upper bounds for both the instances. For the family of 𝒯^2 of partial 2-trees we have, (i) 14 ≤_0,3(𝒯^2) ≤ 15, (ii) 14 ≤_1,1(𝒯^2) ≤ 21. The proof of the theorem is contained in a series of lemmas. An (n,m)-universal bound of ℱ is an (n,m)-graph T such that G → T for all und(G) ∈ℱ. A minimum (n,m)-universal bound of ℱ is a universal bound T on minimum number of vertices having the property that for every proper subgraph T' of T, there exists a G' with und(G') ∈ℱ such that G' ↛T'. A complete family ℱ of graphs is such that given G_1, G_2 ∈ℱ, there exists a graph G ∈ℱ that contains every G_i as its subgraph. Let ℱ be a complete family of graphs. Then there exists a minimal (n,m)-universal bound on ℱ on _n,m(ℱ) vertices. Suppose not, that is, for every (n,m)-graph T on _n,m(ℱ) vertices, there exists a graph (n,m)-graph G_T ∈ℱ such that G_T ↛T. Since ℱ is a complete family of graphs, there exists a (n,m)-graph G ∈ℱ that contains every G_T as its subgraph. As G ∈ℱ, there is a homomorphism f : G →T̂ for some (n,m)-graph T̂ on _n,m(ℱ) vertices. Then the restriction f|_V(G_T) : G_T→T̂ for all (n,m)-graphs T on _n,m(ℱ) vertices, a contradiction. 
An (n,m)-graph T has property P_2,1 if for any adjacent pair of vertices u,v of T the set N^α(u) ∩ N^β(v) is not empty for all α, β∈{1,2, ⋯, 2n+m}. Any minimum universal bound T of the family 𝒯^2 of partial 2-trees has property P_2,1. Due to minimality of T, there exists an (n,m)-graph G with und(G) ∈𝒯^2 such that for any homomorphism f : G → T, given any xy ∈ A(T) (resp., E(T)), there exists uv ∈ A(G) (resp., E(G)) satisfying f(u) = x, f(v) = y. For any two adjacent vertices u,v in G and for any (α, β) ∈{1,2, ⋯, 2n+m}^2, add a new vertex w_α, β adjacent to u and v in such a way that we have w_α, β∈ N^α(u) ∩ N^β(v). Let the so-obtained (n,m)-graph be G^*. Observe that, und(G^*) ∈𝒯^2 by construction. Therefore, G^* admits a homomorphism to T. For any pair x,y of adjacent vertices in T and for any homomorphism f^* : G^* → T, there exists u,v in G satisfying f(u)=x and f(v)=y. Note that the newly added common neighbors of u,v are connected by a special 2-path via either u or v. Thus, the images of u, v and their newly added common neighbors must have distinct images under f^*. As f^* is any homomorphism of G^* to T, and as x,y is any pair of adjacent vertices in T, T must have property P_2,1. The above lemma implies a necessary and sufficient condition useful for computing the (n,m)-chromatic number of partial 2-trees. We have χ_n,m(𝒯_2) = t if and only if there exists an (n,m)-graph T on t vertices with property P_2,1. In light of the above corollary, if one can show that there does not exist any (n,m)-graph on t vertices with property P_2,1, then it will imply that χ_n,m(𝒯_2) ≥ t+1. We will use this observation to prove our lower bounds. If T is a minimal universal (n,m)-bound of 𝒯^2 on (2n+m)^2 + (2n+m) + 1 vertices, then every vertex v in T has exactly (2n+m)+1 many α-neighbors for all α∈{1,2, ⋯, 2n+m}. As T has property P_2,1 due to Lemma <ref>, each vertex of T has all (2n+m) types of adjacencies. Let v be an α-neighbor of u in T. Notice that, there is at least one vertex which is α-neighbor of u and β-neighbor of v. As β varies over the set of all (2n+m) types of adjacencies, u has at least (2n+m) α-neighbors, which are also adjacent to v. Thus, counting v, u has at least (2n+m)+1 many α-neighbors. On the other hand, as α is any of the (2n+m) many adjacencies and |N^α(u)| ≥ 2n + m + 1, we have |N(u)| ≥ (2n+m)(2n+m+1)= (2n+m)^2 + (2n+m). As T has only (2n+m)^2 + (2n+m) + 1 vertices, the inequalities must be tight. If T is a minimum universal (n,m)-bound of 𝒯^2 on (2n+m)^2 + (2n+m) + 1 vertices, then it can not have x,y,z ∈ N^α(u) such that x,z are γ-neighbors of y. Suppose x,y,z ∈ N^α(u) and x,z are γ-neighbors of y. We know from the proof of Lemma <ref> that there is exactly one vertex in the set N^α(u) ∩ N^β(y) for all β. However, in this case, {x,z}⊆ N^α(u) ∩ N^β(y) for β = γ, a contradiction. Now we are ready to prove the first lower bound. The (n,m)-graph T has at least 14 vertices if either of the following happens. * T is a minimum universal (0,3)-bound of 𝒯^2. * T is a minimum universal (1,1)-bound of 𝒯^2. Suppose not, that is, T is a minimum universal (0,3)-bound or a minimum universal (1,1)-bound of 𝒯^2 on 13 vertices. Note that und(T) is a complete graph due to Lemma <ref>. Also using Lemma <ref> and <ref> we know that T contains a K_3 with vertices u,v,w (say) whose all edges are of color 3. Next we will try to count the number of vertices in T. For convenience, let us denote the set N^α(u) ∩ N^β(v) ∩ N^γ(w) ∖{u,v,w} = A_α, β, γ. 
Also, let us denote the set of all common neighbors of u,v,w by A. As T has property P_2,1, there must exist a x ∈ A which is a 3-neighbor of u and a 2-neighbor of v. Notice that, if x is a 2-neighbor or a 3-neighbor of w, then the configuration described in Lemma <ref> is created. Thus, x must be a 1-neighbor of w. Hence, |A_3,2,1| ≥ 1. Similarly, we can show that |A_1,2,3|=|A_1,3,2|=|A_2,1,3|=|A_2,3,1|=|A_3,1,2|=|A_3,2,1| ≥ 1. Till now, among the vertices we have described, there is none which is a 2-neighbor of both u,v. However as T has property P_2,1, there must exist a vertex y of T which is a 2-neighbor of both u,v. Note that, x can not be an 3-neighbor or a 2-neighbor of w due to Lemma <ref>. Thus, we are forced to have an edge of color three between x and w. So |A_2,2,1| ≥ 1. Similarly, we can show that |A_2,2,1|=|A_1,1,2|=|A_2,1,2|=|A_1,2,1|=|A_1,2,2|=|A_2,1,1| ≥ 1. As the sets of the form A_α, β, γ partitions A, we can combine equations (<ref>) and (<ref>) to conclude that |A| ≥ 12. This implies that T has at least 15 vertices including u,v,w, a contradiction. There exists a (0,3)-graph T_0,3 on 15 vertices having property P_2,1. Let T_0,3 be an (0,3)-graph with set of vertices ℤ/5ℤ×ℤ/3ℤ. Let (i,j) and (i',j') be two vertices of T_0,3. The adjacency between the vertices are as per the following rules. * If j ≠ j' and i = i', then (i,j) and (i',j') are not adjacent. * If (i'-i) is a non-zero square in ℤ/5ℤ, then there is an edge of color (1+j+j') (considered modulo 3) between (i,j) and (i',j'). * If (i'-i) is not a non-zero square in ℤ/5ℤ, then there is an edge of color (2+j+j') (considered modulo 3) between (i,j) and (i',j'). Notice that it is enough to show that T_0,3 has property P_2,1. Let (i,j) and (i',j') be any two adjacent vertices in T_0,3. Without loss of generality we may assume that either i'=i+1 or i'=i+2. We can further assume that i=0 and i'=1 or 2, still without losing generality. Also, for convenience, let A_α, β = N^α((i,j)) ∩ N^β((i',j')). Thus our objective is to show that all such subsets, which are a total of nine in number, are non-empty. * If i'=1, then (2,j”) ∈ A_2+j+j”, 1+j'+j”, (3,j”) ∈ A_2+j+j”, 2+j'+j”, and (4,j”) ∈ A_1+j+j”, 2+j'+j” where j” varies over ℤ/5ℤ. Notice that, as j” varies, we obtain a total of nine non-empty subsets of the type A_α,β, and we are done by observing that these subsets have distinct ordered pairs as indices. * If i'=2, then (1,j”) ∈ A_1+j+j”, 1+j'+j”, (3,j”) ∈ A_2+j+j”, 1+j'+j”, and (4,j”) ∈ A_1+j+j”, 2+j'+j” where j” varies over ℤ/5ℤ. Notice that, as j” varies, we obtain a total of nine non-empty subsets of the type A_α,β, and we are done by observing that these subsets have distinct ordered pairs as indices. Hence T_0,3 has property P_2,1. The existence of a (1,1)-graph on 21 vertices having property P_2,1 is remarked in the conclusion of <cit.>. For the sake of completeness, we include an explicit construction of the same. There exists a (1,1)-graph T_1,1 on 21 vertices having property P_2,1. Let T_1,1 be an (1,1)-graph with set of vertices ℤ/7ℤ×ℤ/3ℤ. Let (i,j) and (i',j') be two vertices of T_1.1. The adjacency between the vertices are as per the following rules. * If j ≠ j' and i = i', then, (i,j) and (i',j') are not adjacent. * If j'=j, and (i'-i) is a non-zero square in ℤ/7ℤ, then there is an arc from (i,j) to (i',j'). * If j' = j +1 ( 3) and (i'- i) is a non-zero square in ℤ/7ℤ, then there is an edge between (i,j) and (i',j'). 
* If j' = j +1 ( 3) and (i'- i) is not a non-zero square in ℤ/7ℤ, then there is an arc from (i',j') to (i,j). As exactly one among (i'-i) or (i-i') is a non-zero square in ℤ/7ℤ, the above indeed describes the whole (1,1)-graph. Notice that it is enough to show that T_1,1 has property P_2,1. Let (i,j) and (i',j') be any two adjacent vertices in T_1,1. For convenience, let A_α, β = N^α((i,j)) ∩ N^β((i',j')). Thus our objective is to show that all such subsets, which are a total of nine in number, are non-empty. Let (i'-i) a non-zero square (resp., non-square) of ℤ/7ℤ. Define the mapping ϕ: ℤ/7ℤ→ℤ/7ℤ given by ϕ(x) = (i'-i)^2(x-i). This map is a group automorpism of ℤ/7ℤ that maps a non-zero square to a non-zero square and vice versa. Therefore, the adjacency rules of the graph, even after applying this automorphism to the three copies of ℤ/7ℤ used for describing the graph, remain as it were. Also, notice that the above automorphism maps ϕ(i)=0 and ϕ(i')=1. Therefore, instead of arguing a general case of i,i' where (i'-i) is a square, we can argue for the case when i=0 and i'=1 without losing any generality. This brings us to the following cases. * If j'=j and (i'-i) is a non-zero square, then without loss of generality we may assume that (i,j)=(0,0) and (i',j')=(1,0). * If j'=j+1 (considered modulo 3) and (i'-i) is a non-zero square, then without loss of generality we may assume that (i,j)=(0,0) and (i',j')=(1,1). * If j'=j+1 (considered modulo 3) and (i'-i) is not a non-zero square, that is, (i-i') is a non-zero square, then without loss of generality we may assume that (i,j)=(1,0) and (i',j')=(0,1). (i,j) (i'j') A_1,1 A_1,2 A_1,3 A_2,1 A_2,2 A_2,3 A_3,1 A_3,2 A_3,3 (0,0) (1,0) (6,0) (5,0) (4,2) (4,0) (2,0) (5,1) (5,2) (4,1) (2,1) (0,0) (1,1) (5,0) (4,2) (6,0) (2,0) (5,1) (4,0) (4,1) (2,1) (5,2) (1,0) (0,1) (4,0) (5,2) (6,0) (2,0) (4,1) (5,0) (5,1) (2,1) (4,2) The previously defined nine subsets of the form A_α, β are non-empty for each of the above listed cases can be observed from the above table. Hence T_1,1 has property P_2,1. Proof of Theorem <ref>. Using Corollary <ref>, the lower bound follows from Lemma <ref> and the upper bounds follow from Lemmas <ref> and <ref>. § CONCLUSIONS Following the introduction of the (n,m)-graphs, their homomorphisms, and (n,m)-chromatic numbers due to Nešetřil and Raspaud <cit.> in 2000, a number of research works were dedicated towards this topic. In fact, the research in this topic has evolved in different tracks such as, finding out the (n,m)-chromatic number of some graphs families <cit.>, the study of relative and absolute (n,m)-clique numbers (that is, analogue of clique numbers) <cit.>, study of the complexity dichotomy of homomorphisms of an input (n,m)-graph G to a prefixed (n,m)-graph H <cit.>, and studies related to the algebraic properties of such systems, and their interactions with certain modifications such a permutation of adjacencies of a vertex (or several vertices) <cit.>. Our work clearly pertains to the first of the research tracks listed above. In this track, the graph families for which the (n,m)-chromatic number is studied are paths and forests <cit.>, graphs with bounded maximum degree <cit.>, graphs with bounded acyclic chromatic number <cit.>, partial 2-trees <cit.>, planar graphs and planar graphs with high girth <cit.>. However, the study of (n,m)-chromatic number of graph families is significantly less in volume compared to the the cases when (n,m)= (1,0) or (0,2). 
To the best of our understanding, the reason for this is the lack of knowledge regarding well-structured (n,m)-graphs which can be used as target graphs for homomorphisms. For (n,m)= (1,0) or (0,2), the Paley tournaments, signed Paley graphs or the Tromp constructions play this role. The way we constructed the target graph in Section <ref> maybe one of the approaches towards tackling this issue. Acknowledgements: This work is partially supported by IFCAM project “Applications of graph homomorphisms” (MA/IFCAM/18/39), SERB-SRG project “Graph homomorphisms and its extensions” (SRG/2020/001575), SERB-MATRICS “Oriented chromatic and clique number of planar graphs” (MTR/2021/000858), and NBHM project “Graph theoretic model of Channel Assignment Problem (CAP) in wireless network” (NBHM/RP-8 (2020)/Fresh). abbrv
http://arxiv.org/abs/2306.01731v1
20230602175753
PAGAR: Imitation Learning with Protagonist Antagonist Guided Adversarial Reward
[ "Weichao Zhou", "Wenchao Li" ]
cs.LG
[ "cs.LG" ]
plain theoremTheorem[section] proposition[theorem]Proposition lemma[theorem]Lemma corollary[theorem]Corollary definition definition[theorem]Definition assumption[theorem]Assumption remark[theorem]Remark myitemize∙ PAGAR: Imitation Learning with Protagonist Antagonist Guided Adversarial Reward Weichao Zhou Boston University Wenchao Li Boston University July 31, 2023 ======================================================================================================================================================================= Imitation learning (IL) algorithms often rely on inverse reinforcement learning (IRL) to first learn a reward function from expert demonstrations. However, IRL can suffer from identifiability issues and there is no performance or efficiency guarantee when training with the learned reward function. In this paper, we propose Protagonist Antagonist Guided Adversarial Reward (PAGAR), a semi-supervised learning paradigm for designing rewards for policy training. PAGAR employs an iterative adversarially search for reward functions to maximize the performance gap between a protagonist policy and an antagonist policy. This allows the protagonist policy to perform well across a set of possible reward functions despite the identifiability issue. When integrated with IRL-based IL, PAGAR guarantees that the trained policy succeeds in the underlying task. Furthermore, we introduce a practical on-and-off policy approach to IL with PAGAR. This approach maximally utilizes samples from both the protagonist and antagonist policies for the optimization of policy and reward functions. Experimental results demonstrate that our algorithm achieves higher training efficiency compared to state-of-the-art IL/IRL baselines in standard settings, as well as zero-shot learning from demonstrations in transfer environments. § INTRODUCTION Reinforcement Learning (RL) relies on reward functions to measure the utility of the trained policy <cit.>. A variety of reward learning approaches are proposed to infer reward functions from human behaviors <cit.>, preference over trajectories <cit.>, or symbolic specification <cit.>. Among those reward learning approaches, inverse reinforcement learning (IRL) <cit.> stands out due to its promise in scenarios such as inferring an expert's intention, re-optimizing a reward in a novel environment <cit.>. The fundamental idea of IRL is to learn a reward function from expert demonstrations so that under this reward function, the expert policy maximally outperforms any other policy. This idea has been the basis for some imitation learning (IL) approaches <cit.>. However, IRL suffers from an identifiability problem where multiple reward functions are consistent with expert demonstrations, even in the infinite-data limit. This ambiguity makes it impossible for IRL to precisely identify the expert's reward function. Moreover, real-world tasks often exhibit issues like covariate shifts, which can lead to IRL learning reward functions inconsistent with expert demonstrations. These challenges can have detrimental effects on imitation learning (IL) when training policies are based on learned reward functions. We elaborate on those effects with a toy example in Figure <ref>(a). In this two-state transition system, an agent can choose action a_0 at state s_0 to stay at s_0 or choose a_1 to reach an absorbing state s_1. Any agent can start from s_0 and for 5 timesteps. The underlying task is to reach s_1. 
An expert chooses a_0 at s_0 at the first timestep, then chooses a_1 to move to s_1, and stays at s_1 for the remaining 3 timesteps. The whole set of reward functions contains only two reward functions r_1 and r_2 as shown in Figure <ref>(b). Suppose that only deterministic policies can be learned. Then the two policies π_1 and π_2 as shown in Figure <ref>(c) are respectively the optimal policies under r_1 and r_2. It is apparent that π_2 is the one that finishes the task. However, since the expert demonstration outperforms the optimal policy π_1 under r_1, and π_2 under r_2, both by 1, a generic IRL algorithm will deem both r_1 and r_2 as the eligible solution indifferently. As a result, an IL agent that picks r_1 to train its policy will never escape s_0 and thus fail the underlying task. In this paper, we aim to train policies with reward functions learned from demonstrations via IRL by circumventing the identifiablity problem. Our approach is motivated by the following intuition: since by design the expert policy achieves high performance under all the reward functions learned via IRL, the policies that we learn should also have the same property. We propose a novel semi-supervised reward design paradigm called Protagonist Antagonist Guided Adversarial Reward (PAGAR). Our approach leverages IRL to generate a set of candidate reward functions and disambiguates them using two policies: an antagonist policy and a protagonist policy. PAGAR incentivizes the selection of reward functions that challenge the protagonist policy to achieve high performance. By training the protagonist policy with these selected reward functions, the protagonist policy can achieve high performance across the candidate reward function set. We will show in Section <ref>.1 that by applying our paradigm to the example in Figure <ref>, the protagonist policy will end up being π_2. The idea of using protagonist and antagonist is inspired by <cit.> where the protagonist and antagonist policies are used for unsupervised environment design (UED). In this paper, we have developed novel theories for using protagonist and antagonist in imitation learning. We prove that training the protagonist policy with reward functions learned via PAGAR guarantees task success under certain conditions. Additionally, if IRL can identify the exp=ert's reward function, PAGAR can also identify it. We further propose an on-and-off-policy approach to IL with PAGAR, maximizing the utilization of samples from both the antagonist and protagonist policies. We present a practical algorithm that efficiently combines IRL and PAGAR for IL. Experimental results demonstrate that our algorithm outperforms baselines on complex IL tasks and zero-shot IL tasks in transfer environments with limited demonstrations. We summarize our contribution below. * We propose PAGAR, a semi-supervised reward design paradigm, and adopt PAGAR in imitation learning to theoretically guarantee the success in the underlying task. * We developed an on-and-off-policy approach to imitation learning with PAGAR by maximally exploiting the samples of the policies. * Our approach enables learning agents to achieve higher performance than the baselines on a set of complex IL and transfer learning tasks with only a few demonstrations. § RELATED WORKS Inverse Reinforcement Learning. The theories of identifiability problem in IRL can be found in <cit.>. 
The Max-Entropy IRL from <cit.>, Max-Margin IRL from <cit.>, and Bayesian IRL from <cit.> aim to mitigate the identifiability problem by assuming a linear reward function. However, these approaches suffer from the feature engineering problem. Deep learning-based IRL methods  <cit.> leverage the connection between IRL and Generative Adversarial Networks (GANs)  <cit.> to improve scalability. However, they do not specifically address the identifiability problem. In <cit.>, a GAN-based IRL algorithm is proposed to learn reward functions that are invariant to changing dynamics. Our framework tackles the identifiability problem from a different angle and complements GAN-based IRL algorithms. Many efforts have been made to resolve ambiguity in reward learning by assuming access to additional information. These include human preferences over trajectories  <cit.>, expert behaviors from multiple tasks  <cit.>, and logical structures for reward functions <cit.>. In contrast, our approach does not rely on such additional information. Reward Design. Previous work has shown that the addition of a state-based potential to the reward function leaves the optimal policy invariant <cit.>. Other reward transformations that retain policy invariance have also been studied <cit.>. These theories imply that reward functions can be designed to provide additional guidance to the learning agent, accelerating the learning process. Our paradigm utilizes adversarial reward functions to provide such guidance. Work on intrinsic <cit.>, exploration-driven <cit.>, and curiosity-driven <cit.> rewards has demonstrated the benefits of leveraging rewards from multiple sources for challenging tasks. Our work also leverages multiple reward functions, but all of them are learned from demonstrations. Inverse reward design (IRD), proposed in <cit.>, also designs reward functions by learning from demonstrations. Safety-aware apprenticeship learning from <cit.> designs reward functions by learning from the agent's failures. However, those works are limited to linear rewards and require feature engineering, whereas our approach does not have such limitations. Multi-Agent Learning. Although our work involves two policies, it differs from multi-agent IRL. Most existing research on multi-agent IRL focuses on Markov Game frameworks where multiple agents interact with each other <cit.>. In our case, the two policies do not interact with each other, and the environment dynamics do not depend on their joint actions. Our work draws inspiration from <cit.>, which utilizes protagonist and antagonist policies for environment design rather than IRL. § PRELIMINARIES Reinforcement Learning (RL) models the environment as a Markov Decision Process (MDP) ℳ=⟨𝕊, 𝔸, 𝒫, d_0⟩ where 𝕊 is the state space; 𝔸 is the action space; 𝒫(s'|s, a) is the probability of reaching a state s' by performing an action a at a state s; and d_0 is the initial state distribution. A policy π(a|s) determines the probability of an RL agent performing an action a at state s. By successively performing actions for T steps after initializing from a state s^(0)∼ d_0, a trajectory τ=s^(0)a^(0)s^(1)a^(1)… s^(T)a^(T) is produced. A state-action reward function is a mapping r:𝕊×𝔸→ℝ. With a slight abuse of notation, we denote the total reward along a trajectory τ as r(τ)=∑^T_t=0γ^t r(s^(t), a^(t)) with a discount factor γ∈(0, 1].
Similarly, 𝒫(τ|π)=∏^T-1_t=0𝒫(s^(t+1)|s^(t),a^(t))π(a^(t)|s^(t)) is the probability of generating a trajectory τ under the policy π. The soft Q-value function of π is 𝒬_π(s^(t), a^(t))=r(s^(t), a^(t)) + γ·s^(t+1)∼𝒫(·|s^(t), a^(t))𝔼[𝒱_π(s^(t+1))], where 𝒱_π is the soft value function of π defined as 𝒱_π(s):=a∼π(·|s)𝔼[𝒬_π(s, a)] + ℋ(π(·|s)), and ℋ(π(·|s)) is the entropy of π at a state s. The soft advantage of performing action a at state s and then following the policy π afterwards is then 𝒜_π(s,a)=𝒬_π(s, a)-𝒱_π(s). The expected return of π under a reward function r is written as η_r(π)=τ∼π𝔼[∑^T_t=0 r(s^(t), a^(t))] where τ∼π is an abbreviation for τ∼𝒫(τ|π). With a slight abuse of notation, let ℋ(π):= ∑^T_t=0s^(t)∼π𝔼[ℋ(π(·|s^(t)))]. The objective of entropy-regularized RL is to learn a policy π that maximizes J_RL(π; r)=η_r(π) + ℋ(π). Inverse Reinforcement Learning (IRL) assumes that a set E = {τ_1,…, τ_N} of expert demonstrations is provided instead of the reward function. It is also assumed that the expert demonstrations are sampled from the roll-outs of the expert's policy π_E. Given a candidate set of reward functions R, maximum entropy IRL <cit.> solves for the reward function that maximizes r∈ Rmax {π∈Πmin - η_r(π) - ℋ(π) }+ η_r(π_E) where η_r(π_E) can be estimated from the demonstration set E when π_E is not accessible. Generative Adversarial Imitation Learning (GAIL) <cit.> draws a connection between inverse reinforcement learning (IRL) and Generative Adversarial Nets (GANs) as in Eq.<ref>. A discriminator D:𝕊×𝔸→ [0,1] is trained by minimizing Eq.<ref> so that D can accurately identify any (s,a) generated by the agent. Meanwhile, an agent policy π is trained as a generator by using a reward function induced from D to maximize Eq.<ref> so that D cannot discriminate τ∼π from τ_E. In adversarial inverse reinforcement learning (AIRL) <cit.>, it is further proposed that by representing D(s,a):=exp(r(s,a))/(exp(r(s,a)) + π(a|s)) with r, when Eq.<ref> is at optimality, r^*≡logπ_E≡𝒜_π_E. By training π with r^* until optimality, π will behave just like π_E. (s, a)∼π𝔼[log (1 - D(s,a))]+ (s, a)∼π_E𝔼[log D(s, a)] § PROTAGONIST ANTAGONIST GUIDED ADVERSARIAL REWARD (PAGAR) Our objective is to construct a policy that performs well across the set of reward functions learned via IRL. To achieve this, we propose a semi-supervised reward design paradigm. §.§ A Semi-supervised Reward Design Paradigm Given a set E of expert demonstrations for some task, let J_IRL(π, r) represent the objective function of an IRL algorithm, such as Eq.<ref>, where D can be represented by r as mentioned earlier. Let R_E be a set of global or local optimal solutions, or simply intermediate solutions, to the IRL algorithm. Let U_r(π) be a function that measures the performance of a policy π under a reward function r, for example, U_r(π):=η_r(π) or U_r(π):= η_r(π)+ℋ(π). For a protagonist policy π_P and a reward function r, the Protagonist Antagonist Induced Regret is defined as in Eq.<ref>, where π_A is referred to as the antagonist policy. Regret(π_P, r):= {π_A∈Πmax U_r(π_A)} - U_r(π_P) A policy does not incur high regret under a reward function if the optimal policy cannot achieve a high value of U_r(·). Using this regret, we define our semi-supervised reward design paradigm as follows. Given a candidate reward function set R_E and a protagonist policy π_P, the objective is to find a reward function r within R_E that maximizes the Protagonist Antagonist Induced Regret, i.e., r∈ R_Emax Regret(r, π_P).
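To make the paradigm concrete, the following is a minimal sketch of the selection rule above for the special case of finite candidate sets. Here U(r, π) stands for any routine that evaluates U_r(π), e.g., exact policy evaluation in a tabular environment or a Monte Carlo estimate of η_r(π); the function names are illustrative and not part of our implementation.

```python
from typing import Callable, Hashable, Iterable

# U(r, pi) is an assumed evaluation routine standing in for U_r(pi).
def protagonist_antagonist_regret(
    r: Hashable,
    protagonist: Hashable,
    policies: Iterable[Hashable],
    U: Callable[[Hashable, Hashable], float],
) -> float:
    """Regret(pi_P, r) = max_{pi_A in Pi} U_r(pi_A) - U_r(pi_P)."""
    best_antagonist_utility = max(U(r, pi) for pi in policies)
    return best_antagonist_utility - U(r, protagonist)

def select_adversarial_reward(
    candidate_rewards: Iterable[Hashable],
    protagonist: Hashable,
    policies: Iterable[Hashable],
    U: Callable[[Hashable, Hashable], float],
) -> Hashable:
    """Pick the reward in R_E that maximizes the induced regret."""
    return max(
        candidate_rewards,
        key=lambda r: protagonist_antagonist_regret(r, protagonist, policies, U),
    )
```

The on-and-off-policy approach introduced later replaces these enumerations with gradient-based optimization of π_A, π_P, and r.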
To illustrate the advantage of PAGAR, we use the example shown in Figure <ref>. Let R_E={r_1, r_2} be the candidate reward function set since r_1, r_2 incur the same loss in the generic IRL as mentioned in Section <ref>. If the protagonist policy π_P=π_1, then r∈ R_Emax Regret(r, π_1) = 2. If the protagonist policy π_P=π_2, then r∈ R_Emax Regret(r, π_2) = 1. In this case, if searching for the policy with the lowest worst-case regret, π_2 will be selected. §.§ Imitation Learning with PAGAR Our imitation learning framework leverages the property of PAGAR by solving a MinimaxRegret problem as defined in Eq.<ref>. MinimaxRegret:=π_P∈Πmin r∈ R_Emax {π_A∈Πmax U_r(π_A)} - U_r(π_P) In Table <ref>, we formally show that solving MinimaxRegret is equivalent to searching for a policy π with the highest score, which is an affine combination of policy performance U_r(π) measured under r drawn from two different reward function distributions. One distribution has a singleton support on a reward function r^*_π that maximizes the policy performance U_r(π) among those who maximizes the regret Regret(π, r). The other one, 𝒫_π, is a baseline distribution which guarantees that: 1) for policies that do not always perform worse than any other policy, the expected U_r(π) values measured under r∼𝒫_π are all equal to a constant c (minimum value for the equality to hold); 2) for any other policy π', the distribution concentrates on the reward function r' under which the policy achieves the highest performance U_r'(π'). The existence of such 𝒫_π is proven in Appendix <ref> and <ref>. Intuitively, the affine combination assigns different weights to the policy performances evaluated under those two distributions. If the policy π performs worse under r^*_π than under many other reward functions (Ur^*_π(π) falls below c), a higher weight will be allocated to using r^*_π to train π. Conversely, if the policy π performs better under r^*_π than under many other reward functions (c falls below Ur^*_π(π)), a higher weight will be allocated to using reward functions drawn from 𝒫_π to train π. An important property of MinimaxRegret is that it chooses a policy that do not fail the underlying task whenever there is a policy that ensures success in the task. We provide a definition below to correlate the notion of success and failure with the reward functions learned via IRL. For any reward function r∈ R_E learned via an IRL algorithm, let 𝕌_r=[πmin U_r(π), πmax U_r(π)] be the interval of the measured policy performance. A reward function r is said to specify the task if and only if there are two intervals S_r=[inf S_r, πmax U_r(π)], F_r=[πmin U_r(π), sup F_r] ⊂𝕌_r with inf F_r ≤sup F_r < inf S_r ≤sup S_r such that any policy π is successful in the task if U_r(π)∈ S_r and a failure if U_r(π)∈ F_r, otherwise its success or failure is uncertain. If such S_r and F_r do not exist, r is said to mis-specify the task. It is possible to have U_r(π_E)∉𝕌_r, and Figure <ref> is an example. This may happen if π_E∉Π or U_r(π_E) is estimated from a very limited number of demonstrations. As mentioned in Section <ref>, if π imitates π_E, its performance under any reward function should be close to the optimal policy under that reward function, even if the reward function mis-specifies the task. With these intuitions in mind, we state the following theorem. 
Suppose that the following three assumptions are satisfied for the set R_E of reward functions learned via IRL: 1) r_sp∈ R_Emax {sup F_r_sp - inf F_r_sp} < r_sp∈ R_Emin {inf S_r_sp - sup F_r_sp} and r_sp∈ R_Emax {sup S_r_sp - inf S_r_sp} < r_sp∈ R_Emin {inf S_r_sp - sup F_r_sp}; there exists a policy π such that 2) for any task-specifying reward function r_sp∈ R_E, U_r_sp(π)∈ S_r_sp; 3) for the any mis-specifying reward function r_mis∈ R_E, π'max U_r_mis(π') - U_r_mis(π) < r_sp∈ R_Emin {inf S_r_sp - sup F_r_sp}. Then MinimaxRegret will choose a policy π_P that does not fail the task, ∀ r_sp∈ R_E.(U_r_sp(π)∉ F_r_sp). Basically, in Theorem <ref>, condition 1) indicates that for each task-specifying reward function r_sp∈ R_E the intervals of success and failure are smaller than uncertain interval; condition 2) indicates that π succeeds in the task, thus performing well under the task-specifying reward functions; condition 3) indicates that π also performs well under the reward functions that mis-specify the task. If those conditions are met, MinimaxRegret provides guarantees that the generic IRL does not provide – MinimaxRegret will always choose policies that do not fail the underlying task. In the example in Figure.<ref>, r_1 mis-specifies the task since its S_r_1 and F_r_1 do not exist. For r_2, we can let S_r_2 = (12, 13], and F_r_2=[10, 11). Under this setting, the conditions in Theorem <ref> are met. MinimaxRegret exclusively selects π_2 as its output. Furthermore, we show that if there exists a policy that performs optimally under all reward functions, i.e., IRL can reach Nash Equilibrium, MinimaxRegret will choose this policy as its solution. Let the IRL loss be in the form of J_IRL(π, r):=U_r(π) -U_r(π_E) for some U_r(π). If r∈ Rmin π∈Πmax J_IRL(r_E, π) can reach Nash Equilibrium with a reward function set R_E and a policy set Π_E, then Π_E equals the set of solutions to MinimiaxRegret. Note that r∈ R_Emin π∈Πmax J_IRL(r_E, π) cannot reach Nash Equilibrium if the set of optimal policies under any r∈ R_E do not have an intersection with that of any other r'∈ R_E, as shown by the example in Figure <ref>. § AN ON-AND-OFF-POLICY APPROACH TO SOLVING MINIMAXREGRET In this section we introduce our approach to solving MinimaxRegret. Our approach involves learning the policies π_P, π_A, as well as the reward function r, in an alternating manner. Given an intermediate learned r, we optimize π_P and π_A. Then we use π_P and π_A to update r, and repeat this process iteratively. We first explain how we optimize π_P, π_A; then we show how we optimize r by optimizing the bounds of Eq.<ref>. We then discuss how we incorporate IRL to enforce the constraint r ∈ R_E. §.§ Policy Optimization with On-and-Off Policy Samples Following the entropy-regularized RL framework, our approach defines U_r(π):=η_r(π) + ℋ(π), which corresponds to the RL objective J_RL(π; r). Given an intermediate learned reward function r, according to MinimaxRegret, the objective function for optimizing π_P is π_Pmax η_r(π_P)-η_r(π_A) + ℋ(π_P). According to <cit.>, η_r(π_P)≥η_r(π_A)+s∈𝕊∑ρ_π_A(s)a∈𝔸∑π_P(a|s)𝒜̂_π_A(s,a) - C·smax D_TV(π_A(·|s), π_P(·|s))^2 where ρ_π_A(s)=∑^T_t=0γ^t Prob(s^(t)=s|π_A) is the discounted visitation frequency of π_A; 𝒜̂_π_A(s,a) is the advantage function when the entropy regularizer ℋ is not considered; C is some constant. 
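The second term on the right-hand side of this bound can be estimated purely from antagonist rollouts, since the inner sum over actions can be rewritten as an expectation under π_A weighted by the ratio π_P(a|s)/π_A(a|s). A simplified PyTorch-style sketch of such an estimate is given below; the batch layout and the policy interface are assumptions made for illustration, not our exact implementation.

```python
import torch

def surrogate_advantage_term(policy_P, batch_A, gamma):
    """Estimate sum_s rho_{pi_A}(s) sum_a pi_P(a|s) A_hat_{pi_A}(s, a)
    from antagonist rollouts via importance sampling."""
    # batch_A is assumed to hold states, actions, antagonist log-probs,
    # GAE advantages, and within-trajectory timesteps collected with pi_A.
    logp_P = policy_P.log_prob(batch_A["states"], batch_A["actions"])
    ratio = torch.exp(logp_P - batch_A["logp_A"])        # pi_P / pi_A
    discount = gamma ** batch_A["t"].float()
    # The gamma^t factor accounts for the discounted visitation frequency;
    # averaging over trajectories yields a sample estimate of the sum over states.
    return (discount * ratio * batch_A["adv_A"]).sum() / batch_A["num_trajs"]
```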
By utilizing the samples of π_A, we can maximize J_PPO(π_P;π_A, r):=s∈𝕊∑ρ_π_A(s)a∈𝔸∑π_P(a|s)𝒜̂_π_A(s,a) as a surrogate objective for maximizing η_r(π_P)-η_r(π_A) by following the theories in <cit.> and <cit.>. We also maximize J_RL(π_P; r) with the samples of π_P itself via an on-policy RL algorithm. Regarding the optimization of π_A, since π_A is intended to be the optimal policy for πmax U_r(π), we train π_A with r using an on-policy RL algorithm and estimate the 𝒜̂_π_A in J_PPO(π_P; π_A, r) via Generalized Advantage Estimation (GAE) <cit.>. The loss functions for optimizing π_A and π_P are denoted as J_RL(π_A; r) and J_PPO(π_P; π_A, r) + J_RL(π_P; r) respectively. §.§ Regret Minimization with On-and-Off Policy Samples Given the intermediate learned protagonist and antagonist policy π_P and π_A, according to MinimaxRegret, we optimize r to maximize Regret(r, π_P):=π∈Πmax η_r(π)-η_r(π_A). Our objective function for optimizing r combines multiple objective functions derived from the bounds of the regret. First, we note that optimizing r to maximize Regret(r, π_P):=U_r(π_A) - U_r(π_P) is to maximize η_r(π_A)-η_r(π_P). Then we utilize Theorem <ref> to extract the two bounds of η_r(π_A) - η_r(π_P) Suppose that π_2 is the optimal policy in terms of entropy-regularized RL under r. Let α = smax D_TV(π_1(·|s), π_2(·|s)), ϵ = s,amax |𝒜_π_2(s,a)|, and Δ𝒜(s)=a∼π_1𝔼[𝒜_π_2(s,a)] - a∼π_2𝔼[𝒜_π_2(s,a)]. For all policy π_1, the following bounds hold. |η_r(π_1) -η_r(π_2) - ∑^∞_t=0γ^ts^(t)∼π_1𝔼[Δ𝒜(s^(t))]| ≤ 2αγϵ/(1-γ)^2 |η_r(π_1)-η_r(π_2) - ∑^∞_t=0γ^ts^(t)∼π_2𝔼[Δ𝒜(s^(t))]| ≤ 2αγ(2α+1)ϵ/(1-γ)^2 By letting π_P be π_1 and π_A be π_2, Theorem <ref> implies that we can derive two bounds for η(π_A)-η(π_P) respectively by only using the samples of π_A or π_P. Following <cit.>, we let r be a proxy of 𝒜_π_2 in the bounds of Eq.<ref> and <ref>. Then we derive two loss functions J_R,1(r;π_P, π_A) and J_R,2(r; π_P, π_A) as in Eq.<ref> and <ref>. These loss functions incorporate the importance sampling ratios δ_1(s,a)=π_P(a|s)/π_A(a|s) and δ_2(s,a)=π_A(a|s)/π_P(a|s), and constants C_1 and C_2 that are proportional to the maximum KL divergence between π_A and π_P (to bound α as in <cit.>). The PAGAR objective function is defined as J_PAGAR:=J_R,1 + J_R,2. J_R,1(r;π_P, π_A):= τ∼π_A𝔼[∑^∞_t=0γ^t(δ_1(s^(t),a^(t))- 1)· r(s^(t),a^(t))] +C_1·(s,a)∼π_Amax|r(s,a)| J_R,2(r; π_P, π_A) := τ∼π_P𝔼[∑^∞_t=0γ^t(1 - δ_2(s^(t), a^(t))) · r(s^(t),a^(t))] + C_2·(s,a)∼π_Pmax|r(s,a)| §.§ Algorithm for Solving MinimaxRegret In addition to the PAGAR loss J_PAGAR(r; π_P, π_A), we incorporate the constraint r∈ R_E by adding the IRL loss as a penalty. We introduce a Lagrangian parameter λ and formulate the objective function for optimizing r in MinimaxRegret as r∈ Rmin J_PAGAR(r; π_P, π_A) + λ· J_IRL(π_A, r). Algorithm <ref> describes our approach for solving MinimaxRegret. The algorithm iteratively trains the policies and the reward function in an alternate manner. It first trains π_A in line 4 to compute its advantage values. Then, it employs the on-and-off policy approach to train π_P in line 5, utilizing the Proximal Policy Optimization (PPO) algorithm to minimize J_PPO. In line 6, J_PAGAR is estimated based on both 𝔻_A and 𝔻_P. The specific implementation may vary when incorporating different IRL algorithms, and further details are provided in Appendix <ref>. § EXPERIMENTS This section focuses on assessing whether IL with PAGAR can accelerate the agents' learning process and overcome some practical issues, such as covariate shift. 
We compare our algorithms with baselines in standard IL/IRL benchmarks and zero-shot learning tasks in transfer environments. We exhibit our main results in the main text and provide details and additional results in Appendix <ref>. §.§ Standard IL/IRL Tasks Continuous Control Domain. Our benchmarks include three Mujuco tasks: Walker2d-v2, HalfCheetah-v2, Hopper-v2. We compare our approach mainly with two baselines: GAIL <cit.> and VAIL <cit.>, which is based on GAIL but additionally optimizes a variational discriminator bottleneck (VDB) objective. We use the IRL techniques behind those two baseline algorithms in our approach, resulting in two versions of Algorithm <ref>, a GAIL-versioned one and a VAIL-versioned one. More specifically, if the baseline optimizes a J_IRL objective, we use the same J_IRL objective in Algorithm <ref>. Additionally, we compare our algorithm with a state-of-the-art (SOTA) IRL algorithm, IQ-learn <cit.>, which, however, is not compatible with our algorithm because it does not optimize a reward function. In all experiments, we adopt MLPs to approximate the policy and the reward function. The networks are the same between ours and the baselines. PPO <cit.> algorithm is employed for policy training for GAIL, VAIL, and ours; 100 trajectories with high returns are used as demonstrations for HalfCheetah-v2 task and 10 for the other two. We constrain the size of the replay buffer to 2048 for our algorithm and all the baselines. In Figure <ref>, we compare Algorithm <ref> with the baselines in the same task. The results show that Algorithm <ref> outperforms the baselines when the task is challenging. Especially in the HalfCheetah-v2 task, Algorithm <ref> the protagonist policy achieves the same level of performance as the policy in GAIL and VAIL by using almost half the number of iterations. We note that IQ-learn would have performed better when the size of the replay buffer was decades of times larger. Partially Observable Navigation Tasks. Our benchmarks include two discrete domain tasks from Mini-Grid environment: DoorKey-6x6-v0, and SimpleCrossingS9N1-v0. In DoorKey-6x6-v0, as shown in Figure <ref>(a), an agent needs to pick up a key, unlock a door and reach a target tile; in SimpleCrossingS9N1, as shown in Figure <ref>(b), an agent needs to pass an opening on a wall and reach a target tile. The placements of the objects and doors are randomized in each instance of an environment. The agent can only observe the 7×7 tiles in front of it if there is no obstacle. At each timestep, the agent can choose one out of 7 actions. By default, the reward is always zero unless the agent finishes the tasks. We use GAIL, VAIL, and IQ-learn as the baselines for those benchmarks. The policy, the critic and the reward function are all approximated with convolutional networks. As shown in Figure <ref>(a) and (b), Algorithm <ref> produces high performant policy with high efficiency. §.§ Zero-Shot IL in Transfer Environments We use the SimpleCrossing series of tasks as our benchmark. In this experiment, we show that by observing the expert's behaviors of finishing the task in SimpleCrossingS9N1-v0, PAGAR can enable the agent to infer the goal of the task and imitate the expert to finish the same task in different environments. As shown in Figure <ref>, we apply Algorithm <ref> and the baselines, GAIL, VAIL and IQ-learn, in SimpleCrossingS9N2-v0 and SimpleCrossingS9N3-v0, where there are more and more walls in the environment. 
The results show that the more complex the environment becomes, the more the agent trained by Algorithm <ref> outperforms the baselines. In Figure <ref>, we show the heatmaps of the rewards along a path to the goal cell in a SimpleCrossingS9N3 environment. The rewards are generated, respectively, by the reward functions learned via GAIL and via Algorithm <ref> in the environment. While both methods can solve this task, the reward designed via PAGAR produces higher rewards along the path, especially at the crossings. § CONCLUSION We propose PAGAR, a semi-supervised reward design paradigm that generates adversarial reward functions under the guidance of a protagonist policy and an antagonist policy. Imitation learning with PAGAR can circumvent the identifiability problem of IRL by training a policy that performs well across the unidentifiable reward functions learned via IRL. We have presented an on-and-off-policy approach to imitation learning with PAGAR. This approach maximizes the utilization of policy samples and optimizes bounds that ensure both policy and reward improvement. Our experiments have demonstrated the superiority of our algorithm over baseline imitation learning algorithms in challenging IL tasks. Moreover, our algorithm can facilitate learning from demonstrations in transfer environments. Future work will focus on further improving the algorithm to accommodate other types of IRL approaches, such as those that use implicit rewards. § APPENDIX In the appendix, we provide additional details of the theories and the experiments. The contents of this appendix are as follows. * In Appendix <ref>, we discuss details of PAGAR and MinimaxRegret that were omitted in Section <ref>. We briefly introduce some necessary preliminaries in Appendix <ref>. Then we derive Theorem <ref> to support Table <ref> in Appendix <ref>. Finally, we prove Theorems <ref> and <ref> in Appendix <ref>. * In Appendix <ref>, we provide details of Imitation Learning with PAGAR that were omitted in Section <ref>. We prove Theorem <ref> in Appendix <ref>. Then we derive the objective functions in Appendix <ref>. Some details of Algorithm <ref> are explained in Appendix <ref>. * In Appendix <ref>, we provide experimental details and additional results. § REWARD DESIGN WITH PAGAR This paper does not aim to resolve the identifiability problem of IRL but provides a way to circumvent it. We assume that by solving some IRL problem a set of reward functions will be obtained, and that while some of these reward functions can be used to train policies efficiently, others cannot. The goal of the proposed approach is to select from those reward functions for continued policy training. In this section, we show that PAGAR is a semi-supervised reward design problem, and that imitation learning via PAGAR is equivalent to learning a policy that maximizes an affine combination of utilities measured under a distribution of reward functions.
A policy, π_1, is totally dominated by some policy π_2 w.r.t a reward function set R, if for every pair of reward functions r_1, r_2∈ R, U_r_1(π_1) < U_r_2(π_2). If π_1 totally dominate π_2 w.r.t R, π_2 can be regarded as being unconditionally better than π_1. In other words, the two sets {U_r(π_1)|r∈ R} and {U_r(π_2)|r∈ R} are disjoint, such that sup{U_r(π_1)|r∈ R} < inf{U_r(π_2)|r∈ R}. Conversely, if a policy π is not totally dominated by any other policy, it indicates that for any other policy, say π_2, sup{U_r(π_1)|r∈ R}≥inf{U_r(π_2)|r∈ R}. A reward function set R specifies an ordering ≺_R among policies such that π_1≺_Rπ_2 if and only if π_1 is totally dominated by π_2 w.r.t. R. Especially, designing a reward function r is to establish an ordering ≺_{r} among policies. Total domination can be extended to policy-conditioned reward design, where the reward function r is selected by following a decision rule ℛ(π) such that ∑_r∈ Rℛ(π)(r)=1. We let 𝒰_ℛ(π)=r∈ R_E∑ℛ(π)(r) · U_r(π) be an affine combination of U_r(π)'s with its coefficients specified by ℛ(π). A policy conditioned decision rule ℛ is said to prefer a policy π_1 to another policy π_2, which is notated as π_1≺^ℛπ_2, if and only if 𝒰_ℛ(π_1)< 𝒰_ℛ(π_2). Making a decision rule for selecting reward functions from a reward function set to respect the total dominance w.r.t this reward function set is an unsupervised learning problem, where no additional external supervision is provided. If considering expert demonstrations as a form of supervision and using it to constrain the set ℛ_E of reward function via IRL, the reward design becomes semi-supervised. §.§ Solution to the MinimaxRegret In Table <ref>, we mentioned that solving the MinimaxRegret problem is equivalent to finding an optimal policy π^* to maximize a 𝒰_ℛ(π) under a decision rule, which we denote as ℛ_E here. In order to show such an equivalence, we follow the same routine as in <cit.>, and start by introducing the concept of weakly total domination. A policy π_1 is weakly totally dominated w.r.t a reward function set R by some policy π_2 if and only if for any pair of reward function r_1, r_2∈ R, U_r_1(π_1) ≤ U_r_2(π_2). Note that a policy π being totally dominated by any other policy is a sufficient but not necessary condition for π being weakly totally dominated by some other policy. A policy π_1 being weakly totally dominated by a policy π_2 implies that sup{U_r(π_1)|r∈ R}≤inf{U_r(π_2)|r∈ R}. We assume that there does not exist a policy π that weakly totally dominates itself, which could happen if and only if U_r(π) is a constant. We formalize this assumption as the following. For the given reward set R and policy set Π, there does not exist a policy π such that for any two reward functions r_1, r_2∈ R, U_r_1(π)=U_r_2(π). This assumption makes weak total domination a non-reflexive relation. It is obvious that weak total domination is transitive and asymmetric. Now we show that successive weak total domination will lead to total domination. for any three policies π_1, π_2, π_3∈Π, if π_1 is weakly totally dominated by π_2, π_2 is weakly totally dominated by π_3, then π_3 totally dominates π_1. According to the definition of weak total domination, r∈ Rmax U_r(π_1)≤r∈ Rmin U_r(π_2) and r∈ Rmax U_r(π_2)≤r∈ Rmin U_r(π_3). If π_1 is weakly totally dominated but not totally dominated by π_3, then r∈ Rmax U_r(π_1)=r∈ Rmin U_r(π_3) must be true. However, it implies r∈ Rmin U_r(π_2)=r∈ Rmax U_r(π_2), which violates Assumption <ref>. We finish the proof. 
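As a quick illustration of the two notions of domination defined above, the following sketch checks them for policies whose utilities over a finite reward set are given as lists; the numerical values are hypothetical.

```python
def totally_dominated(U1, U2):
    """pi_1 is totally dominated by pi_2 w.r.t. R iff U_r(pi_1) < U_{r'}(pi_2)
    for every pair r, r' in R, i.e. max_r U_r(pi_1) < min_r U_r(pi_2)."""
    return max(U1) < min(U2)

def weakly_totally_dominated(U1, U2):
    """Weak version of the same condition: max_r U_r(pi_1) <= min_r U_r(pi_2)."""
    return max(U1) <= min(U2)

# Hypothetical utilities of two policies over a reward set R = {r_1, r_2}.
U_pi1 = [1.0, 2.0]
U_pi2 = [2.0, 3.5]
assert weakly_totally_dominated(U_pi1, U_pi2)   # max U_pi1 == min U_pi2
assert not totally_dominated(U_pi1, U_pi2)
```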
For the set Π_wtd⊆Π of policies that are not weakly totally dominated by any other policy in the whole set of policies w.r.t a reward function set R, there exists a range 𝕌⊆ℝ such that for any policy π∈Π_wtd, 𝕌⊆ [r∈ Rmin U_r(π), r∈ Rmax U_r(π)]. For any two policies π_1, π_2∈Π_wtd, it cannot be true that r∈ Rmax U_r(π_1) = r∈ Rmin U_r(π_2) nor r∈ Rmin U_r(π_1) = r∈ Rmax U_r(π_2), because otherwise one of the policies weakly totally dominates the other. Without loss of generalization, we assume that r∈ Rmax U_r(π_1) > r∈ Rmin U_r(π_2). In this case, r∈ Rmax U_r(π_2)>r∈ Rmin U_r(π_1) must also be true, otherwise π_1 weakly totally dominates π_2. Inductively, π∈Π_wtdmin r∈ Rmax U_r(π) > π∈Π_wtdmax r∈ Rmin U_r(π). Letting ub=π∈Π_wtdmin r∈ Rmax U_r(π) and lb=π∈Π_wtdmax r∈ Rmin U_r(π), any 𝕌⊆ [lb, ub] shall support the assertion. We finish the proof. For a reward function set R, if a policy π∈Π is weakly totally dominated by some other policy in Π and there exists a subset Π_wtd⊆Π of policies that are not weakly totally dominated by any other policy in π, then r∈ Rmax U_r(π) < π'∈Π_wtdmin r∈ Rmax U_r(π') If π_1 is weakly totally dominated by a policy π_2∈Π, then r∈ Rmin U_r(π_2)=r∈ Rmax U_r(π). If r∈ Rmax U_r(π) ≥π'∈Π_wtdmin r∈ Rmax U_r(π'), then r∈ Rmin U_r(π_2) ≥π'∈Π_wtdmin r∈ Rmax U_r(π'), making at least one of the policies in Π_wtd being weakly totally dominated by π_2. Hence, r∈ Rmax U_r(π) < π'∈Π_wtdmin r∈ Rmax U_r(π') must be true. Given a policy π and a reward function r, the regret is represented as Eq.<ref> Regret(π, r) := π'max U_r(π') - U_r(π) Recalling the goal of this paper is to select a reward function from the solutions of an IRL problem r∈ Rmax πmin J_IRL(r, π), we specify the feasible set of reward functions as R_E={r|r∈r∈ Rmax πmin J_IRL(r, π)}. Then we represent the MinimaxRegret problem in Eq.<ref>. MinimaxRegret := π∈Πmin{r∈ R_Emax Regret(π, r)} We denote as r^*_π∈ R_E the reward function that maximizes U_r(π) among all the r's that achieve the maximization in Eq.<ref>. Formally, r^*_π ∈ r∈ R_Emax U_r(π) s.t. r∈r'∈ R_Emax Regret(π, r') Then MinimaxRegret can be defined as minimizing the worst-case regret as in Eq.<ref>. Next, we want to show that for some decision rule ℛ_E, the set of optimal policies which maximizes 𝒰_ℛ_E are the solutions to MinimaxRegret. Formally, MinimaxRegret = π∈Πmax 𝒰_ℛ_E(π) We design ℛ_E by letting ℛ_E(π):= ℛ_E(π)·δ_r^*_π + (1-ℛ_E(π))·ℛ_E(π) where ℛ_E: Π→Δ(R_E) is a policy conditioned distribution over reward functions, δ_r^*_π be a delta distribution centered at r^*_π, and ℛ_E(π) is a coefficient. We show how to design ℛ_E by using the following lemma. Given that the reward function set is R_E, there exists a decision rule ℛ_E: Π→Δ(R_E) which guarantees that: 1) for any policy π that is not weakly totally dominated by any other policy in Π, i.e., π∈Π_ wtd⊆Π, 𝒰_ℛ_E(π)≡ c where c=π'∈Π_ wtdmax r∈ R_Emin U_r(π'); 2) for any π that is weakly totally dominated by some policy but not totally dominated by any policy, 𝒰_ℛ_E(π)=r∈ R_Emax U_r(π); 3) if π is totally dominated by some other policy, ℛ_E(π) is a uniform distribution. Since the description of ℛ_E for the policies in condition 2) and 3) are self-explanatory, we omit the discussion on them. For the none weakly totally dominated policies in condition 1), having a constant 𝒰_ℛ_E(π)≡ c is possible if and only if for any policy π∈Π_ wed, c∈[r∈ R_Emin U_r(π'), r∈ R_Emax U_r(π')]. As mentioned in the proof of Lemma <ref>, c can exist within [r∈ Rmin U_r(π), r∈ Rmax U_r(π)]. 
Hence, c=π'∈Π_ wtdmax r∈ R_Emin U_r(π') is a valid assignment. Then by letting ℛ_E(π):=Regret(π, r^*_π)/ c - U_r^*_π(π), we have the following theorem. By letting ℛ_E(π):= ℛ_E(π)·δ_r^*_π + (1-ℛ_E(π))·ℛ_E(π) with ℛ_E(π):=Regret(π, r^*_π)/ c - U_r^*_π(π) and any ℛ_E that satisfies Lemma <ref>, MinimaxRegret = π∈Πmax 𝒰_ℛ_E(π) If a policy π∈Π is totally dominated by some other policy, since there exists another policy with larger 𝒰_ℛ_E, π cannot be a solution to π∈Πmax 𝒰_ℛ_E(π). Hence, there is no need for further discussion on totally dominated policies. We discuss the none weakly totally dominated policies and the weakly totally dominated but not totally dominated policies (shortened to "weakly totally dominated" from now on) respectively. First we expand π∈Πmax 𝒰_ℛ_E(π) as in Eq.<ref>. π∈Πmax 𝒰_ℛ_E(π) = π∈Πmax r∈ R_E∑ℛ_E(π)(r)· U_r(π) = π∈Πmax Regret(π, r^*_π)· U_r^*_π (π) + (𝒰_ℛ_E(π)-U_r^*_π(π) - Regret(π, r^*_π)) ·𝒰_ℛ_E(π)/c-U_r^*_π(π) = π∈Πmax (𝒰_ℛ_E(π)-U_r^*_π(π))·𝒰_ℛ_E(π) - (𝒰_ℛ_E(π) - U_r^*_π (π))· Regret(π, r^*_π))/c-U_r^*_π(π) = π∈Πmax 𝒰_ℛ_E(π)-U_r^*_π(π)/c-U_r^*_π(π)·𝒰_ℛ_E(π) - Regret(π, r^*_π) 1) For the none weakly totally dominated policies, since by design 𝒰_ℛ_E≡ c, Eq.<ref> is equivalent to π∈Π_1max - Regret(π, r^*_π) which is exactly MinimaxRegret. Hence, the equivalence holds among the none weakly totally dominated policies. Furthermore, if a none weakly totally dominated policy π∈Π_ wtd achieves MinimaxRegret, its 𝒰_ℛ_E(π) is also no less than any weakly totally dominated policy. Because according to Lemma <ref>, for any weakly totally dominated policy π_1, its 𝒰_ℛ_E(π_1)≤ c, hence 𝒰_ℛ_E(π)-U_r^*_π(π)/c-U_r^*_π(π)·𝒰_ℛ_E(π_1)≤ c. Since Regret(π, r^*_π)≤ Regret(π_1, r^*_π_1), 𝒰_ℛ_E(π)≥𝒰_ℛ_E(π_1). Therefore, we can assert that if a none weakly totally dominated policy π is a solution to MinimaxRegret, it is also a solution to π∈Πmax 𝒰_ℛ_E(π). Additionally, to prove that if a none weakly totally dominated policy π is a solution to π'∈Πmax 𝒰_ℛ_E(π'), it is also a solution to MinimaxRegret, it is only necessary to prove that π achieve no larger regret than all the weakly totally dominated policies. But we delay the proof to 2). 2) If a policy π is weakly totally dominated and is a solution to MinimaxRegret, we show that it is also a solution to π∈Πmax 𝒰_ℛ_E(π), i.e., its 𝒰_ℛ_E(π) is no less than that of any other policy. We start by comparing with non weakly totally dominated policy. for any weakly totally dominated policy π_1∈ MinimaxRegret, it must hold true that Regret(π_1, r^*_π_1)≤ Regret(π_2, r^*_π_2) for any π_2∈Π that weakly totally dominates π_1. However, it also holds that Regret(π_2, r^*_π_2)≤ Regret(π_1, r^*_π_2) due to the weak total domination. Therefore, Regret(π_1, r^*_π_1)= Regret(π_2, r^*_π_2)=Regret(π_1, r^*_π_2), implying that π_2 is also a solution to MinimaxRegret. It also implies that U_r^*_π_2(π_1)=U_r^*_π_2(π_2)≥ U_r^*_π_1(π_1) due to the weak total domination. However, by definition U_r^*_π_1(π_1)≥ U_r^*_π_2(π_1). Hence, U_r^*_π_1(π_1)= U_r^*_π_2(π_1)=U_r^*_π_2(π_2) must hold. Now we discuss two possibilities: a) there exists another policy π_3 that weakly totally dominates π_2; b) there does not exist any other policy that weakly totally dominates π_2. First, condition a) cannot hold. Because inductively it can be derived U_r^*_π_1(π_1)= U_r^*_π_2(π_1)=U_r^*_π_2(π_2)=U_r^*_π_3(π_3), while Lemma <ref> indicates that π_3 totally dominates π_1, which is a contradiction. 
Hence, there does not exist any policy that weakly totally dominates π_2, meaning that condition b) is certain. We note that U_r^*_π_1(π_1)= U_r^*_π_2(π_1)=U_r^*_π_2(π_2) and the weak total domination between π_1, π_2 imply that r^*_π_1, r^*_π_2∈r∈ R_Emax U_r(π_1), r^*_π_2∈r∈ R_Emin U_r(π_2), and thus r∈ R_Emin U_r(π_2)≤π∈Π_ wtdmax r∈ R_Emin U_r(π)=c. Again, π_1∈ MinimaxRegret makes Regret(π_1, r^*_π)≤ Regret(π_1, r^*_π_1)≤ Regret(π, r^*_π) not only hold for π=π_2 but also for any other policy π∈Π_ wtd, then for any policy π∈Π_ wtd, U_r^*_π(π_1)≥ U_r^*_π(π) ≥r∈ R_Emin U_r(π). Hence, U_r^*_π(π_1)≥π∈Π_ wtdmax r∈ R_Emin U_r(π)=c. Since U_r^*_π(π_1)=r∈ R_Emin U_r(π_2) as aforementioned, r∈ R_Emin U_r(π_2) > π∈Π_ wtdmax r∈ R_Emin U_r(π) will cause a contradiction. Hence, r∈ R_Emin U_r(π_2) =π∈Π_ wtdmax r∈ R_Emin U_r(π)=c. As a result, 𝒰_ℛ_E(π)=U_r^*_π(π)=π' ∈Π_ wtdmax r∈ R_Emin U_r(π')=c, and 𝒰_ℛ_E(π)=c- Regret(π, r^*_π)≥π' ∈Π_ wtdmax c - Regret(π', r^*_π')=π' ∈Π_ wtdmax 𝒰_ℛ_E(π'). In other words, if a weakly totally dominated policy π is a solution to MinimaxRegret, then its 𝒰_ℛ_E(π) is no less than that of any non weakly totally dominated policy. This also complete the proof at the end of 1), because if a none weakly totally dominated policy π_1 is a solution to π∈Πmax 𝒰_ℛ_E(π) but not a solution to MinimaxRegret, then Regret(π_1, r^*_π_1)>0 and a weakly totally dominated policy π_2 must be the solution to MinimaxRegret. Then, 𝒰_ℛ_E(π_2)=c > c-Regret(π_1, r^*_π_1)=𝒰_ℛ_E(π_1), which, however, contradicts π_1∈π∈Πmax 𝒰_ℛ_E(π). It is obvious that a weakly totally dominated policy π∈ MinimaxRegret has a 𝒰_ℛ_E(π) no less than any other weakly totally dominated policy. Because for any other weakly totally dominated policy π_1, 𝒰_ℛ_E(π_1)≤ c and Regret(π_1, r^*_π_1)≤ Regret(π, r^*_π), hence 𝒰_ℛ_E(π_1)≤𝒰_ℛ_E(π) according to Eq.<ref>. So far we have shown that if a weakly totally dominated policy π is a solution to MinimaxRegret, it is also a solution to π'∈Πmax 𝒰_ℛ_E(π'). Next, we need to show that the reverse is also true, i.e., if a weakly totally dominated policy π is a solution to π∈Πmax 𝒰_ℛ_E(π), it must also be a solution to MinimaxRegret. In order to prove its truthfulness, we need to show that if π∉ MinimaxRegret, whether there exists: a) a none weakly totally dominated policy π_1, or b) another weakly totally dominated policy π_1, such that π_1∈ MinimaxRegret and 𝒰_ℛ_E(π_1)≤𝒰_ℛ_E(π). If neither of the two policies exists, we can complete our proof. Since it has been proved in 1) that if a none weakly totally dominated policy achieves MinimaxRegret, it also achieves π'∈Πmax 𝒰_ℛ_E(π'), the policy described in condition a) does not exist. Hence, it is only necessary to prove that the policy in condition b) also does not exist. If such weakly totally dominated policy π_1 exists, π∉ MinimaxRegret and π_1∈ MinimaxRegret indicates Regret(π, r^*_π) > Regret(π_1, r^*_π_1). Since 𝒰_ℛ_E(π_1)≥𝒰_ℛ_E(π), according to Eq.<ref>, 𝒰_ℛ_E(π_1)=c - Regret(π_1, r^*_π_1)≤𝒰_ℛ_E(π)=𝒰_ℛ_E(π)-U_r^*_π(π)/c-U_r^*_π(π)·𝒰_ℛ_E(π) - Regret(π, r^*_π). Thus 𝒰_ℛ_E(π)-U_r^*_π(π)/c-U_r^*_π(π)(π) ·𝒰_ℛ_E≥ c + Regret(π, r^*_π) - Regret(π_1, r^*_π_1) > c, which is impossible due to 𝒰_ℛ_E≤ c. Therefore, such π_1 also does not exist. In fact, this can be reasoned from another perspective. If there exists a weakly totally dominated policy π_1 with U_r^*_π_1(π_1)=c=U_r^*_π(π) but π_1∉ MinimaxRegret, then Regret(π, r^*_π) > Regret(π_1, r^*_π_1). It also indicates π'∈Πmax U_r^*_π(π') > π'∈Πmax U_r^*_π_1(π'). 
Meanwhile, Regret(π_1, r^*_π):=π'∈Πmax U_r^*_π(π') - U_r^*_π(π_1) ≤ Regret(π_1, r^*_π_1):= π'∈Πmax U_r^*_π_1(π') - U_r^*_π_1(π_1):= r∈ R_Emax π'∈Πmax U_r(π') - U_r(π_1) indicates π'∈Πmax U_r^*_π(π') - π'∈Πmax U_r^*_π_1(π') ≤ U_r^*_π(π_1) - U_r^*_π_1(π_1). However, we have proved that, for a weakly totally dominated policy, π_1 ∈ MinimaxRegret indicates U_r^*_π_1(π_1)=r∈ R_Emax U_r(π_1). Hence, π'∈Πmax U_r^*_π(π') - π'∈Πmax U_r^*_π_1(π') ≤ U_r^*_π(π_1) - U_r^*_π_1(π_1)≤ 0 and it contradicts π'∈Πmax U_r^*_π(π') > π'∈Πmax U_r^*_π_1(π'). Therefore, such π_1 does not exist. In summary, we have exhausted all conditions and can assert that for any policies, being a solution to MinimaxRegret is equivalent to a solution to π∈Πmax 𝒰_ℛ_E(π). We complete our proof. §.§ Measuring Policy Performance Recall that the function U_r(π) is used to measure the performance of a policy π under a reward function r. In <cit.>, U_r(π)=η_r(π). In this section, we discuss the validity of letting U_r(π) be the loss function of a generic IRL objective, e.g., U_r(π)=η_r(π)-η_r(π_E) where η_r(π_E) measures the expected return of the expert policy π_E and can be estimated if an expert demonstration set E instead of π_E is provided. If further letting R_E={r|r∈r'∈ Rmin π∈Πmax U_r'(π) - U_r'(π_E)}, π∈Πmax U_r(π) is a constant for any r∈ R_E, notated as u:= π∈Πmax U_r(π). Because by definition R_E={r|r∈r∈ Rmin π∈Πmax U_r(π)}. If there exists r_1, r_2∈ R_E such that π∈Πmax U_r_1(π)<π∈Πmax U_r_2(π), r_2 will not be a member of R_E. Furthermore, {U_r(π)|π∈Π, r∈ R_E} will be upper-bounded by a constant u=π∈Πmax U_r(π). Because if there exists a policy π∈Π and a reward function r∈ R_E with U_r(π) > u, it contradicts the fact that u=π'∈Πmax U_r(π'). In this case, MinimaxRegret=π∈Πmin r∈ R_Emax Regret(π, r)=π∈Πmin r∈ R_Emax u- U_r(π)=π∈Πmax r∈ R_Emin U_r(π). Note that before making any other assumption on R_E, Π and U_r(·), π∈Πmax r∈ R_Emin U_r(π) cannot be regarded as the same as IRL itself r∈ Rmin π∈Πmax U_r(π). The solution to π∈Πmax r∈ R_Emin U_r(π) is the policy with the highest worst case U_r(π) for r∈ R_E. The IRL problem however may induce a policy that maximizes U_r(·) for some r∈ R_E while minimizing U_r'(·) for some other r'∈ R_E. While r∈ Rmin π∈Πmax U_r(π)=u, it is possible that π∈Πmax r∈ R_Emin U_r(π)<u. In fact, it is easily observable that the solutions to MinimaxRegret with some U_r(π) will be the same as that of letting U_r(π):=U_r(π) - U_r(π_E). Hence, in this paper we simply use η_r(π) as U_r(π). If a policy π∈ MinimaxRegret when the policy performance is measured with some U_r, then π∈ MinimaxRegret when letting U_r(π):=U_r(π) - U_r(π_E). When using U_r(π):=U_r(π) - U_r(π_E) to measure the policy performance, solving MinimaxRegret is to solve Eq. <ref>, which is the same as Eq.<ref>. MimimaxRegret = π∈Πmax r∈ R_Emin Regret(π, r) = π∈Πmax r∈ R_Emin π'∈Πmax{U_r(π')-U_r(π_E)} - (U_r(π) - U_r(π_E) = π∈Πmax r∈ R_Emin π'∈Πmax U_r(π') - U_r(π) §.§ Criterion for Successful Policy Learning Theorem <ref>. 
Suppose that the following three assumptions are satisfied for the set R_E of reward functions learned via IRL: 1) r_sp∈ R_Emax {sup F_r_sp - inf F_r_sp} < r_sp∈ R_Emin {inf S_r_sp - sup F_r_sp} and r_sp∈ R_Emax {sup S_r_sp - inf S_r_sp} < r_sp∈ R_Emin {inf S_r_sp - sup F_r_sp}; there exists a policy π such that 2) for any task-specifying reward function r_sp∈ R_E, i.e., U_r_sp(π)∈ S_r_sp; 3) for the any mis-specifying reward function r_mis∈ R_E, i.e., π'max U_r_mis(π') - U_r_mis(π) < r_sp∈ R_Emin {inf S_r_sp - sup F_r_sp}. Then MinimaxRegret will choose a policy π_P that does not fail the task, i.e., ∀ r_sp∈ R_E.(U_r_sp(π)∉ F_r_sp). Suppose that the three conditions are met, and a policy π_1 satisfies the property described in conditions 2) and 3). Then for any policy π_2∈ MinimaxRegret, if π_2 does not satisfy the mentioned property, then there exists a task-specifying reward function r_sp∈ R_E such that and U_r_sp(π_2)∈ F_r_sp. In this case Regret(π_2, r_sp)=π∈Πmax U_r_sp(π) - U_r_sp(π_2)≥inf S_r_sp - sup F_r_sp≥r_sp'∈ R_Emin {inf S_r_sp' - sup F_r_sp'}. However, for π_1, it holds for any task-specifying reward function r̂_sp∈ R_E that Regret(π_2, r̂_sp)≤sup S_r̂_sp - inf S_r̂_sp<r_sp'∈ R_Emin {inf S_r_sp' - sup F_r_sp'}, and it also holds for any mis-specifying reward function r_mis∈ R_E that Regret(π_2, _r_mis)=π∈Πmax U_r_mis(π) - U_r_mis(π_2) < r_sp'∈ R_Emin {inf S_r_sp - sup F_r_sp}. Hence, Regret(π_2, r_sp)< Regret(π_1, r_sp), contradicting π_1∈ MiniRegret. We complete the proof. Note that condition 2) in the theorem does not rule out the case where there exists some reward function r∈ R_E such that the U_r(·) is a constant, although, for such reward function, it cannot be determined whether a policy is successful or not based on U_r(·). If R_E does include such a reward function, its S_r and F_r can still be defined in an arbitrary way so that the conditions in Theorem <ref> can be satisfied. Theorem <ref>. Let the IRL loss be in the form of J_IRL(π, r):=U_r(π) -U_r(π_E) for some U_r(π). If r∈ Rmin π∈Πmax J_IRL(r_E, π) can reach Nash Equilibrium with a reward function set R_E and a policy set Π_E, then Π_E equals the set of solutions to MinimiaxRegret. The reward function set R_E and the policy set Π_E achieving Nash Equilibrium for r∈ Rmin π∈Πmax J_IRL(r_E, π) indicates that for any r∈ R_E, π∈Π_E, π∈π∈Πmax U_r(π) - U_r(π_E). Then Π_E will be the solution to π_P∈Πmax r∈ R_Emin {π_A∈Πmax U_r(π_A) - U_r(π_E)} - (U_r(π_P) - U_r(π_E)) because the policies in Π_E achieve zero regret. Then Lemma <ref> states that Π_E will also be the solution to π_P∈Πmax r∈ R_Emin {π_A∈Πmax U_r(π_A)} - U_r(π_P). We finish the proof. § APPROACH TO SOLVING MINIMAXREGRET In this section, we develop a series of theories that lead to two bounds of the Protagonist Antagonist Induced Regret. By using those bounds, we formulate objective functions for solving Imitation Learning problems with PAGAR. §.§ Protagonist Antagonist Induced Regret Bounds Our theories are inspired by the on-policy policy improvement methods in <cit.>. The theories in <cit.> are under the setting where entropy regularizer is not considered. In our implementation, we always consider entropy regularized RL of which the objective is to learn a policy that maximizes J_RL(π;r)=η_r(π) + ℋ(π). Also, since we use GAN-based IRL algorithms, the learned reward function r as proved by <cit.> is a distribution. Moreover, it is also proved in <cit.> that a policy π being optimal under r indicates that logπ≡ r≡𝒜_π. 
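Before moving to the gradient-based approach of the next section, note that for finite policy and reward sets the MinimaxRegret objective analyzed above can be evaluated directly by enumeration. The sketch below is illustrative only; U(r, π) is assumed to be any routine that evaluates U_r(π).

```python
from typing import Callable, Hashable, Iterable

def minimax_regret(
    policies: Iterable[Hashable],
    rewards: Iterable[Hashable],
    U: Callable[[Hashable, Hashable], float],
) -> Hashable:
    """Return argmin_{pi_P} max_{r in R_E} [max_{pi_A} U_r(pi_A) - U_r(pi_P)]."""
    policies = list(policies)
    rewards = list(rewards)
    # Best attainable utility per candidate reward (the antagonist's value).
    best = {r: max(U(r, pi) for pi in policies) for r in rewards}
    return min(
        policies,
        key=lambda pi_P: max(best[r] - U(r, pi_P) for r in rewards),
    )
```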
We omit the proof and let the reader refer to <cit.> for details. Although all our theories are about the relationship between the Protagonist Antagonist Induced Regret and the soft advantage function 𝒜_π, the equivalence between 𝒜_π and r allows us to use the theories to formulate our reward optimization objective functions. To start off, we denote the reward function to be optimized as r. Given the intermediate learned reward function r, we study the Protagonist Antagonist Induced Regret between two policies π_1 and π_2. Given a reward function r and a pair of policies π_1 and π_2, η_r(π_1) - η_r(π_2)=τ∼π_1𝔼[∑^∞_t=0γ^t 𝒜_π_2(s^(t), a^(t)) ] + τ∼π𝔼[∑^∞_t=0γ^t ℋ(π_2(·|s^(t)))] This proof follows the proof of Lemma 1 in <cit.> where RL is not entropy-regularized. For entropy-regularized RL, since 𝒜_π(s,a^(t))=s'∼𝒫(·|s,a^(t))𝔼[r(s, a^(t)) + γ𝒱_π(s') - 𝒱_π(s)], τ∼π_1𝔼[∑^∞_t=0γ^t 𝒜_π_2(s^(t), a^(t))] = τ∼π_1𝔼[∑^∞_t=0γ^t(r(s^(t+1), a^(t+1)) + γ𝒱_π_2(s^(t+1)) - 𝒱_π_2(s^(t)))] = τ∼π_1𝔼[∑^∞_t=0γ^t r(s^(t), a^(t)) -𝒱_π_2(s^(0))] = τ∼π_1𝔼[∑^∞_t=0γ^t r(s^(t), a^(t))] -s^(0)∼ d_0𝔼[𝒱_π_2(s^(0))] = τ∼π_1𝔼[∑^∞_t=0γ^t r(s^(t), a^(t))] -τ∼π_2𝔼[∑^∞_t=0γ^t r(s^(t), a^(t)) + ℋ(π_2(·|s^(t)))] = η_r (π_1) - η_r(π_2) - τ∼π_2𝔼[∑^∞_t=0γ^t ℋ(π_2(·|s^(t)))] = η_r (π_1) - η_r(π_2) - ℋ(π_2) Lemma <ref> confirms that τ∼π𝔼[∑^∞_t=0γ^t 𝒜_π(s^(t), a^(t))] = η_r(π) - η_r(π) + ℋ(π)=ℋ(π). We follow <cit.> and denote Δ𝒜(s)= a∼π_1(·|s)𝔼[𝒜_π_2(s,a)] - a∼π_2(·|s)𝔼[𝒜_π_2(s,a)] as the difference between the expected advantages of following π_2 after choosing an action respectively by following policy π_1 and π_2 at any state s. Although the setting of <cit.> differs from ours by having the expected advantage a∼π_2(·|s)𝔼[𝒜_π_2(s,a)] equal to 0 due to the absence of entropy regularization, the following definition and lemmas from <cit.> remain valid in our setting.  <cit.>, the protagonist policy π_1 and the antagonist policy π_2) are α-coupled if they defines a joint distribution over (a, ã)∈𝔸×𝔸, such that Prob(a ≠ã|s)≤α for all s.  <cit.> Given that the protagonist policy π_1 and the antagonist policy π_2 are α-coupled, then for all state s, |Δ𝒜(s)|≤ 2αamax|𝒜_π_2(s,a)|  <cit.> Given that the protagonist policy π_1 and the antagonist policy π_2 are α-coupled, then |s^(t)∼π_1𝔼[Δ𝒜(s^(t))] - s^(t)∼π_2𝔼[Δ𝒜(s^(t))]|≤ 4α(1- (1-α)^t) s,amax|𝒜_π_2(s,a)| Given that the protagonist policy π_1 and the antagonist policy π_2 are α-coupled, then s^(t)∼π_1 a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] - s^(t)∼π_2 a^(t)∼π_2𝔼[ 𝒜_π_2(s^(t),a^(t))] ≤ 2(1 - (1-α)^t)(s,a)max|𝒜_π_2(s,a)| The proof is similar to that of Lemma <ref> in <cit.>. Let n_t be the number of times that a^(t')∼π_1 does not equal a^(t')∼π_2 for t'< t, i.e., the number of times that π_1 and π_2 disagree before timestep t. Then for s^(t)∼π_1, we have the following. s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] ] = P(n_t=0) s^(t)∼π_1 n_t=0𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] ] + P(n_t > 0)s^(t)∼π_1 n_t > 0𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] ] The expectation decomposes similarly for s^(t)∼π_2. s^(t)∼π_2 a^(t)∼π_2𝔼[ 𝒜_π_2(s^(t),a^(t))] = P(n_t=0)s^(t)∼π_2 a^(t)∼π_2 n_t=0𝔼[ 𝒜_π_2(s^(t),a^(t))] + P(n_t>0)s^(t)∼π_2 a^(t)∼π_2 n_t > 0𝔼[ 𝒜_π_2(s^(t),a^(t))] When computing s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] ]-s^(t)∼π_2 a^(t)∼π_2𝔼[ 𝒜_π_2(s^(t),a^(t))], the terms with n_t=0 cancel each other because n_t=0 indicates that π_1 and π_2 agreed on all timesteps less than t. That leads to the following. 
s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] ]-s^(t)∼π_2 a^(t)∼π_2𝔼[ 𝒜_π_2(s^(t),a^(t))] = P(n_t > 0)s^(t)∼π_1 n_t > 0𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] ] - P(n_t>0)s^(t)∼π_2 a^(t)∼π_2 n_t > 0𝔼[ 𝒜_π_2(s^(t),a^(t))] By definition of α, the probability of π_1 and π_2 agreeing at timestep t' is no less than 1 - α. Hence, P(n_t > 0)≤ 1 - (1- α^t)^t. Hence, we have the following bound. |s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] ]-s^(t)∼π_2 a^(t)∼π_2𝔼[ 𝒜_π_2(s^(t),a^(t))]| = |P(n_t > 0)s^(t)∼π_1 n_t > 0𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] ] - P(n_t>0)s^(t)∼π_2 a^(t)∼π_2 n_t > 0𝔼[ 𝒜_π_2(s^(t),a^(t))]| ≤ P(n_t > 0)(|s^(t)∼π_1 a^(t)∼π_2 n_t≥ 0𝔼[𝒜_π_2(s^(t),a^(t))]| + |s^(t)∼π_2 a^(t)∼π_2 n_t > 0𝔼[ 𝒜_π_2(s^(t),a^(t))]| ) ≤ 2(1 - (1-α)^t)(s,a)max|𝒜_π_2(s,a)| The preceding lemmas lead to the proof for Theorem <ref> in the main text. Theorem <ref>. Suppose that π_2 is the optimal policy in terms of entropy regularized RL under r. Let α = smax D_TV(π_1(·|s), π_2(·|s)), ϵ = s,amax |𝒜_π_2(s,a^(t))|, and Δ𝒜(s)=a∼π_1𝔼[𝒜_π_2(s,a)] - a∼π_2𝔼[𝒜_π_2(s,a)]. For any policy π_1, the following bounds hold. |η_r(π_1) -η_r(π_2) - ∑^∞_t=0γ^ts^(t)∼π_1𝔼[Δ𝒜(s^(t))]| ≤ 2αγϵ/(1-γ)^2 |η_r(π_1)-η_r(π_2) - ∑^∞_t=0γ^ts^(t)∼π_2𝔼[Δ𝒜(s^(t))]| ≤ 2αγ(2α+1)ϵ/(1-γ)^2 We first leverage Lemma <ref> to derive Eq.<ref>. Note that since π_2 is optimal under r, Remark <ref> confirmed that ℋ(π_2)=-∑^∞_t=0γ^ts^(t)∼π_2𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))]]. η_r(π_1) -η_r(π_2) = (η_r(π_1) -η_r(π_2) - ℋ(π_2)) + ℋ(π_2) = τ∼π_1𝔼[∑^∞_t=0γ^t 𝒜_π_2(s^(t), a^(t)) ] + ℋ(π_2) = τ∼π_1𝔼[∑^∞_t=0γ^t 𝒜_π_2(s^(t), a^(t)) ]-∑^∞_t=0γ^ts^(t)∼π_2𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))]] = ∑^∞_t=0γ^ts^(t)∼π_1𝔼[a^(t)∼π_1𝔼[𝒜_π_2(s^(t),a^(t))] - a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))] ] + ∑^∞_t=0γ^t(s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))]]-s^(t)∼π_2𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))]] ) = ∑^∞_t=0γ^ts^(t)∼π_1𝔼[Δ𝒜(s^(t))] + ∑^∞_t=0γ^t(s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))]]-s^(t)∼π_2𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))]] ) We switch terms between Eq.<ref> and η_r(π_1)-η_r(π_2), then use Lemma <ref> to derive Eq.<ref>. |η_r(π_1) -η_r(π_2) - ∑^∞_t=0γ^ts^(t)∼π_1𝔼[Δ𝒜(s^(t))]| = |∑^∞_t=0γ^t(s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))]]-s^(t)∼π_2𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))]] )| ≤ ∑^∞_t=0γ^t · 2(s,a)max|𝒜_π_2(s,a)|· (1 - (1-α)^t)≤2αγ(s,a)max|𝒜_π_2(s,a)|/(1-γ)^2 Alternatively, we can expand η_r(π_2)-η_r(π_1) into Eq.<ref>. During the process, ℋ(π_2) is converted into -∑^∞_t=0γ^ts^(t)∼π_2𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t),a^(t))]]. η_r(π_1) -η_r(π_2) = (η_r(π_1) -η_r(π_2) - ℋ(π_2))+ℋ(π_2) = τ∼π_1𝔼[∑^∞_t=0γ^t 𝒜_π_2(s^(t), a^(t)) ]+ℋ(π_2) = ∑^∞_t=0γ^ts^(t)∼π_1𝔼[a^(t)∼π_1𝔼[𝒜_π_2(s^(t), a^(t)) ]]+ℋ(π_2) = ∑^∞_t=0γ^ts^(t)∼π_1𝔼[Δ A(s^(t)) +a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t)) ]]+ℋ(π_2) = ∑^∞_t=0γ^ts^(t)∼π_2𝔼[ a^(t)∼π_1𝔼[𝒜_π_2(s^(t),a^(t))]- a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t))] - Δ𝒜(s^(t))] + s^(t)∼π_1𝔼[ Δ𝒜(s^(t)) + a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t)) ]] - s^(t)∼π_2 a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t)) ] = ∑^∞_t=0γ^t(s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t) ) ]]- 2 s^(t)∼π_2𝔼[ a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t))]])+ ∑^∞_t=0γ^t(s^(t)∼π_2𝔼[a^(t)∼π_1𝔼[𝒜_π_2(s^(t),a^(t))] ] - (s^(t)∼π_2𝔼[Δ𝒜(s^(t))] - s^(t)∼π_1𝔼[Δ𝒜(s^(t))])) We switch terms between Eq.<ref> and η_r(π_1)-η_r(π_2), then base on Lemma <ref> and <ref> to derive the inequality in Eq.<ref>. 
|η_r(π_1)-η_r(π_2) - ∑^∞_t=0γ^ts^(t)∼π_2𝔼[Δ𝒜_π(s^(t),a^(t))]| = |η_r(π_1) -η_r(π_2) - ∑^∞_t=0γ^t(s^(t)∼π_2𝔼[a^(t)∼π_1𝔼[𝒜_π_2(s^(t),a^(t))] ] -s^(t)∼π_2𝔼[ a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t))]])| = |∑^∞_t=0γ^t (s^(t)∼π_2𝔼[Δ𝒜(s^(t))] - s^(t)∼π_1𝔼[Δ𝒜(s^(t))])- ∑^∞_t=0γ^t(s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t) ) ]]- s^(t)∼π_2𝔼[ a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t))]]) | ≤ |∑^∞_t=0γ^t (s^(t)∼π_2𝔼[Δ𝒜(s^(t))] - s^(t)∼π_1𝔼[Δ𝒜(s^(t))])| + |∑^∞_t=0γ^t(s^(t)∼π_1𝔼[a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t) ) ]]- s^(t)∼π_2𝔼[ a^(t)∼π_2𝔼[𝒜_π_2(s^(t), a^(t))]]) | ≤ ∑^∞_t=0γ^t ((1- (1-α)^t) (4αs,amax|𝒜_π_2(s,a)| + 2(s,a)max|𝒜_π_2(s,a)|)) ≤ 2αγ(2α+1)s,amax|𝒜_π_2(s,a)|/(1-γ)^2 It is stated in <cit.> that smax D_TV(π_2(·|s), π_1(·|s))≤α. Hence, by letting α:= smax D_TV(π_2(·|s), π_1(·|s)), Eq.<ref> and <ref> still hold. Then, we have proved Theorem <ref>. §.§ Objective Functions of Reward Optimization To derive J_R,1 and J_R,2, we let π_1=π_P and π_2=π_A. Then based on Eq.<ref> and <ref> we derive the following upper-bounds of η_r(π_P)-η_r(π_A). η_r(π_P) -η_r(π_A) ≤ ∑^∞_t=0γ^ts^(t)∼π_P𝔼[Δ𝒜(s^(t))] + 2αγ(2α+1)ϵ/(1-γ)^2 η_r(π_P) - η_r(π_A) ≥ ∑^∞_t=0γ^ts^(t)∼π_A𝔼[Δ𝒜(s^(t))] - 2αγϵ/(1-γ)^2 By our assumption that π_A is optimal under r, we have 𝒜_π_A≡ r <cit.>. This equivalence enables us to replace 𝒜_π_A's in Δ𝒜 with r. As for the 2αγ(2α+1)ϵ/(1-γ)^2 and 2αγϵ/(1-γ)^2 terms, since the objective is to maximize η_r(π_A)-η_r(π_B), we heuristically estimate the ϵ in Eq.<ref> by using the samples from π_P and the ϵ in Eq.<ref> by using the samples from π_A. As a result we have the objective functions defined as Eq.<ref> and <ref> where δ_1(s,a)=π_P(a^(t)|s^(t))/π_A(a^(t)|s^(t)) and δ_2=π_A(a^(t)|s^(t))/π_P(a^(t)|s^(t)) are the importance sampling probability ratio derived from the definition of Δ𝒜; C_1 ∝ - γα̂/(1-γ) and C_2∝γα̂/(1-γ) where α̂ is either an estimated maximal KL-divergence between π_A and π_B since D_KL≥ D_TV^2 according to <cit.>, or an estimated maximal D_TV^2 depending on whether the reward function is Gaussian or Categorical. We also note that for finite horizon tasks, we compute the average rewards instead of the discounted accumulated rewards in Eq.<ref> and <ref>. J_R,1(r;π_P, π_A):= τ∼π_A𝔼[∑^∞_t=0γ^t(δ_1(s^(t),a^(t))- 1)· r(s^(t),a^(t))] +C_1 (s,a)∼π_Amax|r(s,a)| J_R,2(r; π_P, π_A) := τ∼π_P𝔼[∑^∞_t=0γ^t(1 - δ_2(s^(t), a^(t))) · r(s^(t),a^(t))] + C_2 (s,a)∼π_Pmax|r(s,a)| Beside J_R, 1, J_R, 2, we additionally use two more objective functions based on the derived bounds. W J_R,r(r;π_A, π_P). By denoting the optimal policy under r as π^*, α^*=s∈𝕊max D_TV(π^*(·|s), π_A(·|s), ϵ^*=(s,a^(t))max|𝒜_π^*(s,a^(t))|, and Δ𝒜_A^*(s)=a∼π_A𝔼[𝒜_π^*(s,a)] - a∼π^*𝔼[𝒜_π^*(s,a)], we have the following. η_r(π_P) - η_r(π^*) = η_r(π_P) - η_r(π_A) + η_r(π_A) - η_r(π^*) ≤ η_r(π_P)-η_r(π_A) + ∑^∞_t=0γ^ts^(t)∼π_A𝔼[Δ𝒜_A^*(s^(t))] +2α^*γϵ^*/(1-γ)^2 = η_r(π_P)-∑^∞_t=0γ^ts^(t)∼π_A𝔼[a^(t)∼π_A𝔼[r(s^(t),a^(t))]] + ∑^∞_t=0γ^ts^(t)∼π_A𝔼[a^(t)∼π_A𝔼[𝒜_π^*(s^(t),a^(t))] - a^(t)∼π^*𝔼[𝒜_π^*(s^(t),a^(t))]] + 2α^*γϵ^*/(1-γ)^2 = η_r(π_P) -∑^∞_t=0γ^ts^(t)∼π_A𝔼[ a^(t)∼π^*𝔼[𝒜_π^*(s^(t),a^(t))]]+ 2α^*γϵ^*/(1-γ)^2 = τ∼π_P𝔼[∑^∞_t=0γ^t r(s^(t),a^(t))] -τ∼π_A𝔼[∑^∞_t=0γ^texp(r(s^(t),a^(t)))/π_A(a^(t)|s^(t))r(s^(t),a^(t))]+2α^*γϵ^*/(1-γ)^2 Let δ_3=exp(r(s^(t),a^(t)))/π_A(a^(t)|s^(t)) be the importance sampling probability ratio. 
It is suggested in <cit.> that instead of directly optimizing the objective function Eq.<ref>, optimizing a surrogate objective function as in Eq.<ref>, which is an upper-bound of Eq.<ref>, with some small δ∈(0, 1) can be much less expensive and still effective. J_R,3(r;π_P, π_A):=τ∼π_P𝔼[∑^∞_t=0γ^t r(s^(t),a^(t))] - τ∼π_A𝔼[∑^∞_t=0γ^t min(δ_3 · r(s^(t),a^(t)), clip(δ_3, 1-δ, 1 + δ)· r(s^(t),a^(t))) ] Alternatively, we let Δ𝒜_P^*(s)=a∼π_P𝔼[𝒜_π^*(s,a)] - a∼π^*𝔼[𝒜_π^*(s,a)]. The according to Eq.<ref>, we have the following. η_r(π_P) -η_r(π^*) ≤ ∑^∞_t=0γ^ts^(t)∼π_P𝔼[Δ𝒜_P^*(s^(t))] + 2α^*γ(2α^*+1)ϵ^*/(1-γ)^2 = ∑^∞_t=0γ^ts^(t)∼π_P𝔼[a^(t)∼π_P𝔼[𝒜_π^*(s^(t),a^(t))] - a^(t)∼π^*𝔼[𝒜_π^*(s^(t),a)^(t)]] + 2α^*γ(2α^*+1)ϵ^*/(1-γ)^2 Then a new objective function J_R,4 is formulated in Eq.<ref> where δ_4 = exp(r(s^(t),a^(t)))/π_P(a^(t)|s^(t)). J_R,4(r;π_P, π_A):=τ∼π_P𝔼[∑^∞_t=0γ^t r(s^(t),a^(t))] - τ∼π_P𝔼[∑^∞_t=0γ^t min(δ_4 · r(s^(t),a^(t)), clip(δ_4, 1-δ, 1 + δ)· r(s^(t),a^(t))) ] §.§ Incorporating IRL Algorithms In our implementation, we combine PAGAR with GAIL and VAIL, respectively. When PAGAR is combined with GAIL, the meta-algorithm Algorithm <ref> becomes Algorithm <ref>. When PAGAR is combined with VAIL, it becomes Algorithm <ref>. Both of the two algorithms are GAN-based IRL, indicating that both algorithms use Eq.<ref> as the IRL objective function. In our implementation, we use a neural network to approximate D, the discriminator in Eq.<ref>. To get the reward function r, we follow <cit.> and denote r(s,a) = log(π_A(a|s)/D(s,a) - π_A(a|s)) as mentioned in Section <ref>. Hence, the only difference between Algorithm <ref> and Algorithm <ref> is in the representation of the reward function. Regarding VAIL, since it additionally learns a representation for the state-action pairs, a bottleneck constraint J_IC(D)≤ i_c is added where the bottleneck J_IC is estimated from policy roll-outs. VAIL introduces a Lagrangian parameter β to integrate J_IC(D) - i_c in the objective function. As a result its objective function becomes J_IRL(π_A, r) + β· (J_IC(D) - i_c). VAIL not only learns the policy and the discriminator but also optimizes β. In our case, we utilize the samples from both protagonist and antagonist policies to optimize β as in line 10, where we follow <cit.> by using projected gradient descent with a step size δ In our implementation, depending on the difficulty of the benchmarks, we choose to maintain λ as a constant or update λ with the IRL loss J_IRL(π_A, r). When choosing to update λ, we introduce two hyperparameters, i_r and ξ. By regarding i_r as the target loss of J_IRL(π_A, r), we change the constraint r∈ R_E into J_IRL(π_A, r)≤ i_r. Then we update λ by λ:= λ·exp(ξ·(J_IRL(π_A, r) - i_r)) after every iteration. In some sense, keeping λ as a constant is equivalent to letting ξ:=0 Besides, we use PPO <cit.> to train all policies in Algorithm <ref> and <ref>. § EXPERIMENT DETAILS This section presents some details of the experiments and additional results. §.§ Experimental Details Network Architectures. Our algorithm involves a protagonist policy π_P, and an antagonist policy π_A. In our implementation, the two policies have the same structures. Each structure contains two neural networks, an actor network, and a critic network. When associated with GAN-based IRL, we use a discriminator D to represent the reward function as mentioned in Appendix <ref>. * Protagonist and Antagonist policies. 
We prepare two versions of actor-critic networks, an MLP version and a CNN version, for the Mujoco and Mini-Grid benchmarks, respectively. In the MLP version, the actor and critic networks have 3 layers. Each hidden layer has 100 neurons and a tanh activation function, and the output layer outputs the mean and standard deviation of the actions. In the CNN version, the actor and critic networks share 3 convolutional layers with 5, 2, and 2 filters respectively, 2×2 kernel size, and ReLU activation functions. Two fully connected networks are then used to implement the actor and critic, each with one hidden layer of 64 neurons. * Discriminator D for GAIL w/ PAGAR in Algorithm <ref>. We prepare two versions of discriminator networks, an MLP version and a CNN version, for the Mujoco and Mini-Grid benchmarks, respectively. The MLP version has 3 linear layers. Each hidden layer has 100 neurons and a tanh activation function, and the output layer uses the Sigmoid function to output the confidence. In the CNN version, the discriminator has 3 convolutional layers with 5, 2, and 2 filters respectively, 2×2 kernel size, and ReLU activation functions. The last convolutional layer is followed by a fully connected network with one hidden layer of 64 neurons and tanh activation; the output layer uses the Sigmoid function as the activation function. * Discriminator D for VAIL w/ PAGAR in Algorithm <ref>. We prepare two versions of discriminator networks, an MLP version and a CNN version, for the Mujoco and Mini-Grid benchmarks, respectively. The MLP version uses 3 linear layers to generate the mean and standard deviation of the embedding of the input. A two-layer fully connected network then takes a sampled embedding vector as input and outputs the confidence; its hidden layer has 100 neurons and a tanh activation function, and its output layer uses the Sigmoid function. In the CNN version, the discriminator has 3 convolutional layers with 5, 2, and 2 filters respectively, 2×2 kernel size, and ReLU activation functions. The last convolutional layer is followed by a two-layer fully connected network whose hidden layer has 64 neurons and a tanh activation function; the output layer uses the Sigmoid function as the activation function. Hyperparameters. The hyperparameters that appear in Algorithms <ref> and <ref> are summarized in Table <ref>. The values of the hyperparameters i_r and ξ explained in Appendix <ref> vary depending on the task and the IRL algorithm. Expert Demonstrations. Our expert demonstrations all achieve high rewards in the task. The number of trajectories and the average total reward per trajectory are listed in Table <ref>. §.§ Additional Results We report results on two additional Mujoco benchmarks, InvertedPendulum-v2 and Swimmer-v2, in Figure <ref>. Algorithm <ref> performs similarly to VAIL and GAIL in these two benchmarks. IQ-learn does not perform well in Walker2d-v2 but, in these two benchmarks, performs better than ours and the other baselines by a large margin. §.§ Ablation Study We study whether choosing a different reward function set R can influence the performance of Algorithm <ref>. Specifically, we use the Mini-Grid benchmarks to show the difference between using the Sigmoid function and a Categorical distribution in the output layer of the discriminator network.
When using the Sigmoid function, the confidence of D classifying a state-action pair as being sampled from the antagonist policy's roll-outs is not normalized over the action space, i.e., ∑_a∈𝔸D(s,a)≠ 1. When using a Categorical distribution, the confidences sum to one over the actions, i.e., ∑_a∈𝔸D(s,a)= 1. We fix the IRL algorithm to be the one used in GAIL. The results are shown in Figure <ref>. Changing the discriminator's output layer affects the training efficiency of both GAIL and our algorithm. However, our algorithm outperforms GAIL in both cases, using fewer samples to train the protagonist policy to attain high performance.
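To make the two ablation variants concrete, the following is a minimal PyTorch sketch of the two output heads, assuming a discrete action space and a discriminator that maps a state to one confidence value per action; the layer sizes follow the MLP description above, but the exact parameterization is our own illustrative choice rather than the released implementation.

```python
import torch
import torch.nn as nn

class MLPDiscriminator(nn.Module):
    """Shared MLP body; the head determines how D(s, a) is normalized."""
    def __init__(self, obs_dim, n_actions, hidden=100, categorical=False):
        super().__init__()
        self.categorical = categorical
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, action):
        logits = self.net(obs)                     # (batch, n_actions)
        if self.categorical:
            probs = torch.softmax(logits, dim=-1)  # sum_a D(s, a) = 1
        else:
            probs = torch.sigmoid(logits)          # each D(s, a) in (0, 1), unnormalized
        return probs.gather(1, action.long().unsqueeze(1)).squeeze(1)

# usage sketch: D = MLPDiscriminator(obs_dim=4, n_actions=7, categorical=True)
# conf = D(obs_batch, action_batch)  # confidence that (s, a) comes from the antagonist
```

The only difference between the two variants is the final nonlinearity: the softmax head enforces ∑_a D(s,a)=1 over the action space, while the sigmoid head treats each action independently.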
http://arxiv.org/abs/2306.01576v1
20230602144302
Non-canonical Higgs inflation
[ "Pooja Pareek", "Akhilesh Nautiyal" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph", "hep-th" ]
http://arxiv.org/abs/2306.03829v1
20230606161528
Small-Coupling Dynamic Cavity: a Bayesian mean-field framework for epidemic inference
[ "Alfredo Braunstein", "Giovanni Catania", "Luca Dall'Asta", "Matteo Mariani", "Fabio Mazza", "Mattia Tarabolo" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn", "cond-mat.stat-mech", "physics.data-an", "q-bio.PE" ]
Institute of Condensed Matter Physics and Complex Systems, Department of Applied Science and Technology, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Torino, Italy Italian Institute for Genomic Medicine (IIGM) and Candiolo Cancer Institute IRCCS, str. prov. 142, km 3.95, Candiolo (TO) 10060, Italy Collegio Carlo Alberto, Piazza Arbarello 8, 10122, Torino, Italy Departamento de Física Téorica I, Universidad Complutense, 28040 Madrid, Spain [email protected] Institute of Condensed Matter Physics and Complex Systems, Department of Applied Science and Technology, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Torino, Italy Collegio Carlo Alberto, Piazza Arbarello 8, 10122, Torino, Italy Institute of Condensed Matter Physics and Complex Systems, Department of Applied Science and Technology, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Torino, Italy Institute of Condensed Matter Physics and Complex Systems, Department of Applied Science and Technology, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Torino, Italy Institute of Condensed Matter Physics and Complex Systems, Department of Applied Science and Technology, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Torino, Italy A novel generalized mean field approximation, called the Small-Coupling Dynamic Cavity (SCDC) method, for Bayesian epidemic inference and risk assessment is presented. The method is developed within a fully Bayesian framework and accounts for non-causal effects generated by the presence of observations. It is based on a graphical model representation of the epidemic stochastic process and utilizes dynamic cavity equations to derive a set of self-consistent equations for probability marginals defined on the edges of the contact graph. By performing a small-coupling expansion, a pair of time-dependent cavity messages is obtained, which capture the probability of individual infection and the conditioning power of observations. In its efficient formulation, the computational cost per iteration of the SCDC algorithm is linear in the duration of the epidemic dynamics and in the number of contacts. The SCDC method is derived for the Susceptible-Infected (SI) model and straightforwardly applicable to other Markovian epidemic processes, including recurrent ones. It exhibits high accuracy in assessing individual risk on par with Belief Propagation techniques and outperforming heuristic methods based on individual-based mean-field approximations. Although convergence issues may arise due to long-range correlations in contact graphs, the estimated marginal probabilities remain sufficiently accurate for reliable risk estimation. Future work includes extending the method to non-Markovian recurrent epidemic models and investigating the role of second-order terms in the small coupling expansion of the observation-reweighted Dynamic Cavity equations. Small-Coupling Dynamic Cavity: a Bayesian mean-field framework for epidemic inference Mattia Tarabolo ===================================================================================== § INTRODUCTION In the past decade, the increasing availability of detailed epidemiological data and high-accuracy contact-network datasets triggered the study of individual-based epidemic inference problems. The interest has been further stimulated during the COVID-19 pandemic, by the possibility of performing massive epidemic surveillance and digital contact tracing via smartphone applications <cit.>. 
A variety of computational methods were proposed for tackling this class of inference problems, such as heuristic algorithms based on network centrality measures <cit.>, generalized mean-field approximations <cit.>, Monte Carlo methods <cit.>, and machine learning techniques exploiting tailored architectures of autoregressive and graph neural networks <cit.>. The leading technique for epidemic inference adopts a Bayesian framework, with a simple individual-based epidemic model as prior distribution and the sparse observation of positive/negative test results as evidence, and consists of an efficient message-passing algorithm based on a Belief Propagation (BP) approximation of the posterior distribution <cit.>. The method has proved to be extremely effective in estimating local marginal probabilities of the posterior distribution, reconstructing the infection state of unobserved individuals, and identifying patient zero and contagion channels. Furthermore, when integrated into the framework of digital contact tracing for COVID-19, the BP-based algorithms have been shown to provide a better assessment of individual risk and improve the mitigation impact of non-pharmaceutical intervention strategies, outperforming competing methods for various epidemic inference problems defined on contact networks <cit.>. The Belief Propagation approach to spatio-temporal epidemic trajectories can be classified as a generalized mean-field method because the epidemic trajectories of the neighbors of a given individual are assumed to be conditionally independent. This hypothesis is correct when the dynamical process takes place on a contact network without cycles, but the method proved to be very effective also on contact networks with cycles. This is the same assumption of the Dynamic Cavity (DC) approach <cit.>, which turns out to be equivalent to BP in the case of pure time-forward epidemic dynamics without observations. Moreover, for dynamic models with non-recurrent individual states, in the case of pure time-forward epidemic dynamics without observations, the BP/DC approach simplifies into a dynamic message passing technique that has been extensively used to study spreading processes on networks <cit.>. Even simpler individual-based mean-field (IBMF) methods, also known as quenched mean-field and N-intertwined models, which assume that the states of neighboring nodes are statistically independent, have been shown to provide moderately good approximations to time-forward epidemic dynamics <cit.>. Recently, the individual-based mean-field method has been employed to propose a very simple inference method where the observations of individual states are heuristically taken into account <cit.>. Despite the absence of a correct Bayesian framework and the simplicity of the approach, this method provides moderately good results for epidemic risk assessment. In this paper, a novel generalized mean field approximation for Bayesian epidemic inference and risk assessment is proposed. The starting point of the method is a graphical model representation of the epidemic stochastic process that allows for a convenient derivation of a set of dynamic cavity equations for functional probability marginals defined on the edges of the contact graph. 
In this representation, the cavity probability marginal on a directed edge from individual i to individual j is a function of two quantities: the trajectory of the individual state of i in the absence of interactions with j and a conjugate external field acting on i (which replaces the effect of the missing interaction terms in the cavity graph). By performing an expansion of the dynamic cavity equations for weak infection probabilities and truncating the expansion at the first order, a set of self-consistent equations for the average of these two quantities can be obtained. This Small-Coupling Dynamic Cavity (SCDC) method is expected to be less accurate than BP for epidemic trajectories, for which it represents a sort of weak-infectivity approximation. Despite most common heuristic methods based on centrality measures and individual-based mean-field approximations, the proposed method is developed within a fully Bayesian formulation and accounts for non-causal effects generated by the presence of observations. In the absence of observations, the conjugate fields responsible for non-causal dynamics vanish, and the individual-based mean-field method for time-forward dynamics can be recovered. For clarity, the SCDC method is developed in the case of the Susceptible-Infected model, for which efficient computational schemes can be easily devised. Using an efficient formulation based on a transfer-matrix technique, the SCDC method can be straightforwardly extended to more general Markovian epidemic processes, including individual recovery, latency, and recurrent infection (e.g. SIR, SEIR, SIS, SIRS models). The manuscript is organized as follows: Section <ref> presents the SCDC method and its derivation on the SI model; Section <ref> discusses a general efficient formulation of the algorithm that is easy to generalize to other epidemic models (further discussed in Section <ref>); results are presented in Section <ref>, concerning both estimates of epidemic outbreaks in the absence of observations and individual risk assessment from partial observations, on both irreversible and recurrent compartmental models; finally, Section <ref> draws the conclusions and highlights future directions to be investigated. § METHODOLOGY §.§ Definition of the stochastic epidemic model and observations The simplest non-trivial model employed in epidemic inference is the discrete-time stochastic Susceptible-Infected (SI) model. The application of the proposed method to more general epidemic models is discussed in Section <ref>. It is thus considered the dynamics of the SI model on a population of N individuals over a temporal window of T time steps (e.g. days). The daily contacts are directly encoded in the set of parameters specifying the infection transmission, with λ_ij^t being the infection probability along the directed edge from individual i to individual j at time t; conversely, we set λ_ij^t=λ_ji^t=0 if i and j are not in contact at time t. The epidemic state of the population at time t is represented by a binary array x^t=(x_1^t,…,x_N^t), with x_i^t=0 (resp. x_i^t=1) meaning that i is a Susceptible (resp. Infected) individual at time t. For the sake of generality we include in the model a small self-infection probability ε_i^t. The epidemic model is assumed to be Markovian, although this hypothesis can be relaxed. 
In the Markovian setup, the time evolution of the probability p_t[ x^t] that the population is in state x^t at time t is given in terms of the following master equation p_t+1[ x^t+1]=∑_ x^tW[ x^t+1| x^t] p_t[ x^t], with transition rates W[ x^t+1| x^t]=∏_iW_i(x_i^t+1| x^t) where W_i(x_i^t+1=1| x^t) = x_i^t+(1-x_i^t)[1-(1 - ε_i^t)∏_j(1- λ_ji^tx_j^t)], W_i(x_i^t+1=0| x^t) = (1-x_i^t)(1 - ε_i^t)∏_j(1- λ_ji^tx_j^t). It is convenient to introduce a set of local fields h_i^t=∑_jν_ji^tx_j^t, with ν_ji^t=log(1-λ_ji^t), such that ∏_j(1- λ_ji^tx_j^t)=∏_j(1-λ_ji^t)^ x_j^t=e^h_i^t, and use them to provide an equivalent description to the master equation (<ref>), based on a system of discrete-time stochastic maps x_i^t+1 = x_i^t+(1-x_i^t)y_i^t, in which y_i^t is a Bernoulli random variable with parameter 1-e^h_i^t(1 - ε_i^t), i.e. P[y_i^t|h_i^t]=Bernoulli(1-e^h_i^t(1 - ε_i^t)). The likelihood of the model can be defined by a set 𝒪 of statistically independent observations, each of them providing information about the state of a certain node i at the corresponding observation time. The most general scenario admits multiple observations on the same node i (at different times), encoded in the vectors O_i, and uncertainty on the outcome of the tests, the latter being eventually quantified by false positive rate f_FPR and/or false negative rates f_FNR. If node i is observed at time τ_o_i, the corresponding likelihood over its epidemic trajectory x_i = (x_i^0, …, x_i^T) reads p(O_i^τ_o_i|x_i)=(1-f_FPR)δ_x_i^τ_o_i,0+f_FNRδ_x_i^τ_o_i,1 if O_i^τ_o_i=0 f_FPR δ_x_i^τ_o_i,0+(1-f_FNR)δ_x_i^τ_o_i,1 if O_i^τ_o_i=1 where δ_x,y denotes the Kronecker symbol. The total likelihood over the full set of observations 𝒪 ={O_i }_i=1^Ncan be rewritten as p( 𝒪| X) = ∏_i p( O_i |x_i ) = ∏_i ∏_o_i∈O_i p( O_i^τ_o_i|x_i ), where each term in the last equation takes the form given in (<ref>). At the rightmost-hand side, the second product runs over all the observations O_i on node i. In the above equation the quantity X is a short-hand notation to indicate the trajectories of all nodes, namely X={x_1, x_2, …, x_N } = { x^0, x^1,…, x^T}. In the case of perfectly accurate tests, in which f_FPR=f_FNR=0, the effect of the observations is to enforce the dynamical trajectories to be compatible with the observed states. The posterior probability of the trajectory X can be expressed using Bayes' theorem as follows p( X|𝒪) = 1/p(𝒪)p( X) p( 𝒪| X ) = 1/p(𝒪)∏_i=1^N{ p(x_i^0)∏_t=0^T-1[∑_y_i^t∫ d h_i^tP[y_i^t|h_i^t]δ_x_i^t+1,x_i^t+(1-x_i^t)y_i^t δ(h_i^t-∑_jν_ji^tx_j^t)]} p( O_i |x_i ) ∝∏_i{p(x_i^0) ∏_t=0^T-1[∑_y_i^t∫ d h_i^tP[y_i^t|h_i^t]δ_x_i^t+1,x_i^t+(1-x_i^t)y_i^t δ(h_i^t-∑_jν_ji^tx_j^t) p( O_i^t| x_i^t ) ]} p( O_i^T| x_i^T ), where in the last expression it is assumed the simplifying notation that the conditional probability p( O_i^t|x_i^t )=1 also in the case in which there is no observation of the state of individual i at time t, i.e. ∄ o_i such that t=τ_o_i. The same will be assumed in the rest of the paper. Alternatively, by summing over the variables y_i^t and h_i^t for all i and t, Eq.(<ref>) becomes p( X|𝒪) ∝∏_i{p(x_i^0) ∏_t=0^T-1[ δ_x_i^t+1,x_i^t(1-ε_i^t) e^∑_k∈∂ iν_ki^t x_k^t + δ_x_i^t+1,1(1- (1-ε_i^t ) e^∑_k∈∂ iν_ki^t x_k^t ) ] p( O_i^t| x_i^t ) } p( O_i^T| x_i^T ), The Bayesian inference problem consists in evaluating marginals of the posterior distribution p( X|𝒪), such as the quantity p( x_i^t=x|𝒪) representing the posterior probability that individual i is in state x∈{0,1} at time t given the set of the available observations 𝒪. 
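As a concrete illustration of the generative model just defined, a minimal numpy sketch of a forward sampler for the SI prior is given below; such forward samples play the role of the ground-truth epidemic realizations used in the inference experiments later on. A dense array of infection probabilities is used for simplicity, and the function and variable names are our own rather than those of any released implementation.

```python
import numpy as np

def simulate_si(lam, eps, x0, rng=None):
    """Sample one trajectory of the discrete-time stochastic SI model.

    lam : array (T, N, N), lam[t, j, i] = infection probability lambda_{ji}^t
          from j to i at time t (zero where j and i are not in contact).
    eps : array (T, N), self-infection probabilities eps_i^t.
    x0  : array (N,) of 0/1, initial states (1 = infected).
    Returns X of shape (T+1, N) with X[t, i] = x_i^t.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, N = lam.shape[0], lam.shape[1]
    X = np.zeros((T + 1, N), dtype=int)
    X[0] = x0
    for t in range(T):
        x = X[t]
        # exp(h_i^t) with h_i^t = sum_j nu_{ji}^t x_j^t and nu = log(1 - lambda):
        exp_h = np.prod(np.where(x[:, None] == 1, 1.0 - lam[t], 1.0), axis=0)
        p_inf = 1.0 - (1.0 - eps[t]) * exp_h        # P[y_i^t = 1 | h_i^t]
        y = (rng.random(N) < p_inf).astype(int)
        X[t + 1] = x + (1 - x) * y                  # x_i^{t+1} = x_i^t + (1 - x_i^t) y_i^t
    return X
```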
The posterior distribution is, in general, intractable but it is the starting point for the derivation of approximate inference methods. §.§ The Dynamic Cavity Equations for the SI model with observations The posterior probability in Eq.(<ref>) can be interpreted as a graphical model for dynamical trajectories defined on the contact network, for which a Belief Propagation approach was first proposed in Refs. <cit.>. In particular, defining variable nodes grouping together pairs of dynamical trajectories for neighboring nodes, the short loops naturally introduced by the dynamical constraints can be disentangled, leading to the following set of Belief Propagation (BP) equations <cit.>, c_ij^BP[x_i,x_j | 𝒪] ∝ p(x_i^0) ∑_x_∂ i∖ j{[ ∏_k∈∂ i∖ j c^BP_ki[x_k,x_i|𝒪] ] p( O_i^T| x_i^T ) ×∏_t=0^T-1[ δ_x_i^t+1,x_i^t(1-ε_i^t) e^∑_k∈∂ iν_ki^tx_k^t +δ_x_i^t+1,1(1 - (1-ε_i^t)e^∑_k∈∂ iν_ki^tx_k^t) ] p(O_i^t |x_i^t) }, where x_∂ i∖ j= {x_k}_k ∈∂ i∖ j is the set of trajectories on neighbors of i except for j and c^BP_ij[x_i,x_j | 𝒪] are the BP messages depending on the trajectories of the nodes i and j (at given observation set 𝒪). The BP equations are exact when the underlying interaction graph (the time-independent projection of the contact network) is a tree and provide a rather good approximation of the posterior distribution on sparse loopy graphs. An equivalent formulation can be obtained starting from the posterior probability in Eq. (<ref>), employing a cavity argument by removing the node j (and the corresponding trajectory x_j=(x_j^0,…,x_j^T)) and deriving a set of equations for the marginal probability c_ij[x_i,s_i | 𝒪], that represents the probability of the pair of variable-field trajectories (x_i,s_i) on node i in the cavity graph. In this way (see Appendix <ref> for a full derivation from Eq. (<ref>)), the following observation-reweighted dynamic cavity (DC) equations can be obtained c_ij[x_i,s_i| 𝒪] = 1/𝒵_ij[𝒪]p(x_i^0) ∑_x_∂ i∖ j{[ ∏_k∈∂ i∖ j c_ki[x_k,ν_ikx_i|𝒪] ] p( O_i^T| x_i^T ) ×∏_t=0^T-1[ δ_x_i^t+1,x_i^t(1-ε_i^t) e^s_i^t + ∑_k∈∂ i∖ jν_ki^tx_k^t +δ_x_i^t+1,1(1 - (1-ε_i^t)e^s_i^t + ∑_k∈∂ i∖ jν_ki^tx_k^t) ] p(O_i^t |x_i^t) }, where the notation ν_ikx_i stands for the array (ν_ik^1 x_i^1,…, ν_ik^T x_i^T), and 𝒵_ij[𝒪] is a normalization term for the cavity marginals. While the dynamic cavity equations with variable-field trajectories were originally proposed only for time-forward binary spin dynamics in Refs. <cit.>, those in Eqs.(<ref>) also account for probabilistic reweighting due to the observations. Since the two representations in Eqs.(<ref>) and (<ref>) are equivalent, it is convenient to interpret s_i in the cavity marginal c_ij[x_i,s_i| 𝒪] as a proxy for the trajectory of the missing neighboring node j in the cavity graph (more precisely s_i∝ν_jix_j), essentially recovering in this way the BP cavity marginal c^BP_ij[x_i,x_j | 𝒪]. It appears then an arbitrary but natural choice to normalize the marginals by tracing over both arguments, by defining 𝒵_ij[ 𝒪] = ∑_x_i,s_i p(x_i^0) ∑_x_∂ i∖ j{[ ∏_k∈∂ i∖ j c_ki[x_k,ν_ikx_i|𝒪] ] ∏_t=0^T-1[ δ_x_i^t+1,x_i^t(1-ε_i^t)e^s_i^t + ∑_k∈∂ i∖ jν_ki^tx_k^t. .+δ_x_i^t+1,1(1 - (1-ε_i^t)e^s_i^t + ∑_k∈∂ i∖ jν_ki^tx_k^t) ] p(O_i^t |x_i^t) }p( O_i^T| x_i^T ). With this choice of normalization, the sum over s_i has to run only over field trajectories that are consistent with realizations of the epidemic trajectories ν_jix_j of the neighboring node j. 
It can be alternatively convenient to normalize the cavity marginal only over x_i at a fixed realization of the field s_i. This is done in Section <ref>, with a simplifying choice which however will have non-trivial consequences. Finally, completing the cavity and computing the total marginal over i gives the posterior marginal probability of one-site trajectories p_i(x_i| 𝒪) ∝ p(x_i^0) ∑_x_∂ i{[ ∏_k∈∂ i c_ki[x_k,ν_ikx_i|𝒪] ] ∏_t=0^T-1[ δ_x_i^t+1,x_i^t(1-ε_i^t)e^∑_k∈∂ i ν_ki^tx_k^t. .+δ_x_i^t+1,1(1 - (1-ε_i^t)e^∑_k∈∂ iν_ki^tx_k^t) ] p(O_i^t |x_i^t) }p( O_i^T| x_i^T ) . §.§ Small-coupling expansion It is convenient to express the cavity marginal in Eq. (<ref>) in terms of the conjugate field trajectory ĥ_i= (ĥ_i^1,…, ĥ_i^T), by introducing its Fourier transform (see Appendix  <ref>), which leads to the following expression for the dynamic cavity equations c_ij[x_i,ĥ_i | 𝒪] = p(x_i^0)/𝒵_ij[𝒪]∏_t=0^T-1{[ δ(ĥ_i^t- i)(1-ε_i^t)(δ_x_i^t+1,x_i^t-δ_x_i^t+1,1)+δ(ĥ_i^t)δ_x_i^t+1,1] p(O^t_i |x_i^t) } ×∏_k∈∂ i∖ j[∑_x_k∫ Dĥ_kc_ki[x_k,ĥ_k | 𝒪]e^- i∑_t(x_k^tν_ki^tĥ_i^t+x_i^tν_ik^tĥ_k^t)] p( O_i^T| x_i^T ), in which the choice of normalization 𝒵_ij[𝒪] will be discussed later. In the spirit of Plefka's approach <cit.> and high-temperature expansions <cit.>, one can perform a formal expansion of the exponential term. Since the argument of the exponential is linear in the parameters ν_ik^t and ν_ki^t, truncating the expansion at some finite order can be understood as a small-coupling approximation of the dynamic cavity equations, which in the case of epidemic processes corresponds to a small infectivity approximation (i.e. λ_ij^t ≪ 1 for all i,j and t). Truncating the Taylor series at the second order, we get c_ij[x_i,ĥ_i | 𝒪] = p(x_i^0)/𝒵_ij[𝒪]∏_t=0^T-1{[ δ(ĥ_i^t- i)(1-ε_i^t)(δ_x_i^t+1,x_i^t-δ_x_i^t+1,1)+δ(ĥ_i^t)δ_x_i^t+1,1] p(O^t_i |x_i^t) } ×∏_k∈∂ i∖ j[∑_x_k∫ Dĥ_kc_ki[x_k,ĥ_k | 𝒪]e^- i∑_t(x_k^tν_ki^tĥ_i^t+x_i^tν_ik^tĥ_k^t)] p( O_i^T| x_i^T ) ≈p( x_i^0)/𝒵_ij[𝒪]∏_t=0^T-1{[ δ(ĥ_i^t- i)(1-ε_i^t)(δ_x_i^t+1,x_i^t-δ_x_i^t+1,1)+δ(ĥ_i^t)δ_x_i^t+1,1] p(O^t_i |x_i^t)} ×∏_k∈∂ i∖ j∑_x_k∫ Dĥ_kc_ki[x_k,ĥ_k|𝒪] { 1- i∑_t(x_k^tν_ki^tĥ_i^t+x_i^tν_ik^tĥ_k^t). . +1/2∑_t,t'(x_k^tν_ki^t(- iĥ_i^t)+x_i^tν_ik^t(- iĥ_k^t))(x_k^t'ν_ki^t'(- iĥ_i^t')+x_i^t'ν_ik^t'(- iĥ_k^t'))} p( O_i^T| x_i^T ), where also the normalization constant is consistently approximated. In order to perform the averages over the dynamic cavity marginals c_ki, we choose the normalization 𝒵_ij[𝒪] in such a way that ∑_x_i∫ Dĥ_ic_ij[x_i,ĥ_i | 𝒪] = ∑_x_i c_ij[x_i,s_i=0 | 𝒪] = 1 which is equivalent to assume that the neighboring individual j stays in the susceptible state at all times. Choosing the normalization to be independent of the trajectory of individual j is highly advantageous for developing an approximation that does not explicitly rely on the epidemic trajectories of both i and j. However, this choice has the consequence that some particular epidemic trajectories of individual i, imposed by the observations, can only be explained by the statistical model if a non-zero self-infection probability is introduced. In Appendix <ref>, we provide an example of this potential issue that is simple enough to be discussed analytically and show how the presence of a self-infection probability effectively resolves it. 
It is now possible to explicitly perform the averages over the dynamic cavity marginals c_ki, obtaining c_ij[x_i,ĥ_i | 𝒪] ≈p(x_i^0)/𝒵_ij[𝒪]∏_t=0^T-1{[ δ(ĥ_i^t- i)(1-ε_i^t)(δ_x_i^t+1,x_i^t-δ_x_i^t+1,1)+δ(ĥ_i^t)δ_x_i^t+1,1] p(O^t_i |x_i^t)} p( O_i^T| x_i^T ) ×∏_k∈∂ i∖ j[1+ ∑_t(m_k∖ i^tν_ki^t(- iĥ_i^t)+x_i^tν_ik^tμ_k∖ i^t). .+1/2∑_t,t'[ν_ki^t(- iĥ_i^t)ν_ki^t'(- iĥ_i^t')C_k∖ i^tt'+ν_ki^t(- iĥ_i^t)R_k∖ i^tt'x_i^t'ν_ik^t'+ν_ki^t'(- iĥ_i^t')R_k∖ i^t'tx_i^tν_ik^t+B_k∖ i^tt'x_i^tν_ik^tx_i^t'ν_ik^t']] in which a set of one-time and two-time cavity quantities were defined by the relations m_k∖ i^t= m_k∖ i^t[𝒪] = ∑_x_k∫ Dĥ_kc_ki[x_k,ĥ_k|𝒪]x_k^t = ∑_x_kc_ki[x_k,s_k=0 |𝒪]x_k^t, μ_k∖ i^t =μ_k∖ i^t[𝒪] = ∑_x_k∫ Dĥ_kc_ki[x_k,ĥ_k | 𝒪](- iĥ_k^t) = ∑_x_k.δ/δ s_k^tc_ki[x_k,s_k |𝒪]|_s_k=0, and C_k∖ i^t t' = C_k∖ i^t,t'[𝒪] = ∑_x_k∫ Dĥ_kc_ki[x_k,ĥ_k |𝒪] x_k^tx_k^t' = ∑_x_kc_ki[x_k,s_k=0 | 𝒪]x_k^tx_k^t', R_k∖ i^tt' = R_k∖ i^tt'[𝒪] = ∑_x_k∫ Dĥ_kc_ki[x_k,ĥ_k | 𝒪]x_k^t(- iĥ_k^t') = .δ/δs_k^t'∑_x_kx_k^tc_ki[x_k,s_k|𝒪]|_s_k=0, B_k∖ i^tt'= B_k∖ i^tt'[𝒪] = ∑_x_k∫ Dĥ_kc_ki[x_k,ĥ_k| 𝒪](- iĥ_k^t)(- iĥ_k^t') = .δ^2/δ s_k^t∂ s_k^t'∑_x_kc_ki[x_k,s_k| 𝒪]|_s_k=0. Finally, we get the following approximated form of the equations c_ij[x_i,ĥ_i | 𝒪] = p(x_i^0)/𝒵̃_ij[𝒪]∏_t=0^T-1{[δ(ĥ_i^t- i)(1-ε_i^t)(δ_x_i^t+1,x_i^t-δ_x_i^t+1,1)+δ(ĥ_i^t)δ_x_i^t+1,1] p(O^t_i |x_i^t) } p( O_i^T| x_i^T ) ×∏_texp{∑_k∈∂ i∖ j(m_k∖ i^tν_ki^t(- iĥ_i^t)+x_i^tν_ik^tμ_k∖ i^t) . . +1/2∑_k∈∂ i∖ j∑_t'[-ν_ki^tĥ_i^tν_ki^t'ĥ_i^t'C_k∖ i^tt'- iĥ_i^tν_ki^tR_k∖ i^tt'x_i^t'ν_ik^t'- iĥ_i^t'ν_ki^t'R_k∖ i^t'tx_i^tν_ik^t+B_k∖ i^tt'x_i^tν_ik^tx_i^t'ν_ik^t']}, where 𝒵̃_ij[𝒪] is the normalization constant of the re-exponentiated form of the equations. The quantity m_k∖ i^t measures the average probability that node k is infected at time t in the absence of the interaction with i. Similarly, C_k∖ i^tt' represents the two-time autocorrelation function of node k in the cavity graph; the quantity R_k∖ i^tt' is the response function on node k at time t' to a perturbation due to an infinitesimal external field acting on node k at time t'. The remaining two quantities μ_k∖ i^t and B_k∖ i^tt' are of less intuitive interpretation, as they measure the mean and temporal correlations of fluctuations around the unperturbed single-site statistics in the cavity graph. A direct calculation of the quantity ∑_x_k∫ Dĥ_k (- iĥ_k^t)c_ki[x_k,ĥ_k|𝒪=∅] shows that, in the absence of observations, μ_k∖ i^t=0. Similarly, B_k∖ i^tt'=0 in the absence of observations. This result, due to the causality of the dynamical process, does not hold anymore when some observations are included. For notational convenience, the implicit dependence of all marginals and normalization constants on the set of observations 𝒪 will not be further reported in the following. §.§ Small-Coupling Dynamic Cavity (SCDC) approximation A straightforward mean-field approximation can be obtained neglecting the second-order terms in Eq. (<ref>) and focusing on the effects of the first-order ones. The expression of the dynamic cavity equations simplifies as follows c_ij[x_i,s_i] = p(x_i^0)/𝒵̃_ij∏_t=0^T-1[∫ dĥ_i^te^- is_i^tĥ_i^t{δ(ĥ_i^t- i)(δ(x_i^t+1-x_i^t)-δ(x_i^t+1-1))+δ(ĥ_i^t)δ(x_i^t+1-1)}. ×.e^∑_k∈∂ i∖ j(m_k∖ i^tν_ki^t(- iĥ_i^t)+x_i^tν_ik^tμ_k∖ i^t) p(O^t_i |x_i^t) ] ∝p(x_i^0)/𝒵̃_ij∏_t=0^T-1[ ∫ dĥ_i^te^- iĥ_i^t(s_i^t+∑_k∈∂ i∖ jm_k∖ i^tν_ki^t){δ(ĥ_i^t- i)(δ(x_i^t+1-x_i^t)-δ(x_i^t+1-1))+δ(ĥ_i^t)δ(x_i^t+1-1)}. . 
× e^∑_k∈∂ i∖ jx_i^tν_ik^tμ_k∖ i^t p(O^t_i |x_i^t)] ∝p(x_i^0)/𝒵̃_ij∏_t=0^T-1[{δ(x_i^t+1-x_i^t)e^s_i^t+∑_k∈∂ i∖ jm_k∖ i^tν_ki^t+δ(x_i^t+1-1)[1-e^s_i^t+∑_k∈∂ i∖ jm_k∖ i^tν_ki^t]}. . × e^∑_k∈∂ i∖ jx_i^tν_ik^tμ_k∖ i^t p(O^t_i |x_i^t)] c_ij[x_i,s_i] = p(x_i^0)/𝒵̃_ij∏_t=0^T-1{∫ dĥ_i^te^- iĥ_i^t(s_i^t+∑_k∈∂ i∖ jm_k∖ i^tν_ki^t)[δ(ĥ_i^t- i) (1-ε_i^t)(δ_x_i^t+1,x_i^t-δ_x_i^t+1,1) . . . . +δ(ĥ_i^t)δ_x_i^t+1,1]e^∑_k∈∂ i∖ jx_i^tν_ik^tμ_k∖ i^t p(O^t_i |x_i^t)} p( O_i^T| x_i^T ) ∝p(x_i^0)/𝒵̃_ij∏_t=0^T-1[{δ_x_i^t+1,x_i^t(1-ε_i^t)e^s_i^t+∑_k∈∂ i∖ jm_k∖ i^tν_ki^t+δ_x_i^t+1,1[1-(1-ε_i^t)e^s_i^t+∑_k∈∂ i∖ jm_k∖ i^tν_ki^t]}. . × e^∑_k∈∂ i∖ jx_i^tν_ik^tμ_k∖ i^t p(O^t_i |x_i^t)] p( O_i^T| x_i^T ) where, using the definitions in Eqs. (<ref>) and (<ref>), the two quantities m_i∖ j^t and μ_i∖ j^t turn out to satisfy the self-consistent equations m_i∖ j^t = ∑_x_ix_i^tp(x_i^0)/𝒵̃_ij∏_t'=0^T-1[{δ_x_i^t'+1,x_i^t'(1-ε_i^t')e^∑_k∈∂ i∖ jm_k∖ i^t'ν_ki^t'+δ_x_i^t'+1,1[1-(1-ε_i^t')e^∑_k∈∂ i∖ jm_k∖ i^t'ν_ki^t']}. . × e^∑_k∈∂ i∖ jx_i^t'ν_ik^t'μ_k∖ i^t' p(O^t'_i |x_i^t')] p( O_i^T| x_i^T ) and μ_i∖ j^t =∑_x_ip( O_i^T| x_i^T )p(x_i^0)/𝒵̃_ij ×∏_t'=0^T-1[(1-δ_t,t')δ_x_i^t'+1,1+(1-ε_i^t')(δ_x_i^t'+1,x_i^t'-δ_x_i^t'+1,1)e^∑_k∈∂ i∖ jm_k∖ i^t'ν_ki^t']e^x_i^t'ν_ik^t'μ_k∖ i^t'p(O_i^t'| x_i^t'). The normalization constant is chosen to ensure that the time-dependent quantity m_i∖ j^t represents the mean value of x_i^t in the cavity graph, that is 𝒵̃_ij =∑_x_i p(x_i^0)∏_t=0^T-1[{δ_x_i^t+1,x_i^t(1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^t+δ_x_i^t+1,1[1-(1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^t]}. . × e^∑_k∈∂ i∖ jx_i^tν_ik^tμ_k∖ i^t p(O^t_i |x_i^t)] p( O_i^T| x_i^T ). In addition, the total time-dependent marginal m_i^t of the posterior distribution on the full graph is given by m_i^t = 1/𝒵̃_i∑_x_ip( x_i^0) x_i^t∏_t=0^T-1[ {δ_x_i^t+1,x_i^t(1-ε_i^t)e^∑_k∈∂ im_k∖ i^tν_ki^t+δ_x_i^t+1,1[1-(1-ε_i^t)e^∑_k∈∂ im_k∖ i^tν_ki^t]}. . × e^∑_k∈∂ ix_i^tν_ik^tμ_k∖ i^t p(O_i^t| x_i^t)] p( O_i^T| x_i^T ) with 𝒵̃_i = ∑_x_ip(x_i^0)∏_t=0^T-1[ {δ_x_i^t+1,x_i^t(1-ε_i^t)e^∑_k∈∂ im_k∖ i^tν_ki^t+δ_x_i^t+1,1[1-(1-ε_i^t)e^∑_k∈∂ im_k∖ i^tν_ki^t]}. . × e^∑_k∈∂ ix_i^tν_ik^tμ_k∖ i^t p(O_i^t| x_i^t)] p( O_i^T| x_i^T ). Equations (<ref>) and (<ref>) represent a set of self-consistent equations defining a non-causal dynamic mean-field approximation that, in the following, we will refer to as the Small-Coupling Dynamic Cavity (SCDC) method. The dynamical equations are of mean-field type since correlations are neglected, but in the presence of observations, they describe a non-causal dynamical process. Because of the cavity construction, the fundamental unknown of the equations, the one-time cavity marginals m_i∖ j^t and the one-time cavity fields μ_i∖ j^t, are defined by means of local self-consistent conditions, which can be implemented using a message-passing update scheme. A computational bottleneck of Eqs. (<ref>)-(<ref>) is represented by the partial trace over single-site trajectories x_i, that requires O(2^T) operations, meaning that a complete update of all cavity quantities requires O(2|E|T 2^T), where |E| is the total number of non-zero weighted directed edges on the interaction graph. An efficient algorithmic implementation of the SCDC equations is proposed in the next Section. It exploits a transfer-matrix approach to perform the trace over the trajectory x_i keeping fixed all quantities {m_k∖ i^t} and {μ_k∖ i^t} for all t and k∈∂ i∖ j, which play the role of the parameters in a “temporal” one-dimensional discrete probabilistic model defined on node i. 
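Before turning to the efficient implementation, the overall structure of the resulting message-passing scheme can be sketched as follows. This is a bare Python skeleton with our own names; the per-edge routine update_edge stands for either the direct O(2^T) trace or the transfer-matrix recursion of the next section.

```python
import numpy as np

def scdc_fixed_point(edges, T, update_edge, tol=1e-6, max_iter=1000):
    """Iterate the SCDC self-consistent equations to a fixed point.

    edges       : list of directed edges (i, j) with nonzero contact weight.
    update_edge : callable (i, j, m, mu) -> (m_new, mu_new) returning the updated
                  arrays m_{i\\j}^t (length T+1) and mu_{i\\j}^t (length T),
                  computed from the current messages on edges (k, i), k != j.
    Returns the dictionaries of cavity marginals m and cavity fields mu.
    """
    m  = {e: np.full(T + 1, 0.5) for e in edges}   # generic initialization
    mu = {e: np.zeros(T) for e in edges}           # mu vanishes without observations
    for _ in range(max_iter):
        err = 0.0
        for (i, j) in edges:
            m_new, mu_new = update_edge(i, j, m, mu)
            err = max(err,
                      np.abs(m_new - m[(i, j)]).max(),
                      np.abs(mu_new - mu[(i, j)]).max())
            m[(i, j)], mu[(i, j)] = m_new, mu_new
        if err < tol:                               # all messages converged
            break
    return m, mu
```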
§ EFFICIENT FORMULATION OF THE SCDC EQUATIONS The starting point of the derivation is the cavity normalization constant in Eq. (<ref>)-(<ref>), that can be written starting from Eq. (<ref>) as 𝒵̃_ij = ∑_x_ip(x_i^0) ∏_t=0^T-1M_x_i^tx_i^t+1^i∖ j p(O_i^T| x_i^T) in which p(O_i^T| x_i^T)=1 if there is no observation at the final time and the “transfer matrix” M_x^tx^t+1^i∖ j is defined as follows M_x_i^tx_i^t+1^i∖ j =([ M_t,00^i∖ j M_t,01^i∖ j; M_t,10^i∖ j M_t,11^i∖ j ]) = ([ (1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^tp(O_i^t| 0) [1-(1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^t] p(O_i^t| 0); 0 e^∑_k∈∂ i∖ jν_ik^tμ_k∖ i^t p(O_i^t| 1) ]) where again it is assumed that p(O_i^t| x_i^t)=1 if there is no observation on i at time t. The probability that an individual i is infected at time t in the cavity graph is m_i∖ j^t = ∑_x_i^tρ_→ t^i∖ j(x_i^t)x_i^tρ_t←^i∖ j(x_i^t)/∑_x_i^tρ_→ t^i∖ j(x_i^t)ρ_t←^i∖ j(x_i^t) = ρ_→ t^i∖ j(1)ρ_t←^i∖ j(1)/ρ_→ t^i∖ j(1)ρ_t←^i∖ j(1)+ρ_→ t^i∖ j(0)ρ_t←^i∖ j(0), where ρ_→ t^i∖ j(x_i^t) and ρ_t ←^i∖ j(x_i^t) are single-site “temporal” messages that satisfy the recursive equations ρ_→t^i∖j(x_i^t) = ∑_x_i^t-1ρ_→t-1^i∖j(x_i^t-1)M_x_i^t-1x_i^t^i∖j for t ∈ {1, …, T} ρ_t←^i∖j(x_i^t) = ∑_x_i^t+1ρ_t+1←^i∖j(x_i^t+1)M_x_i^tx_i^t+1^i∖j for t ∈ {0, …, T-1}, and the initial (resp. terminal) conditions are given by ρ_→ 0^i∖ j(x_i^0) = p(x_i^0) and ρ_T←^i∖ j(x_i^T) = p(O_i^T| x_i^T). The one-time cavity fields μ_i∖ j^t can be computed as follows μ_i∖ j^t = ∑_x_ip(x_i^0)/𝒵̃_ij(1-ε_i^t)(δ_x_i^t+1,x_i^t-δ_x_i^t+1,1) e^∑_k∈∂ i∖ j(m_k∖ i^tν_ki^t+x_i^tν_ik^tμ_k∖ i^t) p(O_i^t| x_i^t) ×∏_t'≠ t{δ_x_i^t'+1,x_i^t'(1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^t'ν_ki^t'+δ_x_i^t'+1,1[1-(1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^t'ν_ki^t']} . × e^∑_k∈∂ i∖ jx_i^t'ν_ik^t'μ_k∖ i^t'p(O_i^t'| x_i^t') ] p( O_i^T| x_i^T ) = ∑_x_i^t,x_i^t+1ρ_→ t^i∖ j(x_i^t)(1-ε_i^t)(δ_x_i^t+1,x_i^t-δ_x_i^t+1,1)e^∑_k∈∂ i∖ j(m_k∖ i^tν_ki^t+x_i^tν_ik^tμ_k∖ i^t) p(O_i^t| x_i^t) ρ_t+1←^i∖ j(x_i^t+1)/∑_x_i^t,x_i^t+1ρ_→ t^i∖ j(x_i^t)M_x_i^tx_i^t+1^i∖ jρ_t+1←^i∖ j(x_i^t+1) = ρ_→ t^i∖ j(0) (1-ε_i^t) e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^tp(O_i^t| 0 )(ρ_t+1←^i∖ j(0)-ρ_t+1←^i∖ j(1))/∑_x_i^t,x_i^t+1ρ_→ t^i∖ j(x_i^t)M_x_i^tx_i^t+1^i∖ jρ_t+1←^i∖ j(x_i^t+1) = ρ_→ t^i∖ j(0)M_t,00^i∖ j(ρ_t+1←^i∖ j(0) - ρ_t+1←^i∖ j(1))/∑_x_i^t,x_i^t+1ρ_→ t^i∖ j(x_i^t)M_x_i^tx_i^t+1^i∖ jρ_t+1←^i∖ j(x_i^t+1) = ρ_→ t^i∖ j(0)M_t,00^i∖ j(ρ_t+1←^i∖ j(0) - ρ_t+1←^i∖ j(1))/∑_x_i^t,ρ_→ t^i∖ j(x_i^t)ρ_t←^i∖ j(x_i^t). The number of operations necessary for the update of a single-time cavity message is now of O(1) when the single-site “temporal” messages ρ_→ s^i∖ j(x_i^s) and ρ_s ←^i∖ j(x_i^s) for s=t,t+1 are available. The latter quantities are computed by means of time-forward and time-backward update rules from the current set of cavity messages and require O(4T) operations. In summary, a complete update of the cavity marginals m_i∖ j^t and cavity fields μ_i∖ j^t, for every directed edge and time step, requires O(4|E|T). Notice that from Eq. (<ref>), μ_i∖ j^t is zero if the time-backward cavity messages at time t+1 are equal. It is shown in Section <ref> and in Appendix  <ref> that this condition is satisfied when no observations are present at later times and it leads to a pure time-forward reduction of the SCDC equations. Furthermore, because of the non-recurrent property of the SI model, it is possible to derive an alternative efficient formulation of the SCDC equations exploiting the infection-time representation: this is explained in detail in Appendix <ref>. 
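As an illustration, a minimal numpy sketch of this per-edge forward–backward computation for the SI model is given below; it can serve as the update_edge routine in the skeleton sketched earlier, once the sums a_t = ∑_{k∈∂ i∖ j} m_{k∖ i}^t ν_{ki}^t and b_t = ∑_{k∈∂ i∖ j} ν_{ik}^t μ_{k∖ i}^t have been assembled from the current neighbor messages. Array layouts and names are our own choices, not the authors' code.

```python
import numpy as np

def si_edge_update(p0, a, b, obs_lik, eps):
    """Transfer-matrix update of one directed cavity edge (i -> j) of the SI model.

    p0      : array (2,), prior p(x_i^0) over {S=0, I=1}.
    a, b    : arrays (T,), the neighbor sums a_t and b_t defined in the text.
    obs_lik : array (T+1, 2), obs_lik[t, x] = p(O_i^t | x_i^t = x) (1 if no test).
    eps     : array (T,), self-infection probabilities eps_i^t.
    Returns (m, mu): cavity marginals m_{i\\j}^t (length T+1) and
    cavity fields mu_{i\\j}^t (length T, one per transition).
    """
    T = len(a)
    stay_S = (1.0 - eps) * np.exp(a)          # probability of staying susceptible
    M = np.zeros((T, 2, 2))                   # M[t] = M^{i\j}_{x_i^t, x_i^{t+1}}
    M[:, 0, 0] = stay_S * obs_lik[:T, 0]
    M[:, 0, 1] = (1.0 - stay_S) * obs_lik[:T, 0]
    M[:, 1, 1] = np.exp(b) * obs_lik[:T, 1]   # I -> I (no recovery in SI)
    # forward and backward "temporal" messages rho
    rho_f = np.zeros((T + 1, 2)); rho_f[0] = p0
    rho_b = np.zeros((T + 1, 2)); rho_b[T] = obs_lik[T]
    for t in range(1, T + 1):
        rho_f[t] = rho_f[t - 1] @ M[t - 1]
    for t in range(T - 1, -1, -1):
        rho_b[t] = M[t] @ rho_b[t + 1]
    Z = (rho_f * rho_b).sum(axis=1)           # sum_x rho_f[t, x] rho_b[t, x]
    m = rho_f[:, 1] * rho_b[:, 1] / Z
    mu = rho_f[:T, 0] * M[:, 0, 0] * (rho_b[1:, 0] - rho_b[1:, 1]) / Z[:T]
    return m, mu
```

The same routine carries over to the models of the next section by replacing the 2×2 matrix with the corresponding 3×3 transfer matrix; for long horizons the ρ messages can also be rescaled step by step (propagating the scale factors into Z) to avoid numerical underflow.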
§ GENERALIZATION TO OTHER EPIDEMIC MODELS The method generalizes directly to models with a higher number of individual states and transitions. In particular, for the Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Removed (SIR) models, there is only one additional transition where an infected individual i can recover at time t with probability r_i^t, with the result that the individual i is either susceptible again (for the SIS model) or in a state of acquired immunity (for the SIR model). A further generalization can be made for the Susceptible-Infected-Removed-Susceptible (SIRS), where each recovered individual i can return to the susceptible state at time t due to loss of immunity with probability σ_i^t. For the SIS model, the only difference with the method described in the previous sections is in the expression of the 2× 2 transfer matrix, which is now given by M_x_i^tx_i^t+1^i∖ j = ([ M_t,00^i∖ j M_t,01^i∖ j; M_t,10^i∖ j M_t,11^i∖ j ]) = ([ (1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^tp(O_i^t| 0) [1-(1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^t] p(O_i^t| 0); r_i^t e^∑_k∈∂ i∖ jμ_k∖ i^tν_ik^tp(O_i^t| 1) (1-r_i^t) e^∑_k∈∂ i∖ jν_ik^tμ_k∖ i^t p(O_i^t| 1) ]) In the case of the SIR model, each individual i can be in three possible states x_i^t∈{S,I,R}, but the derivation done for two-state models can be repeated almost straightforwardly (see Appendix <ref> for details). It is still necessary to introduce the marginals m_i∖ j^t and μ_i∖ j^t, defined as follows m_i∖ j^t = ρ_→ t^i ∖ j (I)ρ_t ←^i ∖ j (I)/∑_x_i^tρ_→ t^i ∖ j (x_i^t)ρ_t ←^i ∖ j (x_i^t) μ_i∖ j^t = ρ_→ t^i ∖ j (S) (1-ε_i^t)e^∑_k ∈∂ i ∖ j m_k∖ i^tν_ki^tp(𝒪_i^t|S) (ρ_t+1 ←^i ∖ j(S) - ρ_t+1 ←^i ∖ j(I) )/∑_x_i^tρ_→ t^i ∖ j (x_i^t)ρ_t ←^i ∖ j (x_i^t), where the quantities ρ_→ t^i∖ j(x_i^t) and ρ_t ←^i∖ j(x_i^t) satisfy a set of equations analogous to (<ref>), with a 3× 3 matrix M_x_i^tx_i^t+1^i∖ j given by M_x_i^tx_i^t+1^i∖ j = ([ (1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^tp(O_i^t| S) [1-(1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^t] p(O_i^t| S) 0; 0 (1-r_i^t)e^∑_k ∈∂ i ∖ jν_ik^tμ_k∖ i^tp(𝒪_i^t|I) r_i^te^∑_k ∈∂ i ∖ jν_ik^tμ_k∖ i^tp(𝒪_i^t|I); 0 0 p(𝒪_i^t|R) ]). The SIRS model differs from the SIR model only for the 3× 3 transfer matrix, which is given by M_x_i^tx_i^t+1^i∖ j = ([ (1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^tp(O_i^t| S) [1-(1-ε_i^t)e^∑_k∈∂ i∖ jm_k∖ i^tν_ki^t] p(O_i^t| S) 0; 0 (1-r_i^t)e^∑_k ∈∂ i ∖ jν_ik^tμ_k∖ i^tp(𝒪_i^t|I) r_i^te^∑_k ∈∂ i ∖ jν_ik^tμ_k∖ i^tp(𝒪_i^t|I); σ_i^tp(𝒪_i^t|R) 0 ( 1 - σ_i^t)p(𝒪_i^t|R) ]). As long as further compartments are included with transitions being parametrized by individual-based rates, generalization of the above construction follows straightforwardly (e.g. SEIR and SEIRS models). § RESULTS In this Section, we provide numerical results to highlight the operation and capabilities of this method. We first analyze the quality of the approximation for time-forward dynamics obtained in the absence of observations; then we effectively demonstrate the role of the cavity fields μ^t_i∖ j in the presence of observations; finally, we evaluate the performances of the SCDC method in various instances of epidemic inference, both on synthetic and real-world contact networks. §.§ Time-forward dynamics Causality-breaking is a consequence of the existence of observations at later times, that have to be taken into account in the mathematical model by a flux of information flowing backward in time and conditioning the whole history of the process. 
This property reflects in the existence of non-trivial values for the one-time cavity fields μ_i∖ j^t. On the other hand, when no observation is present it is possible to show that all the cavity fields μ_i∖ j^t vanish and, consequently, one can recover the usual causal time-forward mean-field dynamics. To prove this, it is convenient to start from a particular form of the update equations for the cavity marginals m_i∖ j^t (see Appendix <ref> for a derivation), m_i∖ j^t = m_i∖ j^t-1+(1-m_i∖ j^t-1) { 1-(1 - ε_i^t-1) e^∑_k∈∂ i∖ jm_k∖ i^t-1ν_ki^t-1}ρ_t←^i∖ j(1)/(1 - ε_i^t-1)e^∑_k∈∂ i∖ jm_k∖ i^t-1ν_ki^t-1ρ_t←^i∖ j(0)+{ 1-(1 - ε_i^t-1)e^∑_k∈∂ i∖ jm_k∖ i^t-1ν_ki^t-1}ρ_t←^i∖ j(1), where the messages ρ_t←^i∖ j(x_i^t) represent the (non-normalized) backward probability that state x_i^t ∈{0,1} given the dynamic constraints and the observations in the future. In the absence of observations (on all nodes at all times t'≥ t) the backward probability is balanced, i.e. ρ_t←^i∖ j(0)=ρ_t←^i∖ j(1), and Eq. (<ref>) reduces to time-forward mean-field equations, m_i∖ j^t = m_i∖ j^t-1 +(1-m_i∖ j^t-1)[1 - (1 - ε_i^t-1)e^∑_k∈∂ i∖ jm_k∖ i^t-1ν_ki^t-1], and for the total marginals m_i^t m_i^t = m_i^t-1+(1-m_i^t-1)[ 1-(1 - ε_i^t-1)e^∑_j∈∂ im_j∖ i^t-1ν_ji^t-1]. It is possible to verify numerically that, in the absence of observations, the SCDC algorithm in Eqs. (<ref>) and (<ref>) always converges to the same result obtained by running the time-forward Eqs. (<ref>). An intuitive form for the discrete-time IBMF dynamics, obtained by assuming independence of individual marginal probabilities of being infected, is given by the equations <cit.>, m_i^t = m_i^t-1+(1-m_i^t-1)[ 1- (1-ε_i^t-1) ∏_j∈∂ i(1-λ_ji^t-1 m_j^t-1)]. For a densely-connected graph (for which m_j∖ i^t≈ m_j^t) with small infection probabilities (λ_ji^t ≪ 1), equations (<ref>) and (<ref>) reduce to the same expression. Figure <ref> illustrates the quality of the approximation obtained using Eqs. (<ref>)-(<ref>) for studying purely time-forward SI dynamics, in the absence of observations. The simulations were performed on well-known classes of static graphs, i.e. contact graphs where two individuals are in contact at all times or at no time. Comparisons are shown between the Small-Coupling Dynamic Cavity method (SCDC), the Belief Propagation (BP) algorithm, and individual-based mean-field equations (IBMF, corresponding to Eqs. (<ref>)). As a reference, the results obtained with numerical sampling from 10^4 realizations of the exact time-forward Monte Carlo dynamics of the SI model are also reported. All methods based on mean-field approximations tend to overestimate the number of infected individuals in cases in which the assumed factorization of probabilities is not exact. The BP algorithm is exact on trees (Bethe Lattice) and very accurate on sparse random graphs, where both SCDC and IBMF instead considerably overestimate the number of infected individuals. The performances of all methods are good on dense random graphs and much worse on graphs with spatial structure, such as proximity random graphs (see Sec. <ref> for details of construction). In all cases under study, the SCDC approximation gives consistently better results than those obtained using IBMF. §.§ Effects of backward messages When at least one observation is included as evidence, the μ cavity fields are non-zero and cause a time-backward propagation of information which changes the probabilistic weight of the epidemic trajectories. 
In order to better understand how the method behaves in the presence of observations, we checked on which edges the absolute value of the μ cavity messages is non-vanishing when one or more observations are considered. In particular, we sampled a single epidemic outbreak from a uniform SI model with λ=0.19, on a contact graph built by randomly adding some edges between nodes of a tree. Figure <ref> (a) shows how the fields μ_i∖ j^t propagate into the contact graph up to three times before the observation. For each edge (i,j) a thick line is plotted if one of the two messages μ_i∖ j^t or μ_j∖ i^t is non-vanishing. A thicker line is plotted if both of them are non-vanishing. The plots are shown both for a single observation (top) and two observations (bottom). The plot on the right of Figure <ref> (b) shows the inference accuracy of the method for the two cases. We can see that adding an observation greatly increases the performance. In particular, the prediction is improved mostly on the branches of the contact tree where the new observation produces the propagation of the cavity fields μ. It is clear that observations lead to the activation of the μ fields, which then propagate back in time away from the observed nodes. A more intuitive probabilistic interpretation of the role of μ cavity fields in propagating the information obtained from observations can be obtained by monitoring the temporal behavior of the (normalized) time-forward and time-backward messages ρ_→ t^i ∖ j and ρ_t←^i ∖ j. Consider a realization of the SI model taking place on a small tree, as displayed in Figure <ref> (left), in which the root node gets infected at time t=10. An observation of the state of the root node at a time t_o introduces a source of information that affects the temporal behavior of the messages ρ_→ t^i ∖ j and ρ_t←^i ∖ j for all directed edges (i,j) at all times. In particular, Figure <ref> shows the time-backward messages ρ_t←^i ∖ j as function of time on a set of edges for t_o=8 (center) and t_o=12 (right). The observation of a susceptible node (center) implies that at all times before the observation, the message emerging from that node is exactly zero. Moving away from the observed node the time-backward probability is non-zero (and monotonically increasing with the spatial distance) but monotonically decreasing with time distance from the observation. If instead the root node is observed in the infected state, the time-backward message shows an instantaneous jump to 1 at the time of observation and gradually decreases at earlier times, as the time-backward probability of being infected for the root node decreases. Moving away from the root, the messages increase as time proceeds backward indicating that surrounding nodes might have caused the infection of the observed one. Clearly, the (normalized) time-backward messages are exactly equal to 0.5 when no information is available, i.e. at later time steps compared to the observation. Time-forward messages ρ_→ t^i ∖ j are not analyzed in detail but they exhibit a similar, though more intuitive, phenomenology. §.§ Inference performance We consider a typical risk assessment scenario of epidemic inference in which, given the network of contacts and some observations made on an epidemic realization with one initially infected individual, one has to find the probability of each individual being infected at the final time. 
The simulations of the epidemic realizations according to the SI model are performed using the EpiGen python package <cit.> on both synthetic and real-world contact networks. The observations are performed on a random subset of the population at the final time. Once the individual probability of being infected at the final time is estimated with an epidemic inference method, the knowledge of the ground truth provided by the corresponding epidemic realization allows to compute a ROC curve of true infected individuals vs. false infected individuals. The area under the ROC curve (AUC) represents an estimate of the probability of correct classification of the individual infection states. The inference through SCDC is carried out using the infection-time representation (discussed in Appendix <ref>), and its performances are evaluated in comparison with two well-established methods for distributed epidemic inference, the Simple Mean Field (SMF) method <cit.> and the Belief Propagation (BP) algorithm <cit.>. For both BP and SCDC methods, the individual probability of being infected at the final time is computed from the corresponding total marginals once the message passing algorithm has reached convergence, i.e. the error on the cavity marginals decreases under a predefined tolerance threshold. In some cases, BP and SCDC do not reach convergence in a reasonable number of iterations (a few thousand in the case of the epidemic instances under study). The lack of convergence can be due to a relevant role played by loop structures and long-range correlations. In such cases, the probability marginals are computed taking an average over a sufficiently large number (up to hundreds) of iterations of the message passing update. The Simple Mean Field (SMF) inference method introduced in <cit.> is instead an inference method based on the IBMF approximation for the SI dynamics in which the information provided by observations of susceptible and infected individuals is taken into account by introducing some specific constraints on the time-forward dynamics (see Ref.<cit.> for a description in the more general SIR model). In Figure <ref> the results for different kinds of synthetic contact graphs are shown. The top panels show results for two classes of static random graphs: Watts-Strogatz graphs <cit.>, and soft random geometric graphs <cit.>. In the Watts-Strogatz model, the edges of a pristine network with a regular locally connected structure are rewired randomly with a probability p_rw, leading to the emergence of non-trivial small-world and clustering properties. In soft random geometric graphs, also known as proximity random graphs, individuals are distributed uniformly at random in the unit square, then only pairs at Euclidean distance l<l_ max are connected with a probability which decays exponentially with l. Both classes of random networks are locally highly structured, with short loops and clusters. The bottom panels refer to synthetic contact networks generated with more realistic agent-based models, the OpenABM-Covid19 <cit.> and Covasim <cit.> models. These agent-based models are able to generate realistic contact networks on large populations, by modeling the interactions in households, schools, workplaces and other locations. Some contacts in these networks also change daily to reflect the dynamic nature of real-life interactions. 
Only the contact network structures generated by these agent-based models over a time horizon of a few weeks is used in the present work, and the epidemic propagations are generated using the standard SI model. In these networks, the link between two individuals i and j is assigned a weight w_ij^t, representing the aggregate duration of the contact between i and j in day t. Given an infection rate γ per contact per time unit, the infection probability associated to the contact is then computed as λ_ij^t = 1-e^-γ w_ij^t. In both cases, when a relatively small number of observations is provided at the last time, the SCDC method is able to outperform SMF and achieves accuracy on par with the BP method. The same testing framework is also employed to evaluate epidemic inference on two real contact networks, originally presented in Ref. <cit.>, that have been collected with RFID tags in a school (Thiers13 dataset) and in an office environment (InVS15 datasets). The contact data are collected over a period of several days, with a temporal resolution of 20 seconds, which allows for data aggregation over coarse-grained time windows of a preferred size τ_w. In our study, time windows with size τ_w ranging from 3 hours to a day are considered, for a total of T time steps ranging from a minimum of 12 to a maximum of 36 steps. When performing the coarse-graining procedure, the number c_ij^t of contacts between i and j occurring in a time window t of size τ_w is computed and used to estimate the infection probability λ_ij^t between the two individuals at time step t as λ_ij^t = 1-(1-γ)^c_ij^t, where γ is a common parameter describing the infectiousness of a single contact. The results of epidemic risk assessment on these real-world contact networks are shown in Figure <ref>, adopting the same metric used in the case of random graphs. Also in this case, for all contact networks under study, the SCDC method has a performance very close to the BP algorithm, and in general superior to the SMF heuristic. §.§ Inference in recurrent epidemic models While previous results focus on the quantitative analysis of inference performances on irreversible dynamics, the present subsection aims at illustrating the potential of the SCDC method for epidemic inference on recurrent epidemic models. In order to do that, we perform a simple analysis inspired by the one already presented in recent work on Matrix Product Belief Propagation <cit.>, a novel powerful approximation method for recurrent dynamics on graphs. We conducted simulations of a single epidemic outbreak using a SIRS model on an Erdos-Renyi random graph with N=100 nodes and average degree z=3. Figure <ref> shows the value of the posterior marginal probability of being infected p( x_i^t = 0 |𝒪) inferred by the SCDC method. The color scale indicating these probability values is superimposed on the black bars marking the time intervals of true infections, which enables visual inspection of the inference performance of the method. Notably, the SCDC method rather accurately assigns posterior marginal probabilities that closely align with the observed data, demonstrating its effectiveness even for unobserved nodes or time points that are distant from the observations. Several reinfection events are also correctly captured. Although the preliminary results are promising, further investigations are needed to fully understand the performance of the SCDC method on inference problems involving recurrent epidemic models. 
However, conducting these additional investigations is outside the scope of the current study. § CONCLUSIONS The Dynamic Cavity method is a distributed technique to study discrete-state stochastic processes on graphs, which is exact on trees and often provides very good approximations on sparse graphs. While its original formulation is computationally demanding <cit.>, approximations have been introduced <cit.> and, whenever possible, more efficient parameterizations of single dynamical trajectories have been introduced <cit.>. In the present work, an observation-reweighted version of the Dynamic Cavity formulation, including individual observations, is introduced to model the posterior probability of epidemic processes on contact networks. The formulation exploits a Bayesian approach and is fully equivalent to the Belief Propagation approach to epidemic trajectories <cit.>. Starting from the reweighted Dynamic Cavity formulation and exploiting a small-coupling expansion, a novel set of fixed-point equations for a pair of time-dependent cavity messages m_i∖ j^t and μ_i∖ j^t is obtained. Here, m_i∖ j^t is the probability that individual i is infected at time t in the cavity graph when the interaction with individual j is removed, while μ_i∖ j^t is a cavity field whose role depends on the presence of observations. In the absence of observations, all cavity fields {μ_i∖ j^t} identically vanish, and the dynamics, expressed solely in terms of marginal probabilities {m_i∖ j^t}, becomes causal, reducing to a set of generalized mean-field equations. These time-forward equations, tested on random graphs for the SI model, yield higher accuracy compared to the commonly used individual-based mean-field equations (which they reduce to in the regime of low infectiousness and high connectivity), albeit less accurate than BP, which in the simplified case of non-recurrent forward dynamics coincides with the Dynamic Message Passing method <cit.>. Simple analyses conducted on the SI model with limited observations demonstrate that the role of the cavity fields {μ_i∖ j^t} is to propagate information about observations to neighboring nodes and subsequently distribute this information throughout the contact network, appropriately tilting the probabilistic weight of the associated dynamic trajectories in view of the presence of observations. The presence of observations renders the epidemic dynamics non-causal, with backward-in-time information flow, as evident from the non-uniform distribution of the backward cavity messages ρ_t ←^i∖ j(x_i^t). The main additional approximation assumed in deriving the SCDC method from the DC equations is the independence of cavity messages from the epidemic trajectory of the removed node. This approximation can introduce inconsistencies with specific trajectories imposed by observations, particularly in regimes with numerous observations, including repeated observations on the same individuals. However, this issue is effectively resolved by introducing a small self-infection probability, which practically eliminated the problem in all applications considered. The SCDC algorithm proves to be highly effective in assessing the epidemic risk of individuals, exhibiting performance very similar to that of BP, of which it is essentially an approximation, and substantially outperforming other heuristic methods based on mean-field approximations. As a fixed-point message passing method, a potential drawback of SCDC lies in its convergence properties. 
In numerical tests, SCDC experiences convergence problems similar to those of BP, mainly resulting from long-range correlations generated by loops in the contact graphs. Nevertheless, even in the absence of convergence, the estimated marginal probabilities often remain sufficiently accurate, enabling a reliable estimation of the epidemic risk. The main advantage of the SCDC method over Dynamic Cavity and Belief Propagation for epidemic trajectories lies in its straightforward generalization to epidemic models with multiple states (e.g., SIR, SEIR) and recurrent processes (e.g., SIS, SIRS). Indeed, the fundamental components of the method and the efficient algorithm based on the temporal transfer matrix remain largely unchanged, with only modifications of the matrix dimensions and elements to accommodate the model's increased complexity. Consequently, the SCDC algorithm maintains a linear complexity with respect to the duration of the epidemic process and the number of contacts in the network. Preliminary results indicate excellent predictive power even for complex recurrent epidemic processes such as the SIRS model. Surprisingly, no assumption of small infectiousness was made in the applications, suggesting that the method performs well beyond the limitations expected from its derivation. The primary limitation of the SCDC method is that its efficient formulation based on the transfer matrix is currently applicable only to Markovian models. Further study is required to develop an efficient algorithm for non-Markovian recurrent epidemic models. Concerning the method itself, another interesting direction for its development involves gaining a better understanding of the role of second-order terms in the small-coupling expansion and developing an improved algorithm that takes them into account. Finally, future directions include the possibility of generalizing the approach presented here to other types of dynamical processes on networks, e.g. rumor spreading processes <cit.>.

§ ACKNOWLEDGMENTS

Computational resources were provided by the SmartData@PoliTO (http://smartdata.polito.it) interdepartmental center on Big Data and Data Science. We thank S. Crotti for suggesting the format of Figures <ref> and <ref>, similar to that used in Ref. <cit.>.

§ REFERENCES

[1] L. Ferretti, C. Wymant, M. Kendall, L. Zhao, A. Nurtay, L. Abeler-Dörner, M. Parker, D. Bonsall, and C. Fraser, Science 368, eabb6936 (2020).
[2] A. Baker, I. Biazzo, A. Braunstein, G. Catania, L. Dall'Asta, A. Ingrosso, F. Krzakala, F. Mazza, M. Mézard, A. P. Muntoni, M. Refinetti, S. S. Mannelli, and L. Zdeborová, Proc. Natl. Acad. Sci. USA 118, e2106548118 (2021).
[3] D. Shah and T. Zaman, SIGMETRICS Perform. Eval. Rev. 38, 203–214 (2010).
[4] D. Shah and T. Zaman, IEEE Trans. Inf. Theory 57, 5163 (2011).
[5] A. Y. Lokhov, M. Mézard, H. Ohta, and L. Zdeborová, Phys. Rev. E 90, 012801 (2014).
[6] N. Antulov-Fantulin, A. Lančić, T. Šmuc, H. Štefančić, and M. Šikić, Phys. Rev. Lett. 114, 248701 (2015).
[7] I. Biazzo, A. Braunstein, L. Dall'Asta, and F. Mazza, Sci. Rep. 12, 19673 (2022).
[8] A. Braunstein, G. Catania, L. Dall'Asta, M. Mariani, and A. P. Muntoni, Sci. Rep. 13, 7350 (2023).
[9] C. Shah, N. Dehmamy, N. Perra, M. Chinazzi, A.-L. Barabási, A. Vespignani, and R. Yu, arXiv:2006.11913 (2020).
[10] S. Chen, P.-D. Yu, C. W. Tan, and H. V. Poor, arXiv:2211.00880 (2022).
[11] G. Čutura, B. Li, A. Swami, and S. Segarra, in 2021 29th European Signal Processing Conference (EUSIPCO) (2021), pp. 2204–2208.
[12] F. Altarelli, A. Braunstein, L. Dall'Asta, A. Lage-Castellanos, and R. Zecchina, Phys. Rev. Lett. 112, 118701 (2014).
[13] F. Altarelli, A. Braunstein, L. Dall'Asta, A. Ingrosso, and R. Zecchina, J. Stat. Mech. 2014, P10016 (2014).
[14] A. Braunstein and A. Ingrosso, Sci. Rep. 6, 27538 (2016).
[15] J. Bindi, A. Braunstein, and L. Dall'Asta, PLoS ONE 12, e0176376 (2017).
[16] I. Neri and D. Bollé, J. Stat. Mech. 2009, P08009 (2009).
[17] Y. Kanoria and A. Montanari, Ann. Appl. Probab. 21, 1694 (2011).
[18] A. Y. Lokhov, M. Mézard, and L. Zdeborová, Phys. Rev. E 91, 012811 (2015).
[19] F. Altarelli, A. Braunstein, L. Dall'Asta, and R. Zecchina, Phys. Rev. E 87, 062115 (2013).
[20] S. Gómez, A. Arenas, J. Borge-Holthoefer, S. Meloni, and Y. Moreno, Europhys. Lett. 89, 38009 (2010).
[21] P. Van Mieghem, J. Omic, and R. Kooij, IEEE/ACM Trans. Netw. 17, 1 (2008).
[22] I. Z. Kiss, J. C. Miller, and P. L. Simon, Mathematics of Epidemics on Networks: From Exact to Approximate Models, Interdisciplinary Applied Mathematics Vol. 46 (Springer International Publishing, 2017).
[23] R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani, Rev. Mod. Phys. 87, 925 (2015).
[24] T. Plefka, J. Phys. A: Math. Gen. 15, 1971 (1982).
[25] B. Bravi, P. Sollich, and M. Opper, J. Phys. A: Math. Theor. 49, 194003 (2016).
[26] A. Georges and J. S. Yedidia, J. Phys. A: Math. Gen. 24, 2173 (1991).
[27] A. Maillard, L. Foini, A. L. Castellanos, F. Krzakala, M. Mézard, and L. Zdeborová, J. Stat. Mech. 2019, 113301 (2019).
[28] F. Mazza and I. Biazzo, EpiGen: generator of epidemics on contact graphs (2023), doi:10.5281/zenodo.7852232.
[29] D. J. Watts and S. H. Strogatz, Nature 393, 440 (1998).
[30] M. D. Penrose, Ann. Appl. Probab. 26, 986 (2016).
[31] R. Hinch, W. J. M. Probert, A. Nurtay, M. Kendall, C. Wymant, M. Hall, K. Lythgoe, A. B. Cruz, L. Zhao, A. Stewart, L. Ferretti, D. Montero, J. Warren, N. Mather, M. Abueg, N. Wu, O. Legat, K. Bentley, T. Mead, K. Van-Vuuren, D. Feldner-Busztin, T. Ristori, A. Finkelstein, D. G. Bonsall, L. Abeler-Dörner, and C. Fraser, PLOS Comput. Biol. 17, e1009146 (2021).
[32] C. C. Kerr, R. M. Stuart, D. Mistry, R. G. Abeysuriya, K. Rosenfeld, G. R. Hart, R. C. Núñez, J. A. Cohen, P. Selvaraj, B. Hagedorn, L. George, M. Jastrzębski, A. S. Izzo, G. Fowler, A. Palmer, D. Delport, N. Scott, S. L. Kelly, C. S. Bennette, B. G. Wagner, S. T. Chang, A. P. Oron, E. A. Wenger, J. Panovska-Griffiths, M. Famulare, and D. J. Klein, PLOS Comput. Biol. 17, e1009149 (2021).
[33] M. Génois and A. Barrat, EPJ Data Sci. 7, 11 (2018).
[34] S. Crotti and A. Braunstein, Large deviations in stochastic dynamics over graphs through matrix product belief propagation, arXiv:2303.17403 (2023).
[35] E. Aurell and H. Mahmoudi, J. Stat. Mech. 2011, P04014 (2011).
[36] E. Aurell and H. Mahmoudi, Phys. Rev. E 85, 031119 (2012).
[37] G. Del Ferraro and E. Aurell, Phys. Rev. E 92, 010102 (2015).
[38] E. Aurell, G. Del Ferraro, E. Domínguez, and R. Mulet, Phys. Rev. E 95, 052119 (2017).
[39] E. D. Vázquez, G. Del Ferraro, and F. Ricci-Tersenghi, J. Stat. Mech. 2017, 033303 (2017).
[40] E. Ortega, D. Machado, and A. Lage-Castellanos, Phys. Rev. E 105, 024308 (2022).
[41] D. J. Daley and D. G. Kendall, Nature 204, 1118 (1964).
[42] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang, Phys. Rep. 424, 175 (2006).

§ DERIVATION OF THE DYNAMIC CAVITY EQUATIONS FOR THE SI MODEL

In this Section, a derivation of the dynamic cavity equations (<ref>) is presented. For clarity of exposition, the calculations are carried out for the case of pure time-forward dynamics; the addition of observations is discussed afterward. The derivation exploits a path-integral representation of the stochastic epidemic dynamics of the SI model, which is based on interpreting the (Markovian) update rule of the discrete-time stochastic process as a set of dynamical constraints for the degrees of freedom under study, i.e. the binary variables {x_i^t}, and on defining a dynamic partition function of the form

𝒵 = ∑_X p( X) = ∑_X∏_i=1^N{ p(x_i^0) ∏_t=0^T-1[ ∑_y_i^t,h_i^t P[ y_i^t | h_i^t] δ_x_i^t+1,x_i^t+(1-x_i^t)y_i^tδ(h_i^t-∑_j=1^Nν_ji^tx_j^t)] }
= ∑_X,Y∫ dH P[Y| H]∏_i=1^N{ p(x_i^0) ∏_t=0^T-1[δ_x_i^t+1,x_i^t+(1-x_i^t)y_i^tδ(h_i^t-∑_j=1^Nν_ji^tx_j^t)] }

where P[Y| H]= ∏_i=1^N∏_t=0^T-1∑_y_i^t,h_i^t P[ y_i^t | h_i^t] and the time-dependent matrix {ν_ij^t}, already defined in Sec. <ref>, encodes the infection rates and possible interaction patterns over time.
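Before carrying out the integral manipulations below, it may help to see the generative rule this probabilistic weight encodes: the discrete-time SI update x_i^t+1 = x_i^t + (1-x_i^t)y_i^t with P[y_i^t=1|h_i^t] = 1-(1-ε_i^t)e^h_i^t and h_i^t = ∑_j ν_ji^t x_j^t. The following Python sketch (ours, for illustration only; the array layout and the convention ν_ji^t = log(1-λ_ji^t), consistent with how e^h_i^t enters the transition probabilities, are assumptions, not code from this work) samples trajectories from exactly this rule:

```python
import numpy as np

def simulate_si(nu, eps, p0, T, rng=None):
    """Sample one SI trajectory x[t, i] in {0, 1} (0 = S, 1 = I).

    nu[t, j, i] = log(1 - lambda_{ji}^t): log-probability that an infected j
                  does NOT transmit to i during step t (0 if no contact).
    eps[t, i]   = self-infection probability of node i at time t.
    p0[i]       = probability that i is a patient zero at time 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = len(p0)
    x = np.zeros((T + 1, N), dtype=int)
    x[0] = (rng.random(N) < p0).astype(int)
    for t in range(T):
        # h_i^t = sum_j nu[t, j, i] * x_j^t : log-prob. of receiving no infection
        h = x[t] @ nu[t]
        p_inf = 1.0 - (1.0 - eps[t]) * np.exp(h)
        newly = (rng.random(N) < p_inf).astype(int)
        x[t + 1] = np.maximum(x[t], newly)   # SI: infected nodes stay infected
    return x
```

Trajectories generated in this way are the objects whose probabilistic weight is reweighted by the observations in the posterior measure.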
Using the integral representation of Dirac's and Kronecker's delta functions and replacing the explicit expression of the conditional probabilities P[ y_i^t | h_i^t] from Eq. (<ref>), we obtain 𝒵 = ∑_X,Y∫ dHP[Y|H]∏_i=1^N{ p(x_i^0) ∏_t=0^T-1[δ_x_i^t+1,x_i^t+(1-x_i^t)y_i^tδ(h_i^t-∑_j=1^Nν_ji^tx_j^t)] } = ∑_X∏_i=1^N{p(x_i^0) ∏_t=0^T-1[∑_y_i^t=0,1∫_-∞^+∞dh_i^t∫_0^2πdx̂_i^t/2π e^ ix̂_i^t(x_i^t+1-x_i^t-(1-x_i^t)y_i^t)[1-e^h_i^t(1 - ε_i^t)]^y_i^t[e^h_i^t(1 - ε_i^t)]^(1-y_i^t). . ×. . ∫_-∞^+∞dĥ_i^t/2πe^ iĥ_i^t(h_i^t-∑_j=1^Nν_ji^tx_j^t)] } = ∑_X∏_i=1^N{p(x_i^0) ∏_t=0^T-1[∑_y_i^t=0,1∫_-∞^+∞dh_i^t∫_0^2πdx̂_i^t/2π e^ ix̂_i^t(x_i^t+1-x_i^t)e^- ix̂_i^t(1-x_i^t)y_i^t[1-e^h_i^t(1 - ε_i^t)]^y_i^t[e^h_i^t(1 - ε_i^t)]^(1-y_i^t). . ×. . ∫_-∞^+∞dĥ_i^t/2πe^ iĥ_i^t(h_i^t-∑_j=1^Nν_ji^tx_j^t)] }. It is convenient to proceed isolating the interaction terms and performing the sums over the random variables y_i^t, 𝒵 = ∑_X∏_i=1^N{ p(x_i^0) ∏_t=0^T-1[∫_-∞^+∞dĥ_i^t/2π∫_-∞^+∞dh_i^te^ iĥ_i^th_i^t∫_0^2πdx̂_i^t/2πe^ ix̂_i^t(x_i^t+1-x_i^t).. .. ×∑_y_i^t=0,1e^- ix̂_i^t(1-x_i^t)y_i^t[1-e^h_i^t(1 - ε_i^t)]^y_i^t[e^h_i^t(1 - ε_i^t)]^(1-y_i^t)∏_j>i e^- i(ĥ_i^tν_ji^tx_j^t+ĥ_j^tν_ij^tx_i^t)]} = ∑_X∏_i=1^N{ p(x_i^0) ∏_t=0^T-1[∫_-∞^+∞dĥ_i^t∫_0^2πdx̂_i^t/2πe^ ix̂_i^t(x_i^t+1-x_i^t)∫_-∞^+∞dh_i^t/2 πe^ iĥ_i^th_i^t[(1-e^h_i^t(1 - ε_i^t))e^- ix̂_i^t(x_i^t+1-1) + e^h_i^t(1 - ε_i^t)].. . . ×∏_j>i e^- i(ĥ_i^tν_ji^tx_j^t+ĥ_j^tν_ij^tx_i^t)]} then performing the integrals over h_i^t and x̂_i^t, we get 𝒵 = ∑_X∏_i=1^N{ p(x_i^0) ∏_t=0^T-1[∫_-∞^+∞dĥ_i^t∫_0^2πdx̂_i^t/2π[δ(ĥ_i^t- i)(1-ε_i^t)(e^ ix̂_i^t(x_i^t+1-x_i^t)-e^ ix̂_i^t(x_i^t+1-1))+δ(ĥ_i^t)e^ ix̂_i^t(x_i^t+1-1)] . . .. ×∏_j>ie^- i(ĥ_i^tν_ji^tx_j^t+ĥ_j^tν_ij^tx_i^t)] } = ∑_X∏_i=1^N{ p(x_i^0) ∏_t=0^T-1[ ∫_-∞^+∞dĥ_i^t[ δ(ĥ_i^t- i)(1-ε_i^t) (δ_x_i^t+1,x_i^t - δ_x_i^t+1,1)+ δ(ĥ_i^t)δ_x_i^t+1,1] .. . . ×∏_j>i e^- i(ĥ_i^tν_ji^tx_j^t+ĥ_j^tν_ij^tx_i^t)] } = ∑_X∫DĤ∏_i=1^N{ p(x_i^0) ∏_t=0^T-1[[ δ(ĥ_i^t- i)(1-ε_i^t) (δ_x_i^t+1,x_i^t - δ_x_i^t+1,1)+ δ(ĥ_i^t)δ_x_i^t+1,1] .. . . ×∏_j>i e^- i(ĥ_i^tν_ji^tx_j^t+ĥ_j^tν_ij^tx_i^t)] } where ∫DĤ = ∏_i=1^N ∏_t=0^T-1(∫_-∞^+∞dĥ_i^t) for shortness of notation. The probabilistic weight associated with the dynamic partition function 𝒵 is now in a form that can be represented as a graphical model, in which the variable nodes correspond to the spatio-temporal variables x_i and ĥ_i and there are two types of factor nodes (see Figure <ref>): single-node factors ϕ_i[(x_i,ĥ_i)] = p(x_i^0) ∏_t=0^T-1[ δ(ĥ_i^t- i) (1-ε_i^t) (δ_x_i^t+1,x_i^t - δ_x_i^t+1,1)+ δ(ĥ_i^t)δ_x_i^t+1,1] and factors involving pairs of variables on neighboring nodes at the same time ψ_ij[(x_i ,ĥ_i),(x_j,ĥ_j)] = ∏_t=0^T-1 e^- i(ĥ_i^tν_ji^tx_j^t+ĥ_j^tν_ij^tx_i^t) . By grouping together single-node variables at all times, that is trajectories (x_i,ĥ_i)= ({x_i^0,…, x_i^T},{ĥ_i^0,…, ĥ_i^T}), the resulting factor graph reproduces the topology of the underlying interaction graph. It should be noted that the choice of variable grouping in this approach disentangles the locally-loopy structure of the factor graph associated with the space-time problem. This disentanglement is achieved due to the linear coupling between variables on neighboring nodes, which is obtained by introducing auxiliary local fields ĥ_i. A different but equivalent formulation of the dynamic cavity considers factor graphs in which the variable nodes contain pairs of trajectories, e.g. (x_i,x_j), of sites that are neighbors on the underlying interaction graph. This is the formulation that leads to the BP method in Refs.<cit.>. 
According to this graphical model construction, the following dynamic cavity equations represent an ansatz for describing the stochastic dynamics associated with the dynamic partition function 𝒵 on a tree-like interaction graph, c_ij[x_i,ĥ_i] = 1/𝒵_ijp(x_i^0) ∏_t=0^T-1[ δ(ĥ_i^t- i)(1-ε_i^t) (δ_x_i^t+1,x_i^t - δ_x_i^t+1,1)+ δ(ĥ_i^t)δ_x_i^t+1,1] ×∏_k∈∂ i∖ j{∫ Dĥ_k ∑_x_kc_ki[x_k,ĥ_k] e^- i∑_t (ĥ_i^tν_ki^tx_k^t+ĥ_k^tν_ik^tx_i^t)} where ∫ Dĥ_i = ∏_t=0^T-1[∫_-∞^+∞ dĥ_i^t]. Then, using the Fourier transforms c[x,s] =∏_t=0^T-1[∫_-∞^+∞dĥ^te^- is^tĥ^t]c[x,ĥ] c[x,ĥ] =∏_t=0^T-1[∫_-∞^+∞ds^t/2πe^ is^tĥ^t]c[x,s], the dynamic cavity equations can be written as c_ij[x_i,s_i] = 1/𝒵_ijp(x_i^0) ∏_t=0^T-1{∫_-∞^+∞ dĥ_i^t e^- i s_i^t ĥ_i^t[ δ(ĥ_i^t- i)(1-ε_i^t) (δ_x_i^t+1,x_i^t - δ_x_i^t+1,1)+ δ(ĥ_i^t)δ_x_i^t+1,1] ×∏_k∈∂ i∖ j[∑_x_k^t∫_-∞^+∞d ĥ_k^t ∫_-∞^+∞ds_k^t/2π e^ i s_k^t ĥ_k^t c_ki[x_k, s_k] e^- i(ĥ_i^tν_ki^tx_k^t+ĥ_k^tν_ik^tx_i^t)] }. The expression can be simplified by performing the integrals over the auxiliary variables {ĥ_k}_k∈∂ i∖ j first, c_ij[x_i,s_i] = 1/𝒵_ijp(x_i^0) ∏_t=0^T-1{∫_-∞^+∞ dĥ_i^t e^- i s_i^t ĥ_i^t[ δ(ĥ_i^t- i) (1-ε_i^t) (δ_x_i^t+1,x_i^t - δ_x_i^t+1,1)+ δ(ĥ_i^t)δ_x_i^t+1,1] ×∏_k∈∂ i∖ j[∑_x_k^t∫_-∞^+∞ ds_k^t c_ki[x_k, s_k] e^- iĥ_i^tν_ki^tx_k^t∫_-∞^+∞d ĥ_k^t/2π e^ iĥ_k^t (s_k^t - ν_ik^t x_i^t )] } = 1/𝒵_ijp(x_i^0) ∏_t=0^T-1{∫_-∞^+∞ dĥ_i^t e^- i s_i^t ĥ_i^t[ δ(ĥ_i^t- i)(1-ε_i^t) (δ_x_i^t+1,x_i^t - δ_x_i^t+1,1)+ δ(ĥ_i^t)δ_x_i^t+1,1] ×∏_k∈∂ i∖ j[∑_x_k^t∫_-∞^+∞ ds_k^t c_ki[x_k, s_k] e^- iĥ_i^tν_ki^tx_k^tδ(s_k^t - ν_ik^t x_i^t ) ] } then over the variables {s_k }_k∈∂ i∖ j, c_ij[x_i,s_i] = 1/𝒵_ijp(x_i^0) ∏_t=0^T-1{∫_-∞^+∞ dĥ_i^t e^- i s_i^t ĥ_i^t[ δ(ĥ_i^t- i)(1-ε_i^t) (δ_x_i^t+1,x_i^t - δ_x_i^t+1,1)+ δ(ĥ_i^t)δ_x_i^t+1,1] ×∏_k∈∂ i∖ j[∑_x_k^t c_ki[x_k,ν_ikx_i] e^- iĥ_i^tν_ki^tx_k^t] } = 1/𝒵_ijp(x_i^0) ∑_x_∂ i∖ j{[ ∏_k∈∂ i∖ j c_ki[x_k,ν_ikx_i] ] ∏_t=0^T-1[∫_-∞^+∞ dĥ_i^t e^- iĥ_i^t ( s_i^t + ∑_k∈∂ i∖ jν_ki^tx_k^t). .×( δ(ĥ_i^t- i) (1-ε_i^t)(δ_x_i^t+1,x_i^t - δ_x_i^t+1,1)+ δ(ĥ_i^t)δ_x_i^t+1,1) ] } and finally over ĥ_i, to obtain a more natural form for the dynamic cavity equations c_ij[x_i,s_i] = 1/𝒵_ijp(x_i^0) ∑_x_∂ i∖ j{[ ∏_k∈∂ i∖ j c_ki[x_k,ν_ikx_i] ] ×∏_t=0^T-1[ δ_x_i^t+1,x_i^t(1-ε_i^t)e^s_i^t + ∑_k∈∂ i∖ jν_ki^tx_k^t +δ_x_i^t+1,1(1 - (1-ε_i^t)e^s_i^t + ∑_k∈∂ i∖ jν_ki^tx_k^t) ] }. Due to the locality and independence of observations, the latter can then be included in the above equations as an additional single-node factor term, a local likelihood, to obtain the dynamic cavity equations in Eq. (<ref>). § REDUCTION TO THE TIME-FORWARD EQUATIONS IN THE ABSENCE OF OBSERVATIONS A major consequence of the introduction of time-forward messages ρ_→ t^i∖ j and time-backward messages ρ_t←^i∖ j is that, in the absence of observations, it is possible to prove that the quantities μ_i∖ j^t have to vanish for all edges ∀ (i,j) and ∀ t and then recover a purely time-forward dynamics. Using the definition of m_i∖ j^t by means of the quantities ρ_→ t^i∖ j(x_i^t) and ρ_t←^i∖ j(x_i^t), but performing the slicing one time step later, we obtain m_i∖ j^t = 1/𝒵̃_ij∑_x_i^t,x_i^t+1ρ_→ t^i∖ j(x_i^t)x_i^tM_x_i^tx_i^t+1^i∖ jρ_t+1←^i∖ j(x_i^t+1) = 1/𝒵̃_ijρ_→ t^i∖ j(1)M_t, 11^i ∖ jρ_t+1←^i∖ j(1) or slicing one time step earlier, m_i∖ j^t = 1/𝒵̃_ij∑_x_i^t-1,x_i^tρ_→ t-1^i∖ j(x_i^t-1)x_i^tM_x_i^t-1x_i^t^i∖ jρ_t←^i∖ j(x_i^t) = 1/𝒵̃_ij[ρ_→ t-1^i∖ j(1)M_t-1, 11^i ∖ jρ_t←^i∖ j(1) + ρ_→ t-1^i∖ j(0)M_t-1, 01^i ∖ jρ_t←^i∖ j(1) ]. 
Using the previous result it is possible to express m_i∖ j^t as function of m_i∖ j^t-1 m_i∖ j^t = m_i∖ j^t-1 + ρ_→ t-1^i∖ j(0)M_t-1, 01^i ∖ jρ_t←^i∖ j(1)/𝒵̃_ij×1-m_i∖ j^t-1/1-m_i∖ j^t-1 = m_i∖ j^t-1 +(1-m_i∖ j^t-1)[ρ_→ t-1^i∖ j(0)M_t-1, 01^i ∖ jρ_t←^i∖ j(1)/ρ_→ t-1^i∖ j(0)M_t-1, 01^i ∖ jρ_t←^i∖ j(1) + ρ_→ t-1^i∖ j(0)M_t-1, 00^i ∖ jρ_t←^i∖ j(0)] = m_i∖ j^t-1 +(1-m_i∖ j^t-1)[[1 - (1 - ε_i^t-1)e^∑_k∈∂ i∖ jm_k∖ i^t-1ν_ki^t-1] ρ_t←^i∖ j(1)/[1 - (1 - ε_i^t-1)e^∑_k∈∂ i∖ jm_k∖ i^t-1ν_ki^t-1] ρ_t←^i∖ j(1) + (1 - ε_i^t-1) e^∑_k∈∂ i∖ jm_k∖ i^t-1ν_ki^t-1ρ_t←^i∖ j(0)]. As already stressed in the main text, the last expression does not represent a time-forward equation because the quantities ρ_t←^i∖ j(x_i^t) are computed backward in time from T to step t. Time-forward dynamics is recovered if the two time-backward messages are equal, which is expected to occur in the absence of observations at later times. To prove this, one can first notice that from (<ref>), μ_i∖ j^t=0 if the time-backward messages are equal, i.e. if ρ_t+1←^i∖ j(0) = ρ_t+1←^i∖ j(1). In the absence of observations also the inverse implication is true: when the set of messages μ_i∖ j^t at time t are zero and there is no observation also at time t, then the corresponding time-backward messages ρ_t ←^i∖ j(x_i^t) are also uniform. Let us start from the time T-1, because by construction μ_i∖ j^T=0, then using (<ref>) with the final time condition ρ_T ←^i ∖ j( x_i^T ) = p( O_i^T| x_i^T ) we obtain μ_i∖ j^t = 1/𝒵̃_ijρ_→ t^i∖ j(0)M_T-1,00^i∖ j(p( O_i^T| 0 ) - p( O_i^T| 1 )). If no observation is provided on the final time, then p(O_i^T| x_i^T)=1 for x_i^T=0,1 and the numerator vanishes, that is μ_i∖ j^T-1=0. Moreover, ρ_T-1←^i∖ j(x_i^T-1) = ∑_x_i^T M_x_i^T-1x_i^T^i∖ j p(O_i^T| x_i^T) = M_x_i^T-10 ^i∖ j + M_x_i^T-11 ^i∖ j that is ρ_T-1←^i∖ j(0) = ( 1 - ε_i^T-1) e^∑_k∈∂ i∖ jm_k∖ i^T-1ν_ki^T-1p(O_i^T-1| 0) + (1-( 1 - ε_i^T-1)e^∑_k∈∂ i∖ jm_k∖ i^T-1ν_ki^T-1) p(O_i^T-1| 0) = p(O_i^T-1| 0), ρ_T-1←^i∖ j(1) = e^∑_k∈∂ i∖ jν_ik^T-1μ_k∖ i^T-1p(O_i^T-1| 1) = p(O_i^T-1| 1), meaning that ρ_T-1←^i∖ j(0)=ρ_T-1←^i∖ j(1) if no observation is included at time T-1. In this way, the equality is guaranteed at time T-1 and one can proceed by induction. By assuming that, in the absence of observations at times larger than t, the equality is valid for time t+1, i.e. ρ_t+1←^i∖ j(0)=ρ_t+1←^i∖ j(1)=ρ_t+1←^i∖ j for all directed edges (i,j), then one obtains that μ_i∖ j^t=0. Computing the time-backward messages at time t, ρ_t←^i∖ j(x_i^t) = ∑_x_i^t,x_i^t+1 M_x_i^tx_i^t+1^i∖ jρ_t+1←^i∖ j(x_i^t+1) = (M_x_i^t0^i∖ j+M_x_i^t1^i∖ j) ρ_t+1←^i∖ j, and using that all μ_k∖ i^t vanish, ρ_t←^i∖ j(x_i^t) ∝ p(O_i^t| 0) if x_i^t=0, p(O_i^t| 1) if x_i^t=1, that is independent of the value of x_i^t if no observation occurs at time t. By induction, this is true for every time, as long as no observation is included. Hence, it is possible to conclude that, in the absence of observations, the equations (<ref>) for the cavity marginals m_i∖ j^t reduce to the more standard time-forward mean-field equations in Eqs. (<ref>). § EFFICIENT IMPLEMENTATION IN THE INFECTION TIME REPRESENTATION As an alternative to the generic efficient formulation presented in Section <ref> in terms of transfer-matrix formalism, the complexity of Eqs. 
(<ref>)-(<ref>) can be reduced from the exponential (in the temporal length T) to polynomial exploiting the non-recurrency of the SI dynamic - in which only configurations of the type x_i=(0,…,0,1,…1) are allowed - by using a simpler representation in terms of infection times. A SI epidemic trajectory can be parameterized by a unique set of integer variables t_i (one for each node) representing the first time at which individual i is infected, and taking values in t_i∈{ 0,…,T+1}. The case t_i=0 corresponds to individual i being originally infected at the initial time, i.e. being a patient-zero of the epidemics. The other special case t_i=T+1 models the scenario where individual i never gets infected during the dynamics (which formally corresponds to t_i=+∞). The trajectory x_i can be simply expressed as x_i^t=Θ[t-t_i], where Θ[x] is a Heaviside-step function, with the convention Θ[0]=1. After some algebra, and defining ν̃_i^t = log(1- ε_i^t) we can rewrite Eqs. (<ref>),(<ref>) and (<ref>) as follows: m_i\ j^t=1/𝒵̃_ij∑_t_i=0^tp(O_i| t_i)p_0(t_i)[∏_r=0^t_i-2e^ν̃_i^r+∑_k∈∂ i∖ jm_k∖ i^rν_ki^r][1-𝕀_1≤ t_i≤ Te^ν̃_i^t_i-1+∑_k∈∂ i∖ jm_k∖ i^t_i-1ν_ki^t_i-1]∏_s=t_i^T-1e^∑_k∈∂ i∖ jν_ik^sμ_k∖ i^s μ_i\ j^t = 1/𝒵̃_ij∑_t_i=t+2^T+1p(O_i| t_i)p_0(t_i)[∏_r=0^t_i-2e^ν̃_i^r+∑_k∈∂ i∖ jm_k∖ i^rν_ki^r][1-𝕀_1≤ t_i≤ Te^ν̃_i^t_i -1 + ∑_k∈∂ i∖ jm_k∖ i^t_i-1ν_ki^t_i-1]∏_s=t_i^T-1e^∑_k∈∂ i∖ jν_ik^sμ_k∖ i^s+ -𝕀_0≤ t≤ T-1/𝒵̃_ijp(O_i| t+1) p_0(t+1)[∏_r=0^te^ν̃_i^r + ∑_k∈∂ i∖ jm_k∖ i^rν_ki^r]∏_s=t+1^T-1e^∑_k∈∂ i∖ jν_ik^sμ_k∖ i^s 𝒵̃_ij=∑_t_i=0^T+1p(O_i| t_i)p_0(t_i)[∏_r=0^t_i-2e^ν̃_i^r+∑_k∈∂ i∖ jm_k∖ i^rν_ki^r][1-𝕀_1≤ t_i≤ Te^ν̃_i^t_i-1+∑_k∈∂ i∖ jm_k∖ i^t_i-1ν_ki^t_i-1]∏_s=t_i^T-1e^∑_k∈∂ i∖ jν_ik^sμ_k∖ i^s where the function p_0(t_i) is related to the probability of node i being a patient zero, namely p_0(r)=γ_i^0 r=0 1-γ_i^0 r>0 Notice how all the summations/products w.r.t. the infection times are now linear in T. Analogously, the likelihood term for each observations on node i (Eq. (<ref>)) can be rewritten use this representation as p(O_i^τ_o_i| t_i)=(1-f_FPR)θ[t_i-(τ_o_i+1)]+f_FNRθ[τ_o_i-t_i] if O_i^τ_o_i=0 f_FPRθ[t_i-(τ_o_i+1)]+(1-f_FNR)θ[τ_o_i-t_i] if O_i^τ_o_i=1 with p(O_i| t_i)=∏_o_i ∈O_ip(O_i^τ_o_i| t_i). Analogous expressions w.r.t. (<ref>)-(<ref>) can be derived for the single-node marginal m_i^t and its normalization 𝒵_i. An efficient computational scheme can be attained by updating all the cavities of a fixed node at once, and then performing a random shuffling on the order nodes are updated. Intuitively, the speed-up induced by this protocol is that the forward and backward contribution to each cavity message - say, on link (i,j) can be computed by removing the corresponding link contribution from the site term i. In order to clarify this point, let us define the following four quantities, for each node i: ℛ_i^→(t) =ν̃_i^t + ∑_k∈∂im_k∖i^tν_ki^t ℛ_i^←(t) =∑_k∈∂iν_ik^tμ_k∖i^t K_i^→(t) =∑_r=0^t-2ℛ_i^→(r), K_i^←(t) =∑_s=t^T-1ℛ_i^←(s) Analogous definitions hold for each cavity i\ j, just by letting the above summations run over all the neighbors of node i but j. Eqs. (<ref>) have also a physical interpretation. For instance, ℛ_i^→(t) is a mean-field approximation for the log-probability of node i not being infected at time t by none of its neighbors. 
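As an illustration of the computational scheme just described, these per-node accumulators can be obtained with two linear sweeps in time. The sketch below is ours, not the authors' released implementation, and the array layout is an assumption; the cavity versions ℛ_i∖ j, K_i∖ j then follow by subtracting the single-link contributions, as written in the equations that follow.

```python
import numpy as np

def node_accumulators(nu_in, nu_out, m_in, mu_in, nu_self):
    """Site quantities R_i^{->}, R_i^{<-}, K_i^{->}, K_i^{<-} for one node i.

    nu_in[k, t]  = nu_{ki}^t  (incoming edges k -> i), nu_out[k, t] = nu_{ik}^t
    m_in[k, t]   = m_{k\i}^t,  mu_in[k, t] = mu_{k\i}^t  (current cavity messages)
    nu_self[t]   = log(1 - eps_i^t); all arrays run over t = 0, ..., T-1.
    """
    T = nu_self.shape[0]
    R_fwd = nu_self + (m_in * nu_in).sum(axis=0)   # R_i^{->}(t)
    R_bwd = (nu_out * mu_in).sum(axis=0)           # R_i^{<-}(t)
    # K_i^{->}(t) = sum_{r=0}^{t-2} R_fwd(r), needed for t = 0, ..., T+1
    K_fwd = np.zeros(T + 2)
    for t in range(2, T + 2):
        K_fwd[t] = K_fwd[t - 1] + R_fwd[t - 2]
    # K_i^{<-}(t) = sum_{s=t}^{T-1} R_bwd(s)
    K_bwd = np.zeros(T + 2)
    for t in range(T - 1, -1, -1):
        K_bwd[t] = K_bwd[t + 1] + R_bwd[t]
    return R_fwd, R_bwd, K_fwd, K_bwd
```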
The above definitions Eqs (<ref>)-(<ref>) allow one to re-write the update of the cavity equations in a more compact form: 𝒵̃_ij= ∑_t_i=0^T+1p(O_i| t_i)p_0(t_i)e^K_i\ j^→(t_i)[1-𝕀_1≤ t_i≤ Te^ℛ_i\ j^→(t_i-1)]e^K_i\ j^←(t_i) m_i\ j^t= 1/𝒵̃_ij∑_t_i=0^tp(O_i| t_i)p_0(t_i)e^K_i\ j^→(t_i)[1-𝕀_1≤ t_i≤ Te^ℛ_i\ j^→(t_i-1)]e^K_i\ j^←(t_i) μ_i\ j^t= 1/𝒵̃_ij∑_t_i=t+2^T+1p(O_i| t_i)p_0(t_i)e^K_i\ j^→(t_i)[1-𝕀_1≤ t_i≤ Te^ℛ_i\ j^→(t_i-1)]e^K_i\ j^←(t_i) -1/𝒵_ijp(O_i| t+1)𝕀_0≤ t≤ T-1p_0(t+1)e^K_i\ j^→(t+2)+K_i\ j^←(t+1) At fixed i, the quantities ℛ_i\ j^→(t), ℛ_i\ j^←(t), K_i\ j^→(t), K_i\ j^←(t) can be computed for each cavity by removing just one link contribution (i.e. the one corresponding to the link removed in that specific cavity graph), without further O( | ∂ i | ) computations, namely ℛ_i\ j^→(t) =ℛ_i^→(t)-m_j\ i^tν_ji^t, ℛ_i\ j^←(t) =ℛ_i^←(t)-ν_ij^tμ_j\ i^t and similarly K_i\ j^→(t) =K_i^→(t)-∑_r=0^t-2m_j\ i^rν_ji^r, K_i\ j^←(t) =K_i^←(t)-∑_s=t^T-1ν_ij^sμ_j\ i^s for any j∈∂ i. Furthermore, the computation Eqs. (<ref>) can be done recursively, in a forward (resp. backward) direction w.r.t. time for K_i^→ (resp. K_i^←), i.e. by exploiting K_i^→(t) =K_i^→(t-1)+ℛ_i^→(t-2), K_i^←(t) =K_i^←(t+1)+ℛ_i^←(t) Clearly, equivalent relations hold for the link quantities K_i\ j^→(←). Using all the above schemes, the overall computational cost to perform a single update of all the cavity quantities for a node i scales as O(|∂ i| T ). A further advantage of such a computational scheme is that the update of all the cavities for one node can be performed in parallel, a particularly convenient choice especially when dealing with dense graphs. The convergence criterion can be defined either w.r.t. the cavity messages { m_i\ j^t} and/or their conjugates {μ_i\ j^t }, or eventually w.r.t. the single-site marginals {m_i^t}: the latters can be computed as m_i^t=∑_t_i=0^tp(O_i| t_i)p_0(t_i)e^K_i^→(t_i)[1-𝕀_1≤ t_i≤ Te^ℛ_i^→(t_i-1)]e^K_i^←(t_i)/∑_t_i=0^T+1p(O_i| t_i) p_0(t_i)e^K_i^→(t_i)[1-𝕀_1≤ t_i≤ Te^ℛ_i^→(t_i-1)]e^K_i^←(t_i) where the normalization is explicitly shown at the denominator. This expression is equivalent to Eq. (<ref>) of the main text but rewritten using the infection-time representation just discussed. § DERIVATION OF SCDC ON THE SIR MODEL In a Susceptible-Infected-Recovered (SIR) model, the possible individual states are x_i^t ∈{S, I, R } and the transition probabilities between states are given by W_i[x_i^t+1=S|𝐱^𝐭] = δ_x_i^t,S(1 - ε_i^t ) ∏_j ∈∂ i[1-λ_ji^tδ_x_j^t,I] W_i[x_i^t+1=I|𝐱^𝐭] = δ_x_i^t,I(1-r_i^t) + δ_x_i^t,S[1-(1 - ε_i^t )∏_j ∈∂ i[1-λ_ji^tδ_x_j^t,I]] W_i[x_i^t+1=R|𝐱^𝐭] = δ_x_i^t,R + δ_x_i^t,Ir_i^t where r_i^t is the recovery probability of individual i at time t and λ_ij^t is the probability that i transmits the infection to j at time t. As for the SI model, it is convenient to introduce a set of local fields h_i^t defined by h_i^t = ∑_jδ_x_j^t,Iν_ji^t in order to disentangle the interaction between individual i and its neighbors. The dynamical partition function of the system (neglecting observations for simplicity) is therefore 𝒵 = ∑_X∏_i p(x_i^0) ∏_t ∫ dh_i^t {δ_x_i^t+1,SW[x_i^t+1 = S|𝐱^𝐭] +δ_x_i^t+1,IW[x_i^t+1 = I|𝐱^𝐭] + . .+ δ_x_i^t+1,RW[x_i^t+1 = R|𝐱^𝐭] }δ(h_i^t-∑_jδ_x_j^t,Iν_ji^t) = ∑_X∏_i p(x_i^0) ∏_t ∫ dh_i^t {δ_x_i^t+1,Sδ_x_i^t,S(1 - ε_i^t )e^h_i^t +δ_x_i^t+1,I[δ_x_i^t,I(1-r_i^t) + δ_x_i^t,S(1-(1 - ε_i^t )e^h_i^t)] + . 
.+δ_x_i^t+1,R( δ_x_i^t,R + δ_x_i^t,I r_i^t) }δ(h_i^t-∑_jδ_x_j^t,Iν_ji^t) = ∑_X∏_i p(x_i^0) ∏_t ∫ dh_i^t {δ_x_i^t+1,Sδ_x_i^t,S(1 - ε_i^t )e^h_i^t +δ_x_i^t+1,I[δ_x_i^t,I(1-r_i^t) + δ_x_i^t,S(1-(1 - ε_i^t )e^h_i^t)] + . .+δ_x_i^t+1,R( δ_x_i^t,R + δ_x_i^t,I r_i^t) }∫dĥ_i^t/2πe^iĥ_i^t(h_i^t-∑_jδ_x_j^t,Iν_ji^t). Integrating over the local fields h_i^t for all times, we finally obtain the expression 𝒵 = ∑_X∫𝒟Ĥ∏_i p(x_i^0)∏_t {δ(ĥ_i^t)[δ_x_i^t+1,I(δ_x_i^t,I(1-r_i^t)+δ_x_i^t,S)+δ_x_i^t+1,R(δ_x_i^t,R+δ_x_i^t,Ir_i^t ) ]+ . .+δ(ĥ_i^t- i)(1 - ε_i^t ) [δ_x_i^t+1,Sδ_x_i^t,S-δ_x_i^t+1,Iδ_x_i^t,S] }∏_j>ie^δ_x_j^t,Iν_ji^t(-iĥ_i^t)+δ_x_i^t,Iν_ij^t(-iĥ_j^t) which can in turn be interpreted as a graphical model in a similar way to what was done for the SI model. Inserting the observations and using Bayes theorem, it is then possible to obtain the corresponding expression for the posterior probability distribution and the following dynamic cavity equations for the SIR model with observations, c_ij[x_i,ĥ_i] = 1/𝒵_ijp(x_i^0)∏_t {δ(ĥ_i^t)[δ_x_i^t+1,I(δ_x_i^t,I(1-r_i^t)+δ_x_i^t,S)+δ_x_i^t+1,R(δ_x_i^t,R+δ_x_i^t,Ir_i^t ) ]+ . .+δ(ĥ_i^t- i)(1 - ε_i^t )[δ_x_i^t+1,Sδ_x_i^t,S-δ_x_i^t+1,Iδ_x_i^t,S]} p(O_i^t|x_i^t) ×∏_k∈∂ i ∖ j∑_x_k∫ Dĥ_k c_ki[x_k, ĥ_k]e^δ_x_k^t,Iν_ki^t(-iĥ_i^t)+δ_x_i^t,Iν_ik^t(-iĥ_k^t). The expansion for small infection rates can be carried out similarly to the SI model, the only difference being that the average quantities are now defined as follows, m_i∖ j^t = ∑_x_i∫ Dĥ_i c_ij[x_i,ĥ_i]δ_x_i^t,I μ_i∖ j^t = ∑_x_i∫ Dĥ_i c_ij[x_i,ĥ_i] (- iĥ_i^t). With these definitions, approximating at first order and then re-exponentiating c_ij[x_i,ĥ_i] ≈1/𝒵̃_ijp(x_i^0)∏_t {δ(ĥ_i^t)[δ_x_i^t+1,I(δ_x_i^t,I(1-r_i^t)+δ_x_i^t,S)+δ_x_i^t+1,R(δ_x_i^t,R+δ_x_i^t,Ir_i^t ) ] . .+δ(ĥ_i^t- i)(1 - ε_i^t )[δ_x_i^t+1,Sδ_x_i^t,S-δ_x_i^t+1,Iδ_x_i^t,S]}∏_t {e^∑_k ∈∂ i ∖ j( - iĥ_i^t m_k∖ i^t ν_ki^t + δ_x_i^t,Iν_ik^tμ_k∖ i^t)p(𝒪_i^t|x_i^t)} where 𝒵̃_ij = ∑_x_i∫ Dĥ_i p(x_i^0)∏_t {δ(ĥ_i^t)[δ_x_i^t+1,I(δ_x_i^t,I(1-r_i^t)+δ_x_i^t,S)+δ_x_i^t+1,R(δ_x_i^t,R+δ_x_i^t,Ir_i^t ) ] . .+δ(ĥ_i^t- i)(1 - ε_i^t )[δ_x_i^t+1,Sδ_x_i^t,S-δ_x_i^t+1,Iδ_x_i^t,S]}∏_t {e^∑_k ∈∂ i ∖ j(- iĥ_i^t m_k∖ i^tν_ki^t + δ_x_i^t,Iν_ik^tμ_k∖ i^t)p(𝒪_i^t|x_i^t)} = ∑_x_i p(x_i^0)∏_t {δ_x_i^t+1,I(δ_x_i^t,I(1-r_i^t)+δ_x_i^t,S)+δ_x_i^t+1,R(δ_x_i^t,R+δ_x_i^t,Ir_i^t ) . .+(1 - ε_i^t )[δ_x_i^t+1,Sδ_x_i^t,S-δ_x_i^t+1,Iδ_x_i^t,S] e^∑_k ∈∂ i ∖ jm_k∖ i^tν_ki^t}∏_t { e^∑_k ∈∂ i ∖ jδ_x_i^t,Iν_ik^tμ_k∖ i^tp(𝒪_i^t|x_i^t)}. Following the approach presented in Section <ref> of the main text for the SI model, by defining the transfer matrix M_x_i^t x_i^t+1^i ∖ j = {δ_x_i^t+1,I(δ_x_i^t,I(1-r_i^t)+δ_x_i^t,S) + δ_x_i^t+1,R(δ_x_i^t,R + δ_x_i^t,I r_i^t ) . .+(1 - ε_i^t )[δ_x_i^t+1,Sδ_x_i^t,S-δ_x_i^t+1,Iδ_x_i^t,S] e^∑_k ∈∂ i ∖ jm_k∖ i^tν_ki^t}e^∑_k ∈∂ i ∖ jδ_x_i^t,Iν_ik^tμ_k∖ i^tp(𝒪_i^t|x_i^t), the dynamical partition function Eq. (<ref>) can be written as 𝒵̃_ij = ∑_x_ip(x_i^0)(∏_t=0^T-1M_x_i^t x_i^t+1^i∖ j) p(𝒪_i^T|x_i^T). In matrix form, the quantity M^i∖ j_t of Eq. (<ref>) corresponds to Eq. (<ref>) of the main text. Defining again forward and backward messages as ρ_→ t^i ∖ j (x_i^t) = ∑_x_i^0...x_i^t-1p(x_i^0)∏_t'=0^t-1M_x_i^t' x_i^t'+1^i ∖ j = ∑_x_i^t-1ρ_→ t-1^i∖ j(x_i^t-1)M_x_i^t-1x_i^t^i∖ j and ρ_t ←^i ∖ j (x_i^t) = ∑_x_i^t+1...x_i^T∏_t'=t^T-1M_x_i^t' x_i^t'+1^i ∖ j p(𝒪_i^T|x_i^T) = ∑_x_i^t+1ρ_t+1←^i∖ j(x_i^t+1)M_x_i^tx_i^t+1^i∖ j which satisfy recursive equations analogous to Eqs. 
(<ref>), we finally get, for the normalization Z̃_ij and the 1-time cavity messages m_i∖ j^t, μ_i∖ j^t: 𝒵̃_ij = ∑_x_i^tρ_→ t^i ∖ j (x_i^t)ρ_t ←^i ∖ j (x_i^t) = ρ_→ t^i ∖ j (S)ρ_t ←^i ∖ j (S) + ρ_→ t^i ∖ j (I)ρ_t ←^i ∖ j (I) + ρ_→ t^i ∖ j (R)ρ_t ←^i ∖ j (R) and m_i∖ j^t = ∑_x_i∫ Dĥ_i c_ij[x_i,ĥ_i] δ_x_i^t,I = 1/𝒵̃_ij∑_x_i^tρ_→ t^i ∖ j (x_i^t) δ_x_i^t,Iρ_t ←^i ∖ j (x_i^t) = ρ_→ t^i ∖ j (I)ρ_t ←^i ∖ j (I)/𝒵̃_ij and μ_i∖ j^t = ∑_x_i∫ Dĥ_i c_ij[x_i,ĥ_i] (- iĥ_i^t) = 1/𝒵̃_ij∑_x_i^t, x_i^t+1ρ_→ t^i ∖ j (x_i^t) (1 - ε_i^t )[δ_x_i^t+1,Sδ_x_i^t,S-δ_x_i^t+1,Iδ_x_i^t,S] e^∑_k ∈∂ i ∖ jm_k∖ i^tν_ki^t + δ_x_i^t,Iν_ik^tμ_k∖ i^t p(𝒪_i^t|x_i^t) ρ_t+1 ←^i ∖ j (x_i^t+1) = 1/𝒵̃_ijρ_→ t^i ∖ j (S)(1 - ε_i^t ) e^∑_k ∈∂ i ∖ j m_k∖ i^tν_ki^tp(𝒪_i^t|S) (ρ_t+1 ←^i ∖ j(S) - ρ_t+1 ←^i ∖ j(I) ). § EXAMPLE OF NORMALIZATION ISSUE FOR LEAVES OF THE CONTACT GRAPH The small coupling expansion requires to assume the normalization 𝒵_ij in (<ref>), which sums over all the possible trajectories of node i assuming s_i = 0 at every time. The method thus considers all the trajectories in which the cavity node j is always susceptible, and therefore cannot infect node i. In particular, there are situations, such as the one considered in the example below, in which the normalization vanishes, meaning that it is not possible to explain an observed trajectory within the standard SI model. While this could seem pathological, it is worth stressing that the assumption done is necessary to obtain a message-passing algorithm that is independent of the trajectory of node j, a crucial condition to perform the expansion on which the present method is based. It is however possible to ensure that every trajectory of a node i remains feasible, the normalization constant being finite, by slightly modifying the epidemic model introducing a small self-infection probability. In addition to fix the normalization issue, a small value of self-infection probability does not deteriorate the predictive power of the method. To better illustrate this problem, we consider a leaf node i and its unique neighbor j. In the cavity graph corresponding to the message c_ij[x_i, s_i], node i will appear as an isolated node. As a consequence, it is expected that the approximation behind the SCDC equations cannot explain, within the cavity graph, an infection actually transmitted from node j to node i. Indeed, because of the absence of further neighbors, the normalization term reads 𝒵̃_ij =∑_x_ip(x_i^0)∏_t=0^T-1[{δ_x_i^t+1,x_i^t(1-ε_i^t) +δ_x_i^t+1,1[1-(1-ε_i^t) ]} p(O^t_i |x_i^t)] p( O_i^T| x_i^T ), showing that an infection can only be explained by a self-infection event. When ε_i^t =0, the cavity message admits trajectories for which node i is always susceptible or infected. When a repeated observation, at different time, were imply an infection event at some t≠ 0, the normalization would vanish, indicating an inconsistency in the model. This is prevented by the existence of a finite self-infection probability. Since it is recommended to operate in the limit of a vanishing self-infection, in the present case it is possible to analytically verify the limiting behavior for the cavity messages m_i∖ j^t and μ_i∖ j^t. As an example, we suppose that the leaf i is observed to be susceptible at time t_S and then is observed to be infected at time t_I > t_S. We consider a uniform self-infection probability ε_i^t = ε for any time t and any node i, and a uniform prior probability p( x_i^0 = 0) = 1-γ, p( x_i^0 = 1) = γ. 
The forward messages are ρ_→ t^i∖ j(x_i^t = 0) = (1-γ)(1-ε)^t if t ≤ t_I, 0 if t> t_I, ρ_→ t^i∖ j(x_i^t = 1) = γ + ε(1-γ)∑_l = 0^t-1(1-ε)^l if t ≤ t_S, ε(1-γ)∑_l = t_S^t-1(1-ε)^l if t_S < t ≤ t_I, ε(1-γ)∑_l = t_S^t_I-1(1-ε)^l if t>t_I, and the backward messages ρ_t←^i∖ j(x_i^t = 0) = ε(1-ε)^t_S-t∑_l=0^t_I-1-t_S(1-ε)^l if t ≤ t_S, ε∑_l=0^t_I-1-t(1-ε)^l if t_S < t ≤ t_I, 1 if t>t_I ρ_t←^i∖ j(x_i^t = 1) = 0 if t ≤ t_S, 1 if t > t_S. The normalization factor taking into account the observations is 𝒵̃_ij = ε(1-γ)∑_l = t_S^t_I-1(1-ε)^l, so that the cavity marginal is given by m_i∖ j^t = 0 if t≤ t_S, ε(1-γ)∑_l = t_S^t-1(1-ε)^l/ε(1-γ)∑_l = t_S^t_I-1(1-ε)^l if t_S < t ≤ t_I, 1 if t>t_I. In the limit of vanishing self-infection ε→ 0, the cavity marginal takes the simple expression m_i∖ j^t = 0 if t≤ t_S, t-t_S/t_I-t_S if t_S < t ≤ t_I, 1 if t>t_I, which gives a reasonable probability profile for the node i to be infected in the absence of node i. It is worth stressing that this is not the full marginal m_i^t, which also depends on the messages coming from j to i. The cavity field instead diverges for times t between the two observation times μ_i∖ j^t = 1 if t≤ t_S, -∞ if t_S < t ≤ t_I, 0 if t>t_I, which is a clear consequence of having a vanishing normalization factor in the limit of zero self-infection. The divergence of the cavity field is thus the very non-physical effect of the inconsistency already discussed. In practice, in order to avoid divergences triggered by some peculiar combinations of observations, we then implement the algorithm using a cutoff μ_ cutoff<0 on the values of μ_i∖ j, such that the update rule (<ref>) is implemented as follows μ_i∖ j^t = max{μ_ cutoff, ρ_→ t^i∖ j(0)M_00^i∖ j(ρ_t+1←^i∖ j(0) - ρ_t+1←^i∖ j(1))/ρ_→ t^i∖ j(0)ρ_t←^i∖ j(0) + ρ_→ t^i∖ j(1)ρ_t←^i∖ j(1)}.
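For concreteness, a minimal Python sketch of one forward/backward sweep for a single SI cavity message, including the cutoff just introduced, could look as follows. This is our illustrative pseudocode, not the implementation used in this work: the state convention is 0 = S, 1 = I, and the 2×2 transfer matrices M[t] are assumed to be pre-built from the current neighbouring messages and observation likelihoods as defined in the main text (in practice the forward and backward messages are also normalized at each step to avoid under- or overflow).

```python
import numpy as np

def si_cavity_sweep(M, p0, pO_T, mu_cutoff=-30.0):
    """One forward/backward sweep for a single cavity message (i \\ j), SI model.

    M[t]  : 2x2 transfer matrix M^{i\\j}_{x^t, x^{t+1}}, t = 0, ..., T-1.
    p0    : prior on x_i^0 (length-2 array); pO_T : final-time observation likelihood.
    Returns updated marginals m_{i\\j}^t and cavity fields mu_{i\\j}^t.
    """
    T = len(M)
    rho_fwd = np.zeros((T + 1, 2))
    rho_bwd = np.zeros((T + 1, 2))
    rho_fwd[0] = p0
    for t in range(T):                      # forward recursion
        rho_fwd[t + 1] = rho_fwd[t] @ M[t]
    rho_bwd[T] = pO_T
    for t in range(T - 1, -1, -1):          # backward recursion
        rho_bwd[t] = M[t] @ rho_bwd[t + 1]
    Z = (rho_fwd * rho_bwd).sum(axis=1)     # normalization, independent of slicing time
    m = rho_fwd[:, 1] * rho_bwd[:, 1] / Z
    mu = np.empty(T + 1)
    for t in range(T):
        num = rho_fwd[t, 0] * M[t][0, 0] * (rho_bwd[t + 1, 0] - rho_bwd[t + 1, 1])
        mu[t] = max(mu_cutoff, num / Z[t])  # cutoff protects against divergences
    mu[T] = 0.0                             # mu_{i\\j}^T = 0 by construction
    return m, mu
```

In the full algorithm this sweep is repeated over all directed edges, with the transfer matrices rebuilt from the updated messages, until a fixed point is reached.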
http://arxiv.org/abs/2306.04162v1
20230607052525
On the Defocusing Cubic Nonlinear Wave Equation on $\mathbb{H}^3$ with Radial Initial Data in $H^{\frac{1}{2}+δ} \times H^{-\frac{1}{2}+δ}$
[ "Chutian Ma" ]
math.AP
[ "math.AP" ]
In this paper we prove global well-posedness and scattering for the defocusing cubic nonlinear wave equation in the hyperbolic space ℍ^3, under the assumption that the initial data is radial and lies in H^1/2+δ(ℍ^3)× H^-1/2+δ(ℍ^3).

§ INTRODUCTION

In this paper, we study the defocusing, cubic nonlinear wave equation

{ u_tt-Δ_ℍ^3u+u^3=0,
  u(0,x)=u_0,
  u_t(0,x)=u_1. }

Equations such as (<ref>) have been extensively studied on Euclidean space. Equation (<ref>) is known to be H^1/2-critical in the sense that the scaling symmetry on ℝ^3

u(t,x) → λ u(λ t, λ x)

preserves the H^1/2× H^-1/2 norm of the solution. Wellposedness and scattering are defined in the usual way:

(Wellposedness). We say the initial value problem is locally wellposed in H^s× H^s-1 if for any (u_0,u_1)∈ H^s× H^s-1 there exist T>0 and a neighborhood U of (u_0,u_1) in H^s× H^s-1, such that for any (u_0',u_1')∈ U, there exists a unique solution u in C_t^0H_x^s([-T,T]×ℍ^3)× C^0_tH_x^s-1([-T,T]×ℍ^3). Furthermore, the map (u_0',u_1')→ u is continuous. If T>0 can be chosen arbitrarily large, then we say the initial value problem is globally wellposed.

(Scattering). We say that a solution u scatters in H^s× H^s-1 if there exist (u_0^+,u_1^+) and (u_0^-,u_1^-) in H^s× H^s-1 such that

(u,u_t)-S(t)(u_0^+,u_1^+)→ 0 in H^s× H^s-1 as t→ +∞,
(u,u_t)-S(t)(u_0^-,u_1^-)→ 0 in H^s× H^s-1 as t→ -∞,

where S(t) is the propagator of the linear equation.

It is conjectured that the initial value problem (<ref>) is globally wellposed and scatters for initial data in H^1/2× H^-1/2. In fact, Dodson in <cit.> proved the conjecture on ℝ^3 under an additional radial symmetry assumption. If we assume the initial data has H^1× L^2 regularity, the solution to (<ref>) on a time interval I has the conserved energy

E(t)=∫ ( 1/2|∇ u|^2 + 1/2u_t^2 + 1/4u^4 ) ,

which remains constant for any t∈ I. Energy conservation holds on Euclidean spaces and on general manifolds. In this case, global wellposedness follows directly from local wellposedness and the bounded H^1× L^2 norm. In the case where the solution has lower regularity, say H^s× H^s-1 with 1/2≤ s<1, there is no known conserved quantity that controls that norm. Thus local wellposedness alone is not sufficient. On Euclidean spaces, rough data global wellposedness and scattering results are well established for (<ref>) and a wide range of other dispersive equations. Bourgain in <cit.> studied the cubic defocusing NLS below the energy norm using the Fourier truncation method, proving global wellposedness for s>11/13. In short, the Fourier truncation method splits the initial data into a high frequency part and a low frequency part and evolves them under respective equations.
The high frequency evolution is rough but has small size, and will be treated as a perturbation. The low frequency evolution is large but has finite initial energy, which will be treated as a smoothed-up approximating solution. Since the high frequency evolution is small, its interaction with the low frequency evolution in the nonlinear terms is expected to be small, which causes the low frequency energy to be "almost conserved". Various later works have employed similar methods in studying rough data IVP for wave equations. <cit.> proved global well-posedness for cubic defocusing NLW in ℝ^3 for 3/4<s<1. See also <cit.>, <cit.>, <cit.>, <cit.>, <cit.> <cit.>, <cit.>. Compared to Euclidean space, there are fewer results regarding such initial value problems on hyperbolic space. Partly due to a lack of tool to effectively localize frequency. The Littlewood Paley operator, which plays a crucial role in general harmonic analysis on Euclidean space, projects frequencies into compact dyadic sets and preserves L^p norms. On hyperbolic space, however, a family of operators with these two properties are not known to exist yet. See <cit.>, where the author gives a sufficient condition for operators of the form ϕ(-Δ_) to be bounded in L^p(), which unfortunately does not include any ϕ that is smooth and compactly supported. Despite this technical difficulty, the studying of dispersive equations on hyperbolic spaces are of particular interest. Previous results suggest that the geometry of hyperbolic space results in stronger dispersive effects for dispersive equations compared to the flat case. Heuristically, this effect is due to the fact that the volume of radius R ball in ℍ^d increases exponentially in R. Therefore, as the waves advance one unit in geodesic direction, they have multiple times of volume to disperse into. For precise results, see <cit.> for example, which produces weighted Strichartz estimates for both wave and Schrödinger equations. <cit.>, <cit.> gives wider range of Strichartz admissible pairs for wave equations compared to what are avaiable on Euclidean spaces. See also <cit.>. In light of the better dispersive properties, we expect stronger results regarding IVP for NLW on . In this paper, we aim to obtain the following result. Assume the initial data (u_0,u_1) satisfies (u_0,u_1) is radial sysmetric, (u_0,u_1)∈ H^1/2+δ× H^-1/2+δ,δ>0 is any positive constant then Given any δ>0, the initial value problem to (<ref>), (<ref>) is globally well-posed and scatters to linear solution. Moreover, there exists function f_δ>0: ℝ→ℝ, such that u_L^4_t,x^4≤ f((u_0,u_1)_H^1/2+δ× H^-1/2+δ) The outline of this paper is as following. In section 2, we list some preliminary results with regard to the analysis on ℍ^3. We will review the basic geometry and introduce the heat flow based frequency operators, which serve as substitute for Little Paley operators on ℝ^d. We also include Strichartz estimates from <cit.>. In section 3, we derive a series of Morawetz estimates. In section 4, we prove the main theorem using Fourier truncation method. The main idea is to use the positive terms from Morawetz estimates in Section 3 to control other interaction terms in the nonlinearity. § PRELIMINARIES §.§ Notations We clarify some of the notations and abbreviation we will use throughout the paper. We say X≲ Y iff there exists a global constant C, such that X≤ CY. We say X∼ Y iff X≲ Y and Y≲ X. When there is no risk of confusion, we will write ·_p instead of ·_L^p_x. 
§.§ Analysis on ℍ^3 Let ℝ^3+1 be the standard Minkowski space endowed with the metric -dx_0^2+dx_1^2+dx_2^2+dx_3^2 and the bilinear form [x,y]=x_0y_0-x_1y_1-x_2y_2-x_3y_3. Hyperbolic space ℍ^3 is defined as the submanifold satisfying [x,x]=1 whose metric is induced from the Minkowski space. We will use the polar coordinates for our analysis on ℍ^3. We use the pair (r,ω)∈ℝ× S^2 to represent the point (coshr,sinhrω) in the previous model. The metric induced is g_ℍ^3=dr^2+sinh^2rdω^2 where dω^2 is the standard metric on the sphere 𝕊^2. And integrals are computed by ∫_ℍ^3 fdμ=∫_0^∞∫_𝕊^2f(r,ω)sinh^2rdrdω §.§ Heat Flow Frequency Operators In order to study the frequency interaction, we need a tool which serves similar role as the Littlewood Paley operators do in Euclidean spaces. For that purpose, heat flow based operators have been developed and used widely to localize frequencies on manifolds, see <cit.>, <cit.>, <cit.> for examples of their applications. In this paper, we will use the notations from <cit.>. Below we list the definition and some properties of heat flow based operators. For any s>0, we define P_≥ sf=e^sΔf, P_sf=(-sΔ)e^sΔf, P_<sf=f-P_≥ sf Heat flow operators are bounded in L^p. For 1<p<+∞ P_≥ sf_p+P_sf_p+P_<sf_p≲f_p For 0≤β<α<β+1, (-Δ)^β P_≤ sf_2≲ s^α-β(-Δ)^α f_2 (-Δ)^α P_≥ sf_2≲ s^β-α(-Δ)^β f_2 §.§ Sobolev Spaces on ℍ^3 The Laplacian-Beltrami operator on ℍ^3 can be written in polar coordinates as Δ_ℍ^3=∂_r^2+2coshr/sinhr∂_r+1/sinh^2rΔ_𝕊^2 Unlike its counterpart in Euclidean space, the spectrum of -Δ_ℍ^3 is [1,+∞], which is strictly positive. Thus we have the following Poincaré inequality: For 0≤α≤β<∞ and smooth function f, we have (-Δ)^α f_2≲_α,β(-Δ)^β f_2 Sobolev spaces on hyperbolic spaces are defined through Laplacian in the usual way: For 1<p<+∞, s∈ℝ, f_W^s,p=(-Δ)^s/2f_p We have Sobolev embedding If 1<p<q<∞ and 1/q=1/p-s/d W^s,p↪ L^q If 1/p-s/d<0, then W^s,p↪ L^∞ We need the following radial Sobolev inequalities. The proof follows the exact same procedure as its two dimensional counterpart in <cit.>, only to replace sinh(r)^1/2 in 2d with sinh(r) in 3d. Given f radial, and 1/2<α<2, we have sinh(r)f_∞≲_αf_H^α First, we will prove sinh(r)f_∞≲ s^-1/4(f_2+(-sΔ)^1/2f_2) To show (<ref>), f(r)^2 =-∫_r^∞∂/∂ s(|f(s)|^2)ds =-∫_r^∞ 2Re(f̅(̅s̅)̅∂ f/∂ r(s))ds ≲ sinh^-2(r)(∫_r^∞ |f(s)|^2sinh^2(s) ds)^1/2(∫_r^∞|∂ f/∂ r|^2sinh^2(s)ds)^1/2 ≲ sinh^-2(r)f_2(-Δ)^1/2f_2 (<ref>) then follows from Hölder inequality. The rest of the proof goes exactly the same as Corollary 2.22 in <cit.> §.§ Strichartz Estimates We will use the Strichartz estimates obtained in <cit.> for wave equation on the 3 dimensional hyperbolic space. Define ℛ to be the set of (p,q,γ) such that 1/p+1/q≤1/2 p,q≥ 2 γ=3/2-1/p-3/q and ℰ to be the set of (p,q,γ) 1/2-1/p≤1/q≤1/2-1/3p and p>2 or 0<1/q<1/3 and p=2 γ=1-2/q We have the following Strichartz inequalities Suppose (p,q,γ)∈ℛ∪ℰ, then the mapping defined by T:H^γ(ℍ^3)→ L^p_tL^q_x(ℝ×ℍ^3) Tf(t,x)=e^it√(-Δ)f(x) or equivalently T^*:L^p'_tL^q'_x(ℝ×ℍ^3)→ H^-γ(ℍ^3) T^*F(x)=∫_-∞^+∞e^-it√(-Δ)F(t,x)dt is bounded. Note that the solution of <ref> is given by u(t,x)=cos(t√(-Δ))u_0+sin(t√(-Δ))/√(-Δ)u_1+∫_s<tsin((t-s)√(-Δ))/√(-Δ)f(s,x)ds Strichartz estimate for the solution follows from (<ref>) and (<ref>) Let I⊂ℝ and t_0∈ I. 
Suppose (p,q,γ)∈ℛ, then we have the following estimate if u is the solution to the linear wave equation u_L^p_tL^q_x(I)+u_L^∞_tH^γ_x(I)+u_t_L^∞_tH^γ-1_x(I)≲_p,q,γu_0_H^γ_x+u_1_H^γ-1+f_L^p'_tL^q'_x(I) The estimates for cos(t√(-Δ))u_0 and sin(t√(-Δ))/√(-Δ)u_1 follow immediately from (<ref>). Consider the operator W defined by WF(t,x)=∫_-∞^+∞sin((t-s)√(-Δ)) /√(-Δ)F(s,x)ds Let T_1(t)=sin(t√(-Δ))/√(-Δ). In fact, W=√(-Δ)T_1T_1^*. By (<ref>) and (<ref>), we easily check that W is bounded from L^p'_tL^q'_x to L^p_tL^q_x. Christ-Kiselev lemma<cit.> gives us the desired bound for the last piece in (<ref>). § MORAWETZ ESTIMATES In this section, we prove several Morawetz inequalities derived from choosing different potentials. Given smooth function a(x)∈ C^∞(ℍ^3), let u be a radial solution to the modified NLW (<ref>) with error term 𝒩 u_tt-Δ_ℍ^3u+u^3=𝒩 We define the Morawetz potential to be M_a(t)=-∫_(u_t∇ u·∇ a+u_tuΔ a/2) Assume that a satisfies the following condition { |∇ a|≤ C Δ a≥ 0 Δ^2 a≤ 0 D^2a is positive definite . We differentiate (<ref>) in time, the time derivative reads as following: Suppose a satisfies the condition (<ref>) and u is a radial symmetric solution to (<ref>) on the time interval I, then dM_a/dt=∫_(a^”(r)|∇ u|^2 + 1/4 u^4Δ a + 1/4u^2Δ^2 a + 𝒩∇ a ·∇ u + 1/2𝒩uΔ^2 a )dμ Next, we are going to pick three different function a, denoted by a_1,a_2,a_3, adhering to the conditions (<ref>). There exists a function a_1 on , such that Δ a_1 = 1 and a_1 satisfies the requirement <ref>. We will need the following estimate a_1^”(r)=rcoshr-sinhr/sinh^3r∼{ 1, r→ 0 re^-2r, r→∞ . Denote M_1 to be (<ref>) with a=a_1. There exists a_2>0 such that Δ a_2= { 1/r, 0<r<1 1, r≥ 1 ., which satisfies all but the positive definite condition of (<ref>). In fact, a_2^”(r)∼{ -r, r → 0 1, r≥ 1 . Note that Δ_=∂^2/∂ r^2+2coshr/sinhr∂/∂ r for radial symmetric functions. Thus a'(r) is the solution to the ODE. We have a_2'(r)=1/sinh^2(r)∫_0^r sinh^2(s)Δ a_2(s)ds And further a_2(r)=∫_0^r1/sinh^2(s)∫_0^ν sinh^2(ν)Δ a_2(ν)dν ds The first three conditions of (<ref>) are easy to verify. Unfortunately, a_2”(r) turns out to not be positive everywhere. Use Taylor expansion near r=0 gives us a_2^”(r)∼ -r near 0. Fortunately, this is small near 0 and can be absorbed by a_1^”(r) which is ∼ 1. We will get into these details later. Given any 0<α<1, there exists a_3>0, such that Δ a_3= { 1/r^α, 0<r<1 1, r≥ 1 .. Moreover, a_3 satisfies all conditions in (<ref>) and a_3^'(r)= 1/3-αr^1-α+𝒪(r^2-α), r→ 0 a_3^”(r)=(2-4/3-α)r^-α+𝒪(r^1-α), r→ 0 -Δ a_3(r) = α(1-α)/r^2+α+𝒪(r^-α) The procedure of finding a_3 is exactly the same as that of a_2. (<ref>) comes from Taylor expansion near r=0. Note that when α = 1, the dominant positive term becomes 0, which is reduced to the case of a_2. We will need the following space-time Morawetz estimate in particular Suppose ω is radial and solves the standard cubic equation (<ref>) on I. Then for any δ>0, we have the following estimate on its 2∞ norm retricted on the region r≥ 1: r^1/2ω_2∞(I ×{r> 1}^2≲sup_t ω_H^1/2+δω_t_H^-1/2+δ Plug the a_1 and 𝒩=0 into (<ref>). All terms on RHS are positive. In particular, from the a^” term, we get r^1/2e^-r∇ω^2 ≲dM/dt Denote β(r)=r^1/2e^-r for r>1. By radial Sololev embedding and product rule r^1/2ω_L^∞_x(r>1) = e^r β(r)ω_L^∞_x(r>1) ≲|∇|^1/2+δβ(r)ω_2+β(r)|∇|^1/2+δω_2 Note that |∇|^1/2+δβ decays just like β itself. 
Square (<ref>), integrate in time and use Morawetz estimate (<ref>), we get r^1/2ω_2∞(I×{r>1}) ≲|∇|^1/2+δβ(r)ω_2^2+ β(r)|∇|^1/2+δω_2 ≲β(r)|∇||∇|^-1ω_2+β(r)|∇||∇|^-1/2+δω_2 ≲sup_t ω_L^2_xω_t_H^-1_x+ ω_H^1/2+δω_H^-1/2+δ ≲sup_t ω_H^1/2+δω_t_H^-1/2+δ as desired. Before ending this section, we prove another Morawetz type inequality. Note that Lemma <ref>, <ref>, <ref> yield positive terms of the form β∇ v_2. We wish to gain similar terms for v_t in place of ∇ v. Unfortunately, terms involving v_t were canceled out in (<ref>). Thus, we define the following modified Morawetz potential for some suitable radial functions a(r) and ϕ(r) on . M(t) = -∫_ϕ(r)( v_t∇ a·∇ v + v_tvΔ a/2) Compute the time derivative as we did before, we get dM/dt = 1/2∫ (∇ϕ·∇ a)|v_t|^2+1/2∫ (∇ϕ·∇ a)|∇ v|^2+∫ϕ D^2a(∇ v,∇ v) - 1/4∫ (∇ϕ·∇ a - ϕΔ a)v^4 Choose a(r)=r^-α̃ for any 0<α<1 and ϕ(r)=log(r)χ(r≤ 1). Plug them into (<ref>), we get r^-1+α/2v_t_2^2 - r^-1+α/2|∇ v|^2_2 - r^-αlog(r)χ(r≤ 1)|∇ v|^2_1 - r^-αlog(r)χ(r≤ 1)v^4_1 ≲dM/dt Note that the negative terms involving r^-αlog(r) can be controlled by terms in (<ref>) due to the fact that r^-αlog(r)<< r^-α on r≤ 1 if we pick our α properly. § GLOBAL WELL-POSEDNESS AND SCATTERING The equation (<ref>) is H^1/2 critical. However, the initial value (<ref>) lies below H^1, which leaves us no energy or other known conserved quantity which controls H^1/2× H^-1/2 norm. Thus, we employ the Fourier truncation method as stated below. Given initial value problem (<ref>),(<ref>), we split the initial data into a high frequency small piece and a smoother large piece, in the following sense: for some s>0, (u_0,u_1)=(ω_0,ω_1)+(v_0,v_1) where (ω_0,ω_1) is the high frequency piece defined by (ω_0,ω_1)=(P_<su_0,P_<su_1) and (v_0,v_1) is the low frequency piece defined by (v_0,v_1)=(P_≥ su_0,P_≥ su_1) Moreover, for any 0<δ_1<δ, v and ω satisfy (ω_0,ω_1)_H^1/2+δ_1× H^-1/2+δ_1≲ s^1/2(δ-δ_1)(u_0,u_1)_H^1/2+δ× H^-1/2+δ (v_0,v_1)_H^1 × L^2≲ s^1/2(s-1)(u_0,u_1)_H^1/2+δ× H^-1/2+δ (<ref>) comes from Bernstein inequality. If we choose s small enough according to (u_0,u_1)_H^1/2+δ× H^-1/2+δ, then (ω_0,ω_1)_H^1/2+δ_1× H^-1/2+δ_1<<1. Thus by small data theory, the evolution of (ω_0,ω_1) under the equation (<ref>) exists globally in time. Let (ω,ω_t) be the solution to (<ref>) with initial data (ω_0,ω_1). Then (ω,ω_t) exists global in time. Moreover, for any δ_1<δ/2, ω satisfies (ω,ω_t)_H^1/2+δ_1× H^-1/2+δ_1<ϵ ω_4 < ϵ where ϵ>0 is some fixed small constant. Suppose the solution u to the original equation (<ref>) exists on time interval I. We will now consider v=u-ω on I. Denote E(t) to be the energy of v for t∈ I. By (<ref>), we have E(0)≲ s^δ-1/2 Our goal now is to show that the energy of v stays bounded throughout I. Suppose that is true, then by local well-posedness theory, v can be extended beyond I and global well-posedness follows. In order to estimate E(t), we take time derivative for the purpose of applying the fundamental theorem of calculus. In fact, v satisfies the following equation v_tt-Δ v+v^3=𝒩 where 𝒩=𝒪(v^2ω+vω^2) And the derivative of E[v](t) reads as dE(t)/dt=<𝒩,v_t>≲|∫ v^2ω v_t|+|∫ vω^2 v_t | Let us look at |∫ vω^2 v_t | first. We can estimate it by |∫ vω^2 v_t | ≲v_6v_t_2ω_6 ≲ω_6^2E(t) If the other term |∫ v^2ω v_t| was not present, the Grönwall inequality will give us E(t)≲ e^ω_26. However, the term |∫ v^2ω v_t| cannot be estimated in the same fashion, as it will yield a E^3/2(t) which cannot be estimated by Grönwall inequality. 
To combat this, we introduce the following modified energy: Under the previous settings, define in terms of v, ℰ(t)=E(t)-c_1M_1(t)-c_2M_2(t)-c_3M_3(t)-c_4M(t) where M_j and M are defined in section 3, with some 0<α<1 and 0<α<α to be determined for M_3 and M. c_j>0 are small constants to be determined. There exist c_j>0 small enough, such that ℰ(t) ∼ E(t) By Hölder inequality and the fact that |∇ a_j| are uniformly bounded, we have for j=1,2,3: |M_j(t)|≲ E(t) For M(t), recall that M(t)=-∫ log(r)χ(r<1)(v_t∇ a_4·∇ v + v_t v Δ a_4/2) Combined with the fact that Δ a_4∼ r^-α and |∇ a_4|∼ r^1-α on r<1, we have |M|≲ E(t). Thus proposition holds for c_j small enough. Now we compute the time derivative of ℰ(t), which reads as following: dℰ(t)/dt =dE(t)/dt-c_1dM_1(t)/dt-c_2dM_2(t)/dt-c_3dM_3(t)/dt-c_4dM(t)/dt = <𝒩,v_t> - c_1( ∫ a_1^”(r)|∇ v|^2 + 1/4v_4^4-∫𝒩∇ a_1 ·∇ v - 1/2∫𝒩v ) - c_2( ∫ a_2^”(r)|∇ v|^2 + 1/4v/r^1/4_L^4_x(r≤ 1)^4 + v^4_L^4_x(r>1) + 1/4∫ v^2(-Δ^2 a_2) -∫𝒩∇ a_2 ·∇ v - 1/2∫𝒩vΔ a_2 ) - c_3( ∫ a_3^”(r)|∇ v|^2 + 1/4v/r^α/4_L^4_x(r≤ 1)^4 + v^4_L^4_x(r>1) + 1/4∫ v^2(-Δ^2 a_3) -∫𝒩∇ a_3 ·∇ v - 1/2∫𝒩vΔ a_3 ) - c_4( 1/2∫ (∇ log(r)·∇ a_4)χ_≤ 1|v_t|^2 + 1/2∫ (∇ log(r) ·∇ a_4)χ_≤ 1|∇ v|^2 + ∫ log(r)χ_≤ 1a_4^”(r)|∇ v|^2 . - 1/4∫ (∇ log(r)·∇ a_4 - log(r)Δ a_4)χ_≤ 1v^4 - 1/4∫ v^2χ_≤ 1(Δ log(r) Δ a_4 + 2∇ log(r)·∇Δ a_4 + log(r) Δ^2 a_4) . - ∫ log(r)χ_≤ 1∇ a_4·𝒩∇ v - ∫ log(r)χ_≤ 1Δ a_4 𝒩v) For simplicity, we will denote I_j(j=1,2,3,4) to be the expression inside the brackets associated with c_j. There exists some 0<α<α<1 and small c_j>0, whose choices depend on the size of δ and (u_0,u_1)_H^1/2+δ× H^-1/2+δ, such that if s>0 is small enough (i.e. the high frequency piece ω is small enough), we have dℰ/dt≲r^1/2ω^2_L^∞_x(r>1)ℰ(t) + ω_4^4 Note that the negative terms in (<ref>) include -c_1(∫ a_1^”(r)|∇ v|^2+1/4v_4^4) - c_2(1/4v/r^1/4_L^4_x(r≤ 1) + 1/4v_L^4_x(r>1) + 1/4∫ v^2(-Δ^2 a_2)) - c_3(∫ a_3^”(r)|∇ v|^2 + 1/4v/r^α/4_L^4_x(r≤ 1) + 1/4v^4_L^4_x(r>1) + 1/4∫ v^2(-Δ^2 a_3)) - c_4/2∫ (∇ log(r)·∇ a_4)χ_≤ 1|v_t|^2 ≲ -c_1(∇ v^2_L^2_x(r≤ 1) + r^1/2e^-r∇ v^2_L^2_x(r≤ 1) + v_4^4 ) - c_2v/r^1/4^4_L^4_x(r≤ 1) - c_3 ( r^-α/2∇ v^2_L^2_x(r≤ 1) + r^-2+α/2v^2_L^2_x(r≤ 1)) -c_4 r^-α/2log^1/2(r)v_t^2_L^2_x(r≤ 1) It suffices to show that the remaining terms either can be absorbed by the negative terms, or are bounded by r^1/2ω_L^∞_x(r>1)^2ℰ(t). §.§ The <𝒩,v_t> term |<𝒩,v_t>| ≲ω v^2 v_t_1 + ω^2 v v_t_1 = ω v^2 v_t_L^1_x(r≤ 1) + ω v^2 v_t_L^1_x(r>1) + ω^2 v v_t_L^1_x(r≤ 1) + ω^2 v v_t_L^1_x(r>1) The four terms can be estimated separately by ω v^2 v_t_L^1_x(r>1) ≲v_t_2v_4^2ω_L^∞_x(r>1)^2 ≲ϵv_4^4 + C_ϵr^1/2ω_L^∞_x(r>1)^2ℰ(t) ω v^2 v_t_L^1_x(r≤ 1) ≲r^1/2+α/2ω_L^∞_x(r≤ 1)v/r^1/4^2_L^4_x(r≤ 1)r^-α/2v_t_L^1_x(r≤ 1) ≲ϵv/r^1/4^4_L^4_x(r≤ 1) + C_ϵr^1/2+α/2ω^2_L^∞_x(r≤ 1)r^-α/2v_t_L^2_x(r≤ 1)^2 If ϵ>0 is small enough depending on c_2, then the first term on the RHS of (<ref>) is absorbed by the negative terms provided by I_2. For the C_ϵ term, note that r^-α/2v_t_L^2_x(r≤ 1) is present in the negative terms (<ref>). It suffices to show that the coefficient C_ϵr^1/2+α/2ω_L^∞_x(r≤ 1)<<c_3, so that it can be absorbed by (<ref>). This can be proved by interpolating between the radial Sobolev inequality and Morrey's inequality. In fact, rω_L^∞_x(r≤ 1)≲_δ_1ω_H^1/2+δ_1 ω_∞≲ω_H^2_x Picking δ_1>0 to be small enough and interpolating, we have r^1/2+α/2ω_L^∞_x(r≤ 1)≲ω_H^1/2+δ/2≲ s^1/4δ Since s is small, (<ref>) is completely absorbed by (<ref>). 
Next, the two ω^2 v v_t terms can be estimated by ω^2 v v_t_L^1_x(r>1) ≲r^1/2ω_L^∞_x(r>1)^2v_2∇ v_2 ≲r^1/2ω_L^∞_x(r>1)^2∇ v_2^2 + r^1/2ω_L^∞_x(r>1)^2 v_t_2^2 and ω^2 v v_t_L^1_x(r≤ 1) ≲r^1/2-δ_2ω_L^6_x(r≤ 1)r^-1/2+δ_2v_L^6_x(r≤ 1)r^1/2-δ_2ω_L^6_x(r≤ 1)r^-1/2+δ_2v_t_L^2_x(r≤ 1) ≲r^1/2-δ_2ω_L^6_x(r≤ 1)^2 r^-1/2+δ_2v_L^6_x(r≤ 1)^2 + r^1/2-δ_2ω_L^6_x(r≤ 1)^2 r^-1/2+δ_2v_t_L^2_x(r≤ 1)^2 We'll first bound the ω term, which serves as coefficient. In fact, interpolate between the following inequalities rω_L^∞_x(r≤ 1)≲ω_H^1/2+δ/2 ω_3 ≲ω_H^1/2 We get r^1/2ω_L^6_x(r≤ 1)≲ω_H^1/2+δ/4 Interpolate that again with ω_6 ≲ω_H^1 we get r^1/2-δ/2(2-δ)ω_L^6_x(r≤ 1)≲ω_H^1/2+δ/2≲ s^1/4δ We will just choose δ_2=δ/2(2-δ) in (<ref>). Now let us look at the v terms. r^-1/2+δ_2v_t can be absorbed by (<ref>), providing we choose α=1-2δ_2 and s small enough depending on c_3. As for the r^-1/2+δ_2v_L^6_x(r≤ 1) term, it can be controlled by some more interpolation. There exists 0≤θ,γ≤ 1, such that r^-1/2+δ_2_L^6_x(r≤ 1)≲r^-1-α/2v_L^2_x(r≤ 1)^θ∇ v_L^2_x(r≤ 1)^1-θ Providing α<1 is big enough(very close to 1), there exists b∈ℝ, δ_3>0, q>1, such that the following interpolations hold for some 0≤θ,γ≤ 1 r^-1/2+δ_2v_L^6_x(r≤ 1)≲r^-1-α/2v_L^2_x(r≤ 1)^θr^bv_L^q_x(r≤ 1)^1-θ r^bv_L^q_x(r≤ 1)≲∇ v_L^2_x(r≤ 1) The latter inequality comes from interplolating between rv_L^∞_x(r≤ 1)≲v_H^1/2+δ_3 v_L^3/δ_3_x≲v_H^3/2-δ_3 The desired inequality then follows by choosing suitable θ,γ. Combining (<ref>), (<ref>), then (<ref>) is bounded by s^1/2δr^-1-α/2v_L^2_x(r≤ 1)^2 + s^1/2δ∇ v_L^2_x(r≤ 1)^2 + s^1/2δr^-α/2v_t_L^2_x(r≤ 1)^2 all of which can be absorbed by (<ref>) providing s is small enough depending on c_1,c_3,c_4. Now, we conclude our bound for |<𝒩,v_t>| by combining (<ref>), (<ref>), (<ref>), (<ref>). They are either absorbed by (<ref>) or bounded by r^1/2ω_L^∞_x(r>1)^2. §.§ The ∫ a_2^”(r)|∇ v|^2 term Since a_2^”(r)>0 when r>1, we need only worry about r≤ 1. We have a_2^”(r)≳ -r on r≤ 1, thus this term can be absorbed by c_1∫ a_1^”(r)∇ v|^2, providing c_2<<c_1. §.§ Other terms containing 𝒩 These terms are bounded by 𝒩∇ v_L^1_x + 𝒩v_L^1_x + 𝒩v/r_L^1_x(r≤ 1). 𝒩∇ v_L^1_x can be estimated in the same way as 𝒩v_t_1. The 𝒩v_1 is bounded by 𝒩v_1 ≲ϵv_4^4 + C_ϵω_4^4 Note that ϵv_4^4 can be absorbed by (<ref>) and ω_4^4 is integrable in time thus ready for Grönwall inequality. Finally, by Hölder inequality and Hardy inequality, 𝒩v/r_L^1_x(r≤ 1) ≲v^3ω/r_L^1_x(r≤ 1) + v^2ω^2/r_L^1_x(r≤ 1) ≲v_L^4_x^2v/r_L^2_x(r≤ 1)rω_L^∞_x(r≤ 1) + v^2/r^2+α_L^1_x(r≤ 1)^2r^1/2+α/2ω_L^∞_x(r≤ 1)^2 ≲ϵv_4^4 + C_ϵω_H^1/2+δ/2^2∇ v_L^2_x(r≤ 1)^2 + ω_H^1/2+δ/2^2 v/r^1+α/2_L^2_x(r≤ 1)^2 We used Hardy inquality in r≤ 1, which follows directly from the Euclidean Hardy inequality and the locally Euclidean property of . Note that ω_H^1/2+δ/2≲ s^δ/4. Choosing ϵ and s small enough depending on c_j will cause terms in RHS of (<ref>) to be absorbed by (<ref>). §.§ Non negative terms in I_4 These terms can all be bounded by ∫_r≤ 1 r^-α|log(r)|(|∇ v|^2 + |v|^4) + ∫_r≤ 1 r^-2-α|log(r)|v^2 Since α<α, we have r^-α|log(r)|≲ r^-α and r^-2-α|log(r)|≲ r^-2-α on r≤ 1. Providing c_4<<c_1,c_2 and c_3, then these terms can be absorbed into (<ref>). To conclude the proof of Proposition <ref>, collect all the estimates derived in the proof and plug them into (<ref>). For suitably chosen constants c_j and s, we get what we desired. Note that s is required to be small, in order for terms like ω_H^1/2+δ/2 to be smaller than some fixed constant. 
Since ω is assumed to be in H^1/2+δ, by Bernstein's inequalities the choice of s need only to depend on the size of ω_H^1/2+δ, not on the profile. The proof of Proposition <ref> yields another estimate v_4^4 ≲dM_1/dt + dM_2/dt + dM_3/dt + ω_L^∞_x(r>1)^2 + ω_4^4 The idea is to write dM_1/dt + dM_1/dt + dM_1/dt. Then we either discard the positive terms or use them to control error terms in the similar fashion as in Proposition <ref>. The desired result then follows. Now we are ready to prove Theorem 1.1, which follows directly from Proposition <ref> by Grönwall inequality. Proof of Theorem 1.1 Suppose u solves (<ref>) and I∈ℝ is the maximum interval of existence. Decompose u into ω+v. Integrate the result of Proposition <ref> in time, we get ℰ(t) ≲ℰ(0) + ω_4[0,t]^4 + ∫_0^t r^1/2ω(s)_L^∞_x(r>1)^2ℰ(s)ds where ℰ(0) ≲ s^-1/2+δ(u_0_H^1/2+δ^2 + u_1_H^-1/2+δ^2) ω_4(ℝ)^4 ≲ s^2δ(u_0_H^1/2+δ^4+u_1_H^-1/2+δ^4)<<ℰ(0) By Corollary <ref> and Grönwall inequality, we get ℰ(t) ≲ℰ(0)e^r^1/2ω_2∞(r>1)^2≲ s^-1/2+δ(u_0_H^1/2+δ^2 + u_1_H^-1/2+δ^2) Thus the energy of v E(t)∼ℰ(t) is uniformly bounded on I. This forces I to be equal to ℝ. To prove scattering, it suffices to show that v_4(ℝ)<∞, since u=ω+v. By Coroallary <ref>, v_4(ℝ)^4 ≲sup_t (|M_1(t)|+|M_2(t)|+|M_3(t)|) + r^1/2ω_2∞^2 + ω_4^4 ≲sup_t v_t_2∇ v_2 + vv_t/r_2 + s^1/2δ(u_0,u_1)^2_H^1/2+δ× H^-1/2+δ ≲sup_t E(t) + s^1/2δ(u_0,u_1)^2_H^1/2+δ× H^-1/2+δ ≲ s^-1/2+δ(u_0,u_1)^2_H^1/2+δ× H^-1/2+δ Since the choice of s depends only on the size of u_0_H^1/2+δ and u_1_H^-1/2+δ, there exist a function f:ℝ×ℝ→ℝ, such that v_L^4_t,x(ℝ)^4 ≤ f(u_0_H^1/2+δ,u_1_H^-1/2+δ) alpha
Detecting gamma rays with high resolution and moderate field of view: the air Cherenkov technique
Juan Cortina, CIEMAT, Avda. Complutense 40, Madrid, Spain, [email protected]
Carlos Delgado, CIEMAT, Avda. Complutense 40, Madrid, Spain, [email protected]
======================================
The Imaging Atmospheric Cherenkov technique makes it possible to detect very high energy gamma rays, from a few tens of GeV to hundreds of TeV, with ground-based instrumentation. At these energies a gamma ray generates a shower of secondary particles when it enters the Earth's atmosphere. These particles emit Cherenkov light in the visible and near-UV ranges. The Cherenkov light produced by the shower reaches the ground as a short pulse of a few nanoseconds duration over a large circle of around 100 m radius (a light pool). This pulse of light can be imaged with telescopes equipped with fast photodetectors and electronics. Combining the images of several telescopes distributed over this light pool allows the gamma-ray energy and incident direction to be estimated, and gamma rays to be separated from the strong background of charged cosmic rays. The collection area of an array of a few telescopes is of the order of the area of the light pool, i.e. >10^5 m^2. Such an array reaches a sensitivity of a few millicrabs at 100 GeV energies in 50 hours of observations, an angular resolution of ∼5 arcmin and a spectral resolution of ∼10%. This chapter describes the technical implementation of Imaging Atmospheric Cherenkov Telescopes and how the data are analyzed to reconstruct the physical parameters of the primary gamma rays.
§ KEYWORDS
IACTs, Cherenkov Telescopes, very high energy gamma rays, cosmic rays, instrumentation
§ INTRODUCTION
The imaging atmospheric Cherenkov technique was pioneered by the Whipple Collaboration in the USA<cit.>. After more than 20 years of development, Whipple discovered the first VHE gamma-ray source, the Crab nebula, in 1989<cit.>. The Crab nebula is one of the most powerful sources of very high energy gamma rays and is often used as a "standard candle". Modern instruments, which use multiple telescopes to follow the cascades from different perspectives and employ fine-grained photon detectors to enhance the images, can detect sources with a flux below 1% of the Crab nebula flux. Finely pixelated images were first employed on the French CAT telescope<cit.>, and the use of "stereoscopic" telescope systems, which provide images of the cascade from different viewpoints, was pioneered on the European HEGRA IACT system<cit.>. For summaries of the achievements of recent years and the scientific case for a next-generation very-high-energy gamma-ray observatory, see <cit.>.
Irrespective of the technical implementation details, as far as its performance is concerned, an Imaging Atmospheric Cherenkov Telescope (IACT) is primarily characterised by its light collection capability, i.e.
the product of mirror area, photon collection efficiently and photon detection efficiency, by its field of view and by its pixel size, which limits the size of image features which can be resolved. The larger the light collection efficiency, the lower the gamma-ray energy that can be successfully detected. The optical system of the telescope should obviously be able at achieve a point spread function matched to the pixel size. The electronics for signal capture and triggering should provide a bandwidth matched to the length of Cherenkov pulses of a few nanoseconds. The performance is also dependent on the triggering strategy; Cherenkov emission from air showers has to be separated in real time from the high flux of night sky background photons, based on individual images and global information in case the showers are observed from several viewpoints. In addition, the huge data stream from IACTs does not allow to deal with untriggered recording easily. The collection area of an array of 2-4 telescopes is of the order of the area of the light pool, i.e. >10^5m^2. This collection area may grow by simply adding more telescopes to the array. An array of a 2-4 telescopes of the 10 m diameter mirror class reaches a sensitivity of a few millicrabs at 100 GeV energies in 50 hours of observations, an angular resolution of ∼5 arcmin and a spectral resolution of ∼10%. Larger mirrors (20-30 m) bring the energy threshold of the array to few tens of GeV while increasing the number of 10 m diameter telescopes to 10-20 can improve the sensitivity well under 1 millicrab. However telescope optics prevents the field-of-view (FOV) of IACTs to exceed 8-10^∘ diameter and the Cherenkov light of atmospheric showers can only be detected during the astronomical night, typically without Moon, and with good weather conditions, so the duty cycle of IACTs rarely exceeds 15%. These collection areas are orders of magnitude larger than the collection areas of satellite-based detectors<cit.>. This makes IACTs the instruments of choice in the energy range between a few tens of GeV and tens of TeV. During the last two decades IACTs have opened a new astronomical window: the VHE γ-ray range offers a new tool to study sources of non-thermal radiation such as supernova remnants, star forming regions or the surroundings of compact objects (jets and winds around black holes and pulsars). VHE γ-ray astronomy also allows to study the extragalactic background light and intergalactic magnetic fields or to search for dark matter and effects of quantum gravity<cit.>. At larger energies, from hundreds of TeV to a few PeV, one requires even larger collection areas, larger FOVs and/or a duty cycle approaching 100%, as offered by detector arrays sampling the particles in the atmospheric shower. However IACTs still offer unbeatable angular and spectral resolutions. § AIR SHOWER PROPERTIES AND IMAGING When a very high energy gamma-ray interacts with the Earth's upper atmosphere, it converts into an electron-positron pair. Subsequent Bremsstrahlung and pair production interactions generate an electromagnetic shower in the atmosphere, in which the total number of electrons, positrons and photons approximately doubles every log(2) times the radiation length for Bremsstrahlung in the atmosphere (roughly 37.2 g cm^-2)<cit.>. The splitting of the energy of the primary gamma-ray stops when the losses due to ionization of the secondary electrons and positrons dominate over the other processes. 
This happens when the average lepton energy is about E_0=84 MeV and, as a result, the maximum number of leptons in the shower is E/E_0, where E is the energy of the primary. Given their energy, the electrons and positrons move faster than the speed of light in air, thus emitting Cherenkov radiation<cit.>. The maximum intensity of this emission occurs when the number of particles in the cascade is largest, at an altitude of ∼10 km for primary gamma-ray energies of 100 GeV to 1 TeV near the zenith. During their propagation these particles undergo multiple Coulomb scattering, distributing then in the direction perpendicular to the shower propagation, increasing the spread of their individual direction of propagation. This together with the Cherenkov angle, which amounts to about 1.4^∘, result of a pool of photons nearly uniformly distributed within a circle of about 130 m of radius around the extrapolation of the primary trajectory to ground, with a density of about 100 photons m^-2s^-1 for a primary gamma ray of about 1 TeV., as illustrated in Fig. <ref>, These photons arrive to the ground in a single pulse of few nanoseconds of duration. An IACT detects these photons and determines and kinematics of the primary. The detection technique is based on a simple concept: photons are collected in a large mirror which focuses them in a fast camera with photodetectors coupled to digital samplers[Actually the first detection of Cherenkov from an air shower was done with a free running analog oscilloscope and a single photomultiplier.]. However, as usual, the devil is in the details: even if detection is relatively simple, rejection of cosmic ray showers, night sky photons and determination of the gamma-ray kinematics are challenging problems. Cosmic rays (mainly protons and He nuclei) collide with atmospheric nuclei to generate secondary hadronic or leptonic particles. After further interactions they develop several electromagnetic sub-showers and muons. Compared to a gamma-ray shower and due to the larger transverse momentum of hadronic interactions, cosmic-ray shower particles spread away from the incident direction. The corresponding shower image at an IACT is broader and more irregular. This fact is key to cosmic-ray rejection techniques. In addition, IACTs implement the stereoscopic imaging technique, illustrated in Fig. <ref>: two or more large convex reflector placed within the light pool, focus the Cherenkov light of a single shower onto the same number of cameras equipped with photodetectors. These cameras record the image of the shower from different perspectives, and the geometrical properties of these images allow to determine the properties of the primary particle. In particular, the crossing point of the longitudinal axis of these images projected in to the sky provides a determination of the direction of the primary, and the total recorded number of photons is directly linked with the energy. In order to accurately identify and reconstruct the primary particle properties it is necessary to make use of complex Monte Carlo simulations of the shower development and the detector response. In order to implement this technique it is necessary to design a telescope with a very large optical aperture that allows to collect as many Cherenkov photons as possible, and a large FOV since the shower images can have an angular extend of up to few degrees and are shifted another few degrees from the source position in the sky. 
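The figures quoted above can be tied together with a back-of-the-envelope estimate. The sketch below (a toy Heitler-type calculation, not an air-shower simulation) uses only the approximate values given in the text, plus an assumed mirror area and an assumed overall photon detection efficiency, to estimate the number of particles at shower maximum and the photoelectrons recorded by a single telescope inside the light pool; the discussion of pointing and field of view continues after the sketch.

# Back-of-the-envelope numbers for the quantities quoted above.
# Toy estimates only -- real analyses rely on full air-shower simulations.
import math

E_CRIT_GEV = 0.084        # ~84 MeV critical energy of the EM cascade in air (text value)
DENSITY_1TEV = 100.0      # ~100 Cherenkov photons per m^2 at ground for 1 TeV (text value)
POOL_RADIUS_M = 130.0     # radius of the Cherenkov light pool (text value)
MIRROR_AREA_M2 = 100.0    # assumed 10-m class reflector
PDE = 0.15                # assumed overall photon detection efficiency

def shower_max_particles(e_gev):
    """Maximum number of e+/e- in the cascade, roughly E / E_critical."""
    return e_gev / E_CRIT_GEV

def detected_photoelectrons(e_gev):
    """Photoelectrons recorded by one telescope inside the light pool,
    assuming the photon density scales linearly with the primary energy."""
    density = DENSITY_1TEV * (e_gev / 1000.0)   # photons / m^2 at ground
    return density * MIRROR_AREA_M2 * PDE

if __name__ == "__main__":
    for e_gev in (100.0, 1000.0):
        print(f"E = {e_gev:6.0f} GeV | N_max ~ {shower_max_particles(e_gev):.1e} | "
              f"~{detected_photoelectrons(e_gev):.0f} phe per telescope")
    print(f"light-pool area ~ {math.pi * POOL_RADIUS_M**2:.0f} m^2")

For a 100 GeV gamma ray this gives of order a hundred photoelectrons per telescope, consistent with the numbers quoted later for the signal extraction.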
In addition, it is common to point slightly off-source to use a source-less portion of the sky within the FOV to estimate the background due to diffuse gamma rays and hadronic cosmic rays. These constraints lead to designs of the optics explained later in this chapter. To cover the required FOV cameras of 10-m type IACTs are usually large, with sizes measured in meters. Their focal plane is instrumented with fast detectors, with response times of order of nanosecond, high photodetection efficiencies (20% or more), large detection areas, and very clean amplification, that allows to resolve single phe (phe) signals. These are described later in this chapter together with the associated electronics. The size of focal plane pixels is a parameter which requires careful optimisation in IACTs. Figure <ref> illustrates how a shower image is resolved at pixel sizes of 0.10^∘ and of 0.20^∘. The gain due to the use of small pixels depends strongly on the analysis technique. In the classical second-moment analysis (Hillas analysis<cit.>), performance seems to saturate for pixels smaller than 0.15^∘. On the other hand, analysis techniques which use the full image distribution can extract the information contained in the well collimated head part of high-intensity images, as compared to the more diffuse tail, and benefit from smaller pixel sizes. Pixel size also influences trigger strategies, since gamma-ray images are contiguous for large pixel sizes, allowing straight-forward topological triggers compared with the case with smaller pixel sizes. § TELESCOPE OPTICS Let us first consider the requirements for the telescope optics: * As a general rule of thumb a γ-ray atmospheric shower produces 1 photon per m^2 and GeV. This means that one needs mirror collection areas of hundreds of m^2 to study γ-rays of ≤100 GeV energy. As a result, IACTs currently have the largest optical mirrors in the world. * However, as we have seen, shower images have an angular extent of tens of arcminutes and undergo intrinsic fluctuations due to the shower development in the atmosphere. Consequently, IACTs only require an optical point spread function (PSF) roughly of the size of a pixel, i.e. about 5 ∼arcminute in diameter. This contrasts sharply with the optical requirements of telescopes in the optical range of light, for which the PSF must be better than 1 arcsecond, and allows to relax the quality requirements for the mirrors and mirror support structure. * The FOV must have an angular diameter of at least 2 arcdeg to contain the images of low energy showers produced by point-like γ-ray sources and at least 4 arcdeg if the telescope is pointed slightly away from the source (”wobble mode”) and the intention is to observe high energy showers. Larger FOVs are needed if one plans to observe sources with an extent of a few arcmin or to perform sky surveys. * As mentioned above, time is also of the essence in the identification and reconstruction of showers. The telescope optics must also satisfy a requirement for photon arrival time: the shape of the reflector must be isochronous to very few nanoseconds. Most IACTs follow a simple optical design. Light is collected by a convergent reflecting mirror surface (”reflector”) into a photodetector camera located at the focal plane. Like all large optical telescopes, IACT reflectors are multifaceted. Mirror facets have a typical surface area of 0.25 - 2 m^2. A spherical reflector shows too poor an optical performance. Two other reflector concepts are used: * A parabola. 
The overall shape of the reflector is parabolic, while each facet has a spherical shape with a curvature corresponding to the local curvature of the parabola. All facets in a ring around the center of the reflector are equal. In principle, the individual facets should be aspherical, but in practice they are manufactured with the same radius of curvature in the sagittal and tangential directions. A parabolic reflector has an excellent PSF at the center of the FOV, but suffers from off-axis coma aberration. The main advantage of a parabolic is its excellent temporal resolution. * A ”Davies-Cotton” design. Many IACTs have opted for the Davies-Cotton design<cit.>, which can only be applied when the reflector is multifaceted. Figure <ref> shows how this design compares to a spherical reflector. For a given focal length f, the radius of curvature of a spherical reflector R_sphere is 2× f and the normal vectors of the individual facets point to the center of the sphere. Instead, in a Davies-Cotton design, the reflector follows a global spherical shape with a smaller R_sphere = f whereas the individual facets have a constant R_facet = 2× f. However their normal vectors do not cross the center of the reflector sphere but a point at a distance 2× f along the optical axis. Figure <ref> illustrates how the tangential RMS changes with incident angle for different optical designs (the sagittal RMS does not change with the incident angle). The results are based on a ray tracing simulation <cit.>. For any of these optical designs increasing the ratio of focal length and reflector diameter (focal ratio, f/D) reduces aberrations but increases the cost and complexity of the telescope mechanics so IACTs do not exceed a focal ratio of 1.5. Such a simple telescope optics has significant advantages: the design is simple, production is less expensive and there is no loss of light in the secondary optical elements. However some IACTs have opted for more complex optics in order to achieve a larger FOV and to reduce the plate scale. The latter allows to use smaller photosensors, namely Silicon PMs. The so-called Schwarzschild-Couder design<cit.> has two aspheric mirrors that can be configured to correct for spherical and coma aberrations, achieving good optical quality over a large FOV of ∼10^∘ with a small focal ratio and plate scale. §.§ Mechanical structure Compared to telescopes in the visible range where optical precision is higher, IACTs can be built with relatively simple mechanics and without a protecting dome. This reduces the cost considerably. However both the mirrors and the mechanical structure must be particularly resistant to the harsh atmospheric conditions common in astronomical observatories, more specifically to high ice loads or strong wind gusts. Working outdoors also requires low-maintenance technical solutions. In addition some phenomena in VHE astrophysics are very fast. In particular, the prompt emission of gamma-ray bursts lasts only a few seconds. In general, IACTs are expected to re-point in about 1-2 minutes but some IACTs have been designed to re-point to any position on the sky in a time as short as 20-30 seconds. Acceleration during rapid re-pointing introduces additional forces in the structure. Except for the initial smaller (<3 m) models, IACT are typically equipped with alt-azimuth mounts. 
The structure can be implemented as a positioner (tower, head and yokes) and a reflector support that rotates in azimuth and elevation, or alternatively as a space frame substructure that rotates in azimuth and a reflector support that rotates only in elevation. The reflector support can be implemented as a truss network or a space frame. The space frame elements are fabricated from steel or carbon fiber reinforced plastic (CFRP, crystalline carbon filaments inside a resin such as epoxy). CFRP is much stronger than steel in terms of strength to weight ratios (although an actual comparison should take into account the geometry of the carbon fiber filaments). That means that CFRP structures are significantly lighter than steel ones. The camera holds to the mechanical structure either through a steel truss structure or with a parabolic aluminium or CFRP arch in the vertical plane and additional tension cables. They all hold the camera to points at the edge of the optical support structure. The mechanical accuracy of the structure must be significantly improved in dual-mirror telescopes. All implementations of dual-mirror telescopes attach the secondary mirror to the optical support structure with a lattice structure, and some of them add an additional support element closer to the optical axis. They are designed to achieve minimum shadowing, control stray light and, since the IACTs are not covered by a dome during the day, protect them from sunlight during daytime parking. The mirror facets hold to the optical support structure normally through two positioning points and a floating one. The facet is aligned using the positioning support points, which are typically implemented as electric step motors (actuators) with a range of a few mm and steps of μm. Facets may be aligned at installation and every few months/years to correct for slow degradation in the mechanics (typically on rigid steel structures) or every 10-20 minutes to correct for changing gravity force, using an Active Mirror Control system (typically on less rigid CFRP structures). Active Mirror Control (AMC) calibration is based on a number of ancillary items. A CCD camera is installed in the center of the reflector pointing to the photodetector camera. A diffusing target can be placed near the focal plane. When the telescope is pointed at a bright star, the reflections of the star on the individual facets are identified on the target by the CCD camera. The reflections are referenced to LEDs installed on the edges of the camera. The AMC software aligns each individual facet until all star reflections fall at the center of the camera. This calibration is normally run for various elevation angles and the actuator positions are recorded and used during standard observations. The calibration must be repeated on time scales of years. The same or a second camera is used to monitor the PSF during regular observations or a few times throughout the night to identify technical problems with the AMC. Other schemes for the AMC have been or are being tested, e.g., installing a laser or CCD camera on each facet. Dual-mirror telescopes add the additional complication of aligning the secondary mirror independently of the primary. The accuracy of the secondary mirror is higher because it is located close to the camera. In general, this requires more degrees of freedom in facet alignment, implemented, for example, by Steward platforms. § MIRROR TECHNOLOGY IACT reflectors are among the largest telescope reflectors in the world. 
That makes cost a significant driver in their design: one typically aim at a cost < 2000 Euro/m^2. At the same time they must be as light as possible and resilient to environmental conditions. On the positive side optical quality is not as high as for telescopes in the visible range. Reflectivity should be as high as possible, typically exceeding 90%, but mainly in the range from 300 to 500 nm where most of the Cherenkov light reaches the ground. Telescopes with dual optics pose extra difficulties due to smaller radii of curvature and aspherical shapes. In general mirror facets have a high aspect ratio (>25), with a thickness of a 5-10 cm and 0.5 - 2 m side to side. Facets have either square or hexagonal shape. Mirrors are either ground from a raw blank and aluminized, or manufactured as a composite sandwich structure. Grinding is typically more expensive, takes longer to produce and is heavier, so composite mirrors are more and more in use. A sandwich mirror is a structure with a face plate, a light weight honeycomb core providing stiffness, and a base plate anchored to the telescope structure. The face plate may be machined to the desired curvature, glued to the core and machined. Alternatively, a thin glass sheet of a few mm with its reflective surface may be pressed against a mold using vacuum suction to reach the required curvature (“cold slumping”). The sheet is later glued to the honeycomb core. The reflective layer in the face plate was traditionally evaporated aluminium although dielectric layers are also used nowadays (e.g. HfO_2, TiO_2, SiO_2). An advantage of a multi-layer coating is that the reflectivity spectral response can be adjusted, for instance to reject night sky background at longer wavelengths. The reflective layer is applied on top of a thicker layer of glass or metal and is typically protected from the environment (mechanically and chemically) using an external layer of ∼100 nm of quartz. § TELESCOPE CONTROL, EVENT RECONSTRUCTION AND DATA PRODUCTS Similar to other telescopes, the control system of an IACT has to deal with a number of subsystems, namely the data acquisition (DAQ), trigger, telescope drive, camera electronics and mechanics control, mirror alignment control, starguider and other auxiliary subsystems (power, absolute clock, condition monitoring, atmospheric monitoring etc). Response times in the control system are typically of a few ms. Most of the IACTs operate in an array so a central array software is generally implemented on top of the individual telescope control. An accuracy of target pointing and tracking better than a few tens of arcseconds is a usual requirement. As a first step the pointing position is monitored using angular encoders on both telescope axes, which ensure real-time that there are no significant tracking errors. But, to achieve the required precision, the telescope needs a pointing model that typically involves the drive and, in the case of IACTs with AMC, also the facet alignment model. The pointing position is measured using a starguider, implemented as a CCD at the center of the telescope reflector that registers simultaneously the position of the camera and the position of a bright stars in the sky. Typically γ-ray data are corrected offline for deviations in time scales of seconds to minutes using starguider data whereas the telescope pointing model is updated only on time scales of months. 
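The starguider corrections and the pointing model mentioned above are typically encoded in a small analytic "bending" model fitted to observations of bright stars. The following sketch is a deliberately minimal illustration, with only encoder zero offsets and an elevation-dependent flexure term, all with made-up amplitudes; it is not the model of any particular telescope, which would contain many more fitted terms.

import math

def corrected_pointing(az_deg, el_deg,
                       az_offset_deg=0.01, el_offset_deg=0.02,
                       flexure_deg=0.03):
    """Apply a minimal pointing ('bending') model to nominal coordinates.

    az_offset_deg, el_offset_deg : encoder zero-point (index) errors
    flexure_deg                  : amplitude of the gravitational sag of the
                                   camera support, assumed to scale as cos(el)
    Real models contain more terms (axis tilts, non-perpendicularity, ...),
    fitted to a grid of star observations taken with the starguider CCD.
    """
    el_corr = el_deg + el_offset_deg + flexure_deg * math.cos(math.radians(el_deg))
    az_corr = az_deg + az_offset_deg
    return az_corr, el_corr

# Example: size of the correction at low and high elevation
for el in (20.0, 70.0):
    print(corrected_pointing(180.0, el))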
The response of the camera photodetectors and front-end electronics is monitored using a light calibration source that flashes the camera uniformly. This calibrator generates fast pulses mimicking Cherenkov light pulses in time profile and brightness and is typically implemented with a laser. Calibration pulses may be interleaved with air shower pulses. Random triggers ("pedestal events") are recorded as well to determine the level of background noise. The telescope subsystems are typically monitored in time scales of 1-10 seconds and some key parameters that are necessary for the γ-ray processing enter a "control data stream". Air shower, calibration and pedestal events constitute a separate high-throughput (>GByte/s) "raw data stream". Raw events are delivered by the DAQ and contain information from each pixel. The Cherenkov pulse is typically digitized at the front-end electronics with sampling rates up to 2 GSample/s. The pixel information included in a raw event may vary from just the total integrated charge and one single arrival time, to the whole digitized waveform over a period of a few tens of ns. In addition the event includes an event number, a time tag with a precision of at least a few hundred ns, a tag with information of trigger type (shower, calibration, pedestal, stereo, single-telescope etc) and other control and auxiliary tags. § PHOTOSENSORS The photosensors most commonly used in IACTs are photomultipliers with alkaline photocathodes and dynode chain-based electron multipliers, which provide ultrafast signals and allow measuring single phe. They can reach a relatively high peak quantum efficiency (QE), of about 40%, with a dynamic range of about 5,000 phe. However these are not the only possible photosensors that can be employed in IACTs and new devices, like solid-state Silicon Photomultipliers (SiPM), are becoming viable candidates for new telescopes. Generally speaking, any device is a viable photosensor if it fullfils the following criteria: * The photosensor should allow to determine the arrival time of the photons to each pixel with sub-nanosecond precision for light pulses of sufficient large amplitude. This is necessary to avoid distorting the intrinsic time evolution of the recorded shower. * The Cherenkov spectrum peaks at 350 nm and has a cutoff below 300 nm, therefore the peak of the efficiency of the photosensor must match this wavelength range. In addition, given that the NSB contribution increases with the wavelength, it is desirable that the sensor efficiency drops to zero above 550nm[This is not completely correct: there are re-emission lines of Cherenkov radiation above 650 nm due to the rotation states of OH molecules.] to reduce the background. * The dynamic range of the sensor must be broad enough to accurately reconstruct showers initiated by gammas across a wide range of energies. Typically, a range that starts at one phe and extends to a few thousand phe provides a good balance between cost and performance, allowing for the reconstruction of showers over a range of three orders of magnitude in energy without the need for extrapolation of truncated signals. The dynamic range of the sensors must cover from one phe to few thousands of them, with good linearity (few percent) or with a non-linearity that can be corrected up to this level. 
* Cross talk between adjacent sensors must stay below the intrinsic fluctuations of the signal over the whole dynamic range. For photomultipliers this means keeping it below the 1% level; SiPMs can tolerate a substantially larger optical cross talk (10-20%), at the price of a correspondingly degraded charge resolution.
* The contribution of spurious signals to the trigger thresholds, and hence to the trigger rate, must be negligible. This is especially important for photomultipliers, in which ionized atoms trapped in their interior can give rise to large afterpulses long after the true phe have been detected. A similar phenomenon, optical cross talk, occurs in SiPMs, although the underlying mechanism is completely different<cit.>. The rate of spurious signals is typically required to be at the 10^-4 level or below with respect to the true signal.
* In order to reduce the variance of the signal, the uniformity of the response within a single sensor must be better than 10%. The same requirement can be extended to the uniformity between different sensors in the same camera.
* Since the sensors can be exposed to indirect sunlight during maintenance operations, or to indirect moonlight during observations, they have to survive strong illumination. For photomultipliers this survival requirement imposes a current limiter in the high-voltage supply, whereas SiPMs do not require such protection.
* The size of the sensor should match the typical angular size of the fluctuations of the shower images in the camera focal plane, and should not introduce dead areas in the camera. This can rarely be achieved with off-the-shelf sensors alone, but it can be obtained by coupling a light guide (Winston cone) to each sensor, designed to give a pixel FOV of about 0.1 degrees. Such a light guide also rejects most of the background light arriving from outside this FOV and keeps the dead area of the camera small, without requiring custom sensor geometries.
Apart from these criteria, other operational aspects have to be considered when selecting photosensors for IACTs. In particular, their lifetime, stability and cost are important in order to keep the performance of the camera up to expectations with a bounded maintenance cost.
§ CAMERA TRIGGER AND DAQ
Triggering in IACTs has two goals: to reduce the load on the data acquisition system to manageable levels, and to reject background events, mainly due to NSB fluctuations. IACTs are usually triggered in a staged manner. First, each Cherenkov camera produces a camera trigger signal by exploiting topological properties of the pixel signals. This camera trigger may itself be staged and is designed to minimize the probability of triggering on fluctuations of the NSB.
The signal is either sent to a centralized facility or to neighbouring telescopes to build a higher level trigger signal by exploiting the temporal coincidence of camera trigger images within the array of telescopes, the so-called stereo trigger or array trigger. The array trigger is sent back to the cameras to proceed recording the images. Therefore an individual telescope typically buffers the shower image from the moment the camera trigger is issued to the moment it receives the stereo trigger. This can be done without introducing dead times in scales of ms for digital buffering, or μs for analog buffering. Therefore, the kind of buffer has a strong influence in the trigger design. Regardless of the chosen implementation, the trigger system must be flexible and software-configurable, since operation modes vary from deep observations, where all telescopes follow the same source, to monitoring or survey applications, where groups of a few telescopes or even single telescopes point in different directions. §.§ Camera trigger The camera trigger must keep the trigger rate due to fluctuations of the NSB low. For this purpose, it exploits the recognition of the pattern due to the concentrations of Cherenkov signals in local regions of the camera. Nowadays this recognition is based in looking for a number of pixels above threshold, or a number of neighbouring pixels above threshold, within the camera. This is typically implemented by dividing the camera up into sectors, which must overlap to provide a uniform trigger efficiency across the camera. In an alternative approach the sum of all pixels signals in a patch of neighbouring pixels, capped to a maximum value to reduce the influence of afterpulsing, is formed, and a threshold is set to initiate a trigger. In both cases the implementation can be digital or analogue, although the second one is usually employed. The decision signal is sent in digital format, sometimes together with additional timing information, to neighbouring telescopes or to a central decision system to form the stereo trigger. §.§ Stereo trigger The stereo trigger schemes for systems of IACTs provide asynchronous trigger decisions, delaying individual telescope trigger signals by an appropriate amount to compensate for the time differences when the Cherenkov light reaches the telescopes, and scanning trigger signals for patterns of telescope coincidence. The time to reach a trigger decision and to propagate it back to neighbouring telescopes is of the order of μs. This can raise up to the ms scale when the signals have to be propagated to a central unit to take the decision. If the data is digitised and buffered with the individual camera trigger or with a lower level stereo trigger that combine individual triggers of neighbouring telescope, restrictions on array trigger latency of a centralized decision are greatly relaxed, and the decision can be software based. In this scheme, along with each local trigger an absolute timestamp with an accuracy of 1 ns is sent to the central decision system, which searches for time coincidences of the events and defines the telescope system trigger. The centralized trigger system sends the information of the coincidence time back to the cameras. The cameras then select the events that fulfil the global trigger condition and should be recorded and transmitted for further stereoscopic processing. This centralized scheme is software-based, but still makes uses of the properties of the individual camera trigger system in an optimal way. 
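The software-based centralized scheme just described boils down to a timestamp-coincidence search. The sketch below illustrates the logic with a hypothetical interface; it assumes that the per-telescope trigger timestamps have already been corrected for the expected Cherenkov-front arrival delays, as described above.

def array_trigger(trigger_times_ns, coincidence_window_ns=50.0, min_telescopes=2):
    """Find groups of camera triggers from different telescopes that are
    compatible in time.

    trigger_times_ns : dict mapping telescope id -> sorted list of camera-trigger
                       timestamps (ns), already corrected for the expected
                       light-arrival delay of each telescope
    Returns a list of (mean_time, [telescope ids]) stereo events.
    """
    # Flatten to (time, telescope) pairs and sort in time.
    tagged = sorted((t, tel) for tel, times in trigger_times_ns.items() for t in times)
    events, i = [], 0
    while i < len(tagged):
        t0 = tagged[i][0]
        j = i
        while j < len(tagged) and tagged[j][0] - t0 <= coincidence_window_ns:
            j += 1
        tels = {tel for _, tel in tagged[i:j]}
        if len(tels) >= min_telescopes:
            times = [t for t, _ in tagged[i:j]]
            events.append((sum(times) / len(times), sorted(tels)))
        i = j
    return events

# Example: telescopes 1 and 2 see the same shower; telescope 3 has an accidental.
print(array_trigger({1: [1000.0, 5000.0], 2: [1020.0], 3: [3000.0]}))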
§.§ DAQ electronics
As already mentioned, shower images have a pulse width of a few ns, on top of an NSB background with typical rates from tens to hundreds of MHz per pixel, depending on mirror size, pixel size and photodetection efficiency. Recording the shower-induced signals efficiently therefore requires high bandwidth and short integration times. In addition, the dynamic range and electronics noise should be such that single-phe signals can be resolved, or at least calibrated by indirect means such as the F-factor method for PMTs, and that signals from one phe up to at least a few thousand phe are recorded without truncation. Given the latency of the trigger signal, the electronics must delay or buffer the signals until the decision to store them arrives, which can take up to about 10 μs if the trigger signals of several telescopes are combined.
Currently available signal recording and processing technologies allow recording a range of signal parameters, from the integrated charge to the full pulse shape, over a fixed time window. The latter option is best suited to low energies (below a few TeV), which require a more sophisticated background reduction, whereas at higher energies the integrated charge is usually enough, complemented with a few other parameters such as the arrival time and the time width of the signal. Two techniques for signal recording and processing are in use nowadays. The first is based on Flash Analogue-to-Digital Converters (FADCs), while the second uses analogue sampling memories:
* FADC technology: these digitise the photosensor signal at sampling rates between 100 Megasamples/s and a few Gigasamples/s. The digitised stream can subsequently be stored digitally for further processing, which allows for longer trigger latencies. Moreover, this technology allows the implementation of fully digital trigger systems that exploit the recorded image in real time. The main disadvantages of FADCs with respect to analogue sampling memories are their cost and power consumption, which make their integration in cameras difficult, although recent developments in low-power, low-cost FADCs with sampling speeds of up to 300 MS/s make them competitive for some IACT cameras.
* Analogue sampling memory technology: these consist of banks of switched capacitors which are used in turn to record the signal shape. The maximum recording depth is given by the number of capacitors and the sampling time. ASIC implementations allocate enough capacitors to cope with a few microseconds of trigger latency at sampling speeds of 1 GS/s or faster for several channels simultaneously, making them very competitive in terms of cost and power scalability, and allowing a full implementation inside the camera.
Once the event signals have been sampled and digitised, they can either be processed in FPGAs to perform a first reduction of the information, storing only pixel charges and pulse time widths, or be stored in full for offline processing. In either case, the transmission system and the consumer electronics and software must be able to deal with trigger rates of up to 20 kHz per camera, in which the signals of a few thousand pixels, and a few tens of time samples if no reduction is performed, have to be buffered and eventually transmitted for archival.
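To see why an early reduction of the pixel information is attractive, one can work out the raw throughput implied by these figures. The numbers below are illustrative assumptions consistent with the text (a few thousand pixels, a few tens of samples, up to 20 kHz trigger rate).

# Raw data throughput for one camera, using illustrative numbers.
trigger_rate_hz  = 20e3   # up to ~20 kHz per camera (text value)
n_pixels         = 2000   # "a few thousand pixels" (assumed)
n_samples        = 50     # "a few tens of time samples" (assumed)
bytes_per_sample = 2      # e.g. 12-16 bit ADC words (assumed)

bytes_per_event = n_pixels * n_samples * bytes_per_sample
throughput_gbps = trigger_rate_hz * bytes_per_event / 1e9

print(f"{bytes_per_event/1e3:.0f} kB/event -> {throughput_gbps:.1f} GB/s per camera "
      "before any reduction")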
In practice this buffering and transmission can be implemented with local digital buffers in the cameras and commercial hardware transporting the data from the camera to high-end computers, which assemble the events in real time and store them on disk.
§ ANALYSIS TECHNIQUES
The analysis of gamma rays detected with IACTs has to cope with different sources of background at different stages. At the earliest stage, when the signal collected in individual pixels is integrated to obtain the number of phe, the integration has to be performed so as to reduce the contribution of electronic noise and of the fluctuations of the ambient random photon field, which is mainly due to the NSB and diffuse moonlight. At later stages, the recorded shower images have to be treated to further minimize the contribution of the fluctuations of that random photon field. The techniques used in these two first stages are called signal extraction and image cleaning, respectively. Finally, at the last stage of the analysis, it is necessary to reject the recorded images that are not initiated by gamma rays from the object of interest. The technique to be used depends on the physical origin of the image, which is mostly showers initiated by hadrons and by gammas from the source, but can also include Cherenkov rings due to atmospheric muons, diffuse gammas or electrons. Let us discuss the rejection techniques and the main properties of the backgrounds.
§.§ Signal extraction
The NSB light level near the zenith at typical IACT sites is about 2.4·10^3 photons sr^-1 ns^-1 m^-2 for wavelengths between 300 and 650 nm <cit.>. For typical IACTs this results in a time-averaged detection level of about 0.1 phe ns^-1 per pixel, which can increase up to 10-fold in the presence of diffuse moonlight. Given the usual number of pixels, of order a few thousand, the total background rate is a few hundred phe per nanosecond. This is of the same order of magnitude as the signal of a 0.1 TeV gamma ray, which yields a few hundred phe spread over a few nanoseconds and scattered over about 10 pixels<cit.>[For pixels of a typical size of 0.1^∘ and a 10-m type IACT].
The goal of the signal extraction algorithm is to identify the time of arrival and the number of phe in each pixel within a recorded window that lasts a few tens of nanoseconds. It profits from the clustering in time of the shower phe reaching a given pixel. To this end it has to identify the most probable location in time of the signal in each pixel and integrate it in a time window large enough to capture it, but small enough to minimise the amount of noise collected. This width is related to the bandwidth of the system, which usually dominates the time width of the recorded single-phe signal.
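As a concrete, deliberately simplified illustration of this step, the sketch below implements a fixed-width sliding-window extractor for a single pixel trace. The pedestal handling and the conversion factor from ADC counts to photoelectrons are assumptions for illustration; production pipelines use the more refined estimators discussed next.

import numpy as np

def sliding_window_extract(waveform, window=4, pedestal=0.0, adc_to_phe=0.1):
    """Return (charge_phe, peak_sample) for one pixel trace.

    waveform   : 1D array of ADC samples for the recorded time window
    window     : integration width in samples (should match the single-phe
                 pulse width set by the electronics bandwidth)
    pedestal   : baseline per sample, measured from interleaved pedestal events
    adc_to_phe : conversion from integrated ADC counts to photoelectrons,
                 obtained from the laser calibration pulses
    """
    trace = np.asarray(waveform, dtype=float) - pedestal
    # Charge in every window of `window` consecutive samples.
    sums = np.convolve(trace, np.ones(window), mode="valid")
    best = int(np.argmax(sums))          # window position maximizing the charge
    return sums[best] * adc_to_phe, best

# Example: a 3-sample Cherenkov pulse on top of pedestal noise
rng = np.random.default_rng(0)
trace = rng.normal(10.0, 1.0, 30)        # pedestal ~10 ADC counts per sample
trace[12:15] += [20.0, 60.0, 25.0]       # injected pulse
print(sliding_window_extract(trace, window=4, pedestal=10.0))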
Most signal extraction algorithms rely on minimizing the following goodness of fit, Gof, between the recorded signal of a pixel in a given time window and an expected response function:
Gof = ∑_i∈ n (S_i - N R_i,T)^2 = ∑_i S_i^2 + N^2 ∑_i R_i,T^2 - 2 N ∑_i S_i R_i,T
where the sum runs over the n time samples, S_i is the signal in sample i, R_i,T is a normalized response function centered at time sample T and evaluated at sample i, and N is a normalization factor. The right-hand side separates Gof into terms that do not depend on T and the term containing the correlation between R and S, which is the relevant one for the discussion of the signal extraction. The minimization of Gof is usually performed with respect to N and T, which gives a time and an amplitude for the signal. Depending on the response function R and on how the minimization is performed, the usual strategies are:
* Fixed window: T is kept fixed and R is constant within a window around this sample and zero outside it, i.e. a square function of a given width. N is then proportional to the average of the signal within this window.
* Sliding window: R has the same shape as for the fixed window, but Gof is minimized with respect to T. In this case T is the one that maximizes the correlation between the signal and the square function R, and N is proportional to the average of the signal around this T.
* Digital filter<cit.>: R_i,T has the expected shape of a single phe arriving at T. The minimization gives the value of T that maximizes the correlation between the signal and R, and N is proportional to the weighted average of the signal around sample T with the chosen response.
The main drawback of these methods lies in the assumed shape of the response function in eq. <ref>. Selecting the right one is key to reducing the influence of the electronic noise on the extracted signal. Moreover, independently of the chosen response function, variations of the shape of the true signal result in undesirable fluctuations of both the reconstructed time T and the extracted normalization N. These are not the only possible strategies: there are algorithms in the literature that aim to improve the background rejection and to reduce the systematic error on the extracted signal, for instance using smooth basis functions <cit.> to describe R, using the information of neighbouring pixels to predict the expected arrival times of the Cherenkov photons, or employing state-of-the-art techniques such as deep-learning methods <cit.>. Nevertheless, except for methods based on whole-image processing, such as some deep-learning ones, the impact of these improvements is overshadowed by the performance of the image cleaning procedure, which is the next step in the background rejection. The output of the signal extraction, i.e. a list of arrival times and total signals for each pixel, is used as input for that next step: the image cleaning.
§.§ Image cleaning
The task of an image cleaning procedure is to identify as many pixels as possible which are dominated by the signal from the shower, rejecting the pixels dominated by noise. To this end, the method exploits the correlation of the signal due to a single shower between neighboring pixels, which is not present for the background noise. To understand how these algorithm works, it is useful to introduce a simple model that describes the mean instantaneous signal at time t of a pixel with coordinates x,y with respect to the maximum of a recorded electromagnetic shower. This simple model is based in the Hillas parameterisation<cit.> of shower images, approximating the time dependence of the arrival time of photons to the telescope by a linear function along the axis of the shower images. With this we have that the recorded signal S(x,y,t) is S(x,y,t)= ( b + A e^-1/2( x^2/w^2 + y^2/l^2)× e^-1/2(y-vt)^2/σ_t^2) Δ where the shower axis moves along the Y axis, the shower image maximum amplitude is A, w is its width and l its length, σ_t is the typical time length of a single phe pulse, v gives the shower image development speed, b gives the background signal per unit time, and Δ is the sampling time width. The values of A, w, l and v depend on the shower characteristics (like gamma or hadron energy, impact point of the shower axis on the ground, ...) as well as the optics of the telescope, camera pixel size and detection efficiency. On the other hand Δ and σ_t are mostly due to instrumental characteristics like the sampling speed and the full chain electronics bandwidth. The signal extraction algorithm previously described extracts the time for each pixel in which eq. <ref> is maximized as a function of t, which call S_extracted(x,y). For pixels that record a large number of phe due to the shower compared with the NSB contribution, the resulting amplitude of the signal extraction will be S_extracted^shower(x,y)=Ae^-1/2( x^2/w^2 + y^2/l^2)Δ , and the time will be given by y/v. On the other hand, for pixels that only contain background phe, the average number of recorded phe would be b×Δ× N, where N is the total number of time samples per pixel used as input for the signal extraction algorithms. These photons will be uniformly distributed among all the N samples, and if b×Δ× N<Δ/2σ_tN, they will likely not overlap. Under these usually realistic conditions, the extraction algorithm will result in the maximum amplitude for a single phe S_NSB, and the time will be random among all samples. A first approach to reject pixels dominated by noise is to select only those such that extracted signal is larger than S_NSB+F, where F is a safety factor to account for uncertainties in the response, electronic noise and statistical fluctuations. However this is not fully satisfactory because requires to adjust the rejection level without any a priori knowledge of the shower image parameters. Since the required rejection efficiency will depend on the number of pixels, this results in either keeping NSB-dominated background as part of the image, or rejecting pixels of the shower image. Both possibilities result in introducing a systematic error for showers produced by low energy gammas or by high energy gammas far away from the telescope, and in the rejection of full images in case of showers with a small, but significant, number of photon-electrons detected. consider that the number NSB photons in a typical time window of few tens of nanoseconds containing the shower will be small. 
If this time window is Δ T, the worst case for the NSB is all photons arriving in a single time sample, which results in a maximum average number of phe of b×Δ T. Therefore, it is possible to reject pixels dominated by the NSB by selecting only those with S_extracted > b×Δ T + C √(b×Δ T), where C sets the rejection level.
An improved approach follows from the observation that pixels dominated by the shower cluster around the image maximum, so that, given a pixel which can safely be considered not due to the background, its next neighbours are likely due to shower photons too. This idea is implemented by using two background rejection levels instead of one. The first one is tight enough to select a list of pixels which are due to shower photons, typically requiring a signal larger than the one expected for 10 phe. Other pixels are then added recursively to this list if they are neighbours of pixels already contained in the list and their signals are above a second, looser rejection level, usually close to 5 phe. To improve the performance of the selection, as well as its robustness against background signals that do not fulfil the assumptions above, additional constraints are imposed when adding pixels to the list: a) pixels above the first level are only added if they have at least one next-neighbour pixel also above this level (these are called core pixels); b) pixels which are above the second but not the first level are only added if at least one of their next neighbours is above the first level (these are called boundary pixels). A further improvement consists in taking into account the extracted time difference between pixels<cit.>, or in performing a global fit in time and space to a model similar to that of eq. <ref> <cit.>, thus increasing the signal-to-noise ratio of the reconstructed shower image. With sufficiently sophisticated models, this latter approach allows the image cleaning to be relaxed or even eliminated completely.
§.§ Gamma-Hadron separation
The above-mentioned techniques deal mainly with the NSB. However, once the images have been cleaned, the main background consists of shower images produced by the interaction of non-gamma primaries with the atmosphere. These are dominated by protons and He nuclei, which constitute more than 95% of the primaries. The traditional rejection method for this background is based on the aforementioned Hillas parameterisation <cit.> and on the exploitation of the stereoscopic observation of the showers. It consists in fitting the distribution of photons in the cleaned images of the same shower observed by all Cherenkov telescopes to bivariate Gaussian distributions (one per image). The results of this parameterisation are the total amount of light contained in the image, the position of the centre of gravity of the image in the sky, the projection of the axis of the shower in the sky, and the lengths of the major and minor axes of the image, shown in <ref>. Since the image of an electromagnetic shower is well described by a single compact distribution, except for very low energy events, a first level of rejection is traditionally obtained by discarding events in which at least one telescope recorded an image containing more than one separated cluster.
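Since the Hillas parameterisation is a second-moment analysis, the bivariate-Gaussian fit reduces in practice to computing charge-weighted moments of the cleaned image. A minimal sketch follows (camera coordinates in degrees, charges in phe); it ignores corrections applied in real pipelines, for example for images truncated at the camera edge.

import numpy as np

def hillas_parameters(x_deg, y_deg, charge_phe):
    """Second-moment (Hillas) parameterisation of a cleaned image.

    x_deg, y_deg : camera coordinates of the surviving pixels
    charge_phe   : extracted charge of each pixel
    Returns size, centre of gravity, length, width and orientation (psi).
    """
    q = np.asarray(charge_phe, dtype=float)
    x, y = np.asarray(x_deg, float), np.asarray(y_deg, float)
    size = q.sum()
    xm, ym = np.average(x, weights=q), np.average(y, weights=q)
    # Charge-weighted covariance matrix of the pixel positions.
    cov = np.cov(np.vstack([x, y]), aweights=q, bias=True)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    width, length = np.sqrt(np.maximum(eigvals, 0.0))
    psi = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])    # direction of the major axis
    return {"size": size, "cog": (xm, ym),
            "length": length, "width": width, "psi": psi}

# Example: an elongated image roughly along the x axis
x = np.array([-0.2, -0.1, 0.0, 0.1, 0.2, 0.0])
y = np.array([ 0.0, 0.02, 0.0, -0.02, 0.0, 0.05])
q = np.array([ 5.0, 20.0, 50.0, 20.0, 5.0, 10.0])
print(hillas_parameters(x, y, q))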
On top of that, given that the shower axis follows the original direction of the gamma, a second level is achieved by rejecting events in which the projected shower axes obtained from the parameterisation of all the recorded images of the same shower do not cross, within the expected resolution obtained from Monte Carlo simulations, at the same point in the sky. A final rejection level is the rejection of events not initiated by electromagnetic showers. Traditional analyses exploit the Width parameter, which is directly linked to the Molière radius in the upper atmosphere and, contrary to other Hillas parameters, has a weak dependence on the direction or energy of the primary gamma. A discriminating variable can be built by scaling the Width obtained from each telescope with the expected value obtained from Monte Carlo simulations, given as a function of the amount of light in the cleaned image of each telescope and of the distance of the telescope to the extrapolated impact point of the shower direction on the ground (the so-called impact parameter), and then combining the scaled values: W_scaled = 1/√(N_telescope) ∑_i∈ images [Width_i - <Width_i>(Size,D)] / σ_i(Size,D) where Size is the measured amount of light and D is the impact parameter, <Width_i>(Size,D) is the expected Width for telescope i, and σ_i(Size,D) is the expected standard deviation of the Width. Figure <ref> shows the distribution of W_scaled for gammas of 1 TeV and for the proton background, for observations with 4 telescopes, showing that a simple cut can reject most of the proton-initiated showers. More sophisticated methods make use of all Hillas parameters simultaneously using multivariate classifiers like Random Forests or Boosted Decision Trees <cit.>. There are also extensions of the Hillas parameterisation, for example the use of templates in the so-called Model Analysis <cit.>, the fitting of all the images to a 3D model of the shower <cit.> or, lately, the use of deep neural network models <cit.>. These provide an improved background rejection efficiency over the traditional one, at the cost of a larger complexity. They are especially useful to enlarge the lower end of the energy range covered by the telescope: close to the threshold of the telescope trigger system the proton background is usually larger, because the energy distribution of the hadronic background is typically softer than that of the gamma-ray source, and the recorded shower images have larger statistical fluctuations, which makes the classification based on Hillas parameters less performant. §.§ Determination of gamma-ray energy and incident direction The final stage of the analysis of a sample of shower images taken with IACTs, once backgrounds have been rejected, is the determination of the primary gamma-ray direction and energy on an image by image basis. The accurate determination of the direction is possible thanks to the stereoscopic observation of the shower, as already sketched in Fig. <ref>.
Since the development of the electromagnetic shower is symmetric around the incident direction of the primary gamma ray, the axis of symmetry of each registered image represents this direction projected into the FOV as observed from the position of each telescope. Actually, this projection, together with the position of the center of the telescope mirror, defines a plane in real space. If the telescopes are pointing to the same position in the sky, then the crossing point of these projections is, within the statistical and systematic errors of the determination of the image axes, the primary gamma-ray incident direction, because it is the only direction common to all image axes. Additional information can be extracted if the description based on planes in real space is used, since the intersection of these planes provides not only the direction, but also the impact point of the extrapolation of the trajectory on the ground. It is obvious that the accuracy of this determination depends on the number of collected photons, which correlates with the energy of the primary gamma ray, and on the number of telescopes that register the same shower. The resolution reached for a single gamma ray with the current generation of IACTs is in the range 0.1-0.05^∘ for primary energies above a few hundred GeV. Figures <ref> and <ref> are typical results obtained with current-generation IACTs. The first one displays a distribution of reconstructed directions of γ-rays around a source, and the latter the distribution of the squared angular distances (the so-called θ^2) between these reconstructed directions and the known or inferred position of the source. A cut of θ^2≲0.02 is typically applied to define the signal region and estimate the significance of the source detection; since an irreducible background remains after all cuts, this is done on a statistical basis by comparing the number of events in this signal (ON) region with the one measured in background-control (OFF) regions. The reconstruction of the primary gamma-ray energy relies on two facts already described: i) the number of electrons and positrons of the shower is approximately proportional to the primary energy; ii) the Cherenkov photons reaching the ground are nearly uniformly distributed within a radius of 130 m of the impact point on the ground. Based on these, the energy reconstruction relies on building a parameterised estimate of the primary energy given the impact parameter and the registered number of photons of each image after cleaning, which are combined accounting for the statistical fluctuations. The parameterisation is built by means of complex Monte Carlo simulations of the shower development, based on models of the atmosphere and of the electromagnetic interactions (see <cit.> for example), and of the telescope response, using ray-tracing codes and detailed descriptions of the photosensors and DAQ. As usual, there are more sophisticated methodologies which improve the accuracy and precision of the reconstruction of the primary properties at the cost of complexity and computational resources <cit.>. §.§ Typical performance and scientific plots Key parameters characterizing the performance of IACTs are flux sensitivity, angular resolution, and energy resolution. Whilst angular resolution refers to the accuracy of measuring the incoming direction of the gamma ray, energy resolution refers to the accuracy of measuring its energy. Flux sensitivity is the minimum flux of a gamma-ray source that can be detected beyond a certain statistical significance, usually five sigma, in a given amount of time.
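To make the θ^2 selection described above concrete, the following sketch computes the squared angular distance of each reconstructed direction to an assumed source position and counts the events falling inside the signal region; the event arrays, the source coordinates and the cut value are placeholders for illustration.

import numpy as np

def theta2_deg2(ra_reco, dec_reco, ra_src, dec_src):
    """Squared angular distance (in deg^2) between reconstructed event
    directions and an assumed source position; all inputs in degrees."""
    ra, dec = np.radians(ra_reco), np.radians(dec_reco)
    ra0, dec0 = np.radians(ra_src), np.radians(dec_src)
    # exact great-circle separation between each event and the source
    cos_sep = (np.sin(dec) * np.sin(dec0)
               + np.cos(dec) * np.cos(dec0) * np.cos(ra - ra0))
    sep = np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))
    return sep ** 2

# Events inside the signal (ON) region, using an illustrative cut value:
# t2 = theta2_deg2(ra_events, dec_events, ra_source, dec_source)
# n_on = np.count_nonzero(t2 < 0.02)

The same computation applied to background-control (OFF) regions provides the counts needed to estimate the significance of the excess.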
Performance figures are calculated by means of Monte Carlo simulations tailored to the observed properties of the detectors, resulting in typical angular resolutions in the range of 0.15 to 0.05 degrees and typical energy resolutions in the range of 10-15%. This dependency on the energy of the gamma rays being observed is evident in figures <ref> and <ref> for the sensitivity and angular resolution of some representative instruments. Regarding typical results for IACTs, the reader can refer to <cit.> for a comprehensive overview of high-level analysis techniques. Figures <ref>, <ref> and <ref> provide examples of what can be obtained through the analysis of a gamma-ray source with IACTs: they show respectively 1) the distribution of reconstructed directions of gammas in a given energy range; 2) the reconstructed energy spectrum for a bright source; and 3) the light curve, that is, the reconstructed flux as a function of time for a given energy range, for an exceptionally variable gamma-ray source. § CURRENT TELESCOPES AND FUTURE EVOLUTION OF THE TECHNIQUE Second generation IACT arrays such as H.E.S.S. in Namibia, MAGIC in Spain and VERITAS in USA have brought the technique to maturity and are still operational. The reader is referred to <cit.> for the status and a review of the roughly last 20 years of scientific results of these three instruments. The community behind these three IACT arrays have come together to design and build a full-sky open observatory called the Cherenkov Telescope Array Observatory (CTAO, <cit.>). CTAO will consist of two arrays of IACTs in the northern and southern hemispheres. CTAO-North will be located at the Roque de los Muchachos observatory (La Palma, Spain) and CTAO-South at Cerro Paranal (Chile). The CTAO IACTs will have mirrors of three different sizes optimised for overlapping energy ranges: the Large-Sized Telescopes (LST) will be equipped with the largest mirrors (23 m diameter) and target the lowest energies down to an energy of ∼20 GeV, the Medium-Sized Telescopes (MST) will be equipped with 12 m diameter mirrors and cover the range from roughly 100 GeV to a few TeV, and Small-Sized Telescopes (SSTs) with ∼4 m diameter mirrors will be sensitive to the highest energies up to hundreds of TeV. CTAO is designed to operate for 30 years. Over this long period of time most probably the mechanics and optics of the CTAO telescopes will remain unchanged but the cameras will be upgraded with higher efficiency photodetectors, and faster trigger and readout electronics. Data analysis methods are expected to improve, especially with the application of new machine learning techniques, which may still be limited by computing power. With an increase in photodetection efficiency the IACT may expand to even lower energies, probably even below 10 GeV. Extracting physical information may prove challenging, given the fact that γ-ray showers develops farther and farther away from the telescope with decreasing energy and their images get correspondingly smaller and smaller. A low zenith 300 GeV γ-ray shower reaches its maximum at 8 km altitude above sea level while a 3 GeV shower does so at 14 km altitude. For higher zenith angles the distance to the shower maximum is even larger. Both camera pixelization and optical PSF will need to improve in order to resolve the relevant features of the shower image. The IACT technique may also evolve through innovative optics to increase the FOV, reduce the threshold or improve the optical quality <cit.>. 
See <cit.> for a more extensive review of future initiatives in ground-based gamma-ray astronomical detectors, including large FOV shower particle detector arrays. 99. handbook_history Chapter III ”The development of ground-based Gamma-ray astronomy: a historical overview of the pioneering experiments”, section 6, Handbook of X-ray and Gamma-ray astrophysics, Springer Verlag. Whipple Weekes, C. T. et al. (Whipple collaboration), Observation of TeV gamma rays from the Crab Nebula using the Atmospheric Cerenkov Imaging technique, The Astrophysical Journal, 342 (1989) 379-395 CAT Barrau, A. et al., The CAT Imaging Telescope for Very-High-Energy Gamma-Ray Astronomy, NIM A 416 (1998) 278-292 HEGRA Daum, A. er al. (The HEGRA Collaboration),First results on the performance of the HEGRA IACT array, Astroparticle Physics 8 (1997) 1-11 handbook_space Section 5 ”Space-based Gamma-Ray Observatories”, Handbook of X-ray and Gamma-ray astrophysics, Springer Verlag. handbook_science Sections 7-15 of Handbook of X-ray and Gamma-ray astrophysics, Springer Verlag. Heitler Heitler, W., Quantum theory of radiation, 1954. Cherenkov light Cherenkov, P. A., Visible emission of clean liquids by action of γ radiation. Doklady Akademii Nauk SSSR 2 (1934) 451 Preuss et al. Preuβ S., Hermann G, Hofmann W., Kohnle A., Study of the photon flux from the night sky at La Palma and Namibia, in the wavelength region relevant for imaging atmospheric Cherenkov telescopes, NIM A 481 (2002) 229-240 digital filter Albert, J. et al (MAGIC collaboration) FADC signal reconstruction for the MAGIC telescope, NIM A 594 (2008) 407–419 Hillas Hillas, A., Cherenkov light images of EAS produced by primary gamma, Proc. 19nd ICRC (La Jolla), Vol 3, 445 (1985) image cleaning Aliu, E. et al (MAGIC collaboration), Improving the performance of the single-dish Cherenkov telescope MAGIC through the use of signal timing, Astropart. Phys. 30 (2009) 293 likelihood approach Emery, G. et al, Reconstruction of extensive air shower images of the first Large Size Telescope prototype of CTA using a novel likelihood technique, PoS(ICRC2021)716 Davies-Cotton Davies, J.M. & Cotton, E.S., Design of the quartermaster solar furnace, J. Solar Energy Sci. and Eng. 1 (1957) 16 Schliesser-Mirzoyan Schliesser A. & Mirzoyan R., Wide-field prime-focus imaging atmospheric Cherenkov telescopes: A systematic study, Astrop. Phys. 24 (2005) 382 Couder Couder A., Sur un type nouveau de télescope photographique, Compt. Rend. Acad. Sci., Paris 45 (1926) 1276 corsika Heck, D. et al., CORSIKA: A Monte Carlo code to simulate extensive air showers, Tech. Rep. FZKA-6019, Forschungszentrum Karlsruhe, (1998) random forest Albert, J. et al (MAGIC collaboration), Implementation of the Random Forest method for the Imaging Atmospheric Cherenkov Telescope MAGIC, NIM A 588 (2008) 424-432 model analysis de Naurois, M. et al, Application of an Analysis Method Based on a Semi-Analytical Shower Model to the First H.E.S.S. Telescope, Proc. 28nd ICRC, 2907 (2003) 3d reconstruction Lemoine-Goumard, M. et al, 3D-reconstruction of gamma-ray showers with a stereoscopic system, Proc. of toward Network of Atmospheric Cherenkov Detectors VII (2005) 173-182 dl Nieto, D. et al., Exploring deep learning as an event classification method for the Cherenkov Telescope Array, Prov. of 35th ICRC, (2017) handbook_analysis Section 16-19 of Handbook of X-ray and Gamma-ray astrophysics, Springer Verlag. 
handbook_current_IACTs Chapters VIII to X of section 6 of Handbook of X-ray and Gamma-ray astrophysics, Springer Verlag. handbook_CTA Chapters XI of section 6 of Handbook of X-ray and Gamma-ray astrophysics, Springer Verlag. cherenkov_schmidt Mirzoyan, R. & Andersen, M.I., A 15 deg wide field of view imaging air Cherenkov telescope, Astrop. Phys. 31 (2009) 1–5. machete Cortina, J., Moralejo A., Lopez-Coto, R, MACHETE: A transit imaging atmospheric Cherenkov telescope to survey half of the very high energy gamma-ray sky, Astrop. Phys. 72 (2016) 46-54. plenoscope Mueller, S. A., Cherenkov-Plenoscope, 2019, arXiv:1904.13368. handbook_future Chapter XII of section 6 of Handbook of X-ray and Gamma-ray astrophysics, Springer Verlag. MAGIC sensitivity MAGIC coll., The major upgrade of the MAGIC telescopes, Part II: A performance study using observations of the Crab Nebula, Astropart. Phys. 72 (2016) 76 VERITAS sensitivity VERITAS: public specifications webpage <https://veritas.sao.arizona.edu/about-veritas/veritas-specifications> HESS sensitivity Preliminary sensitivity curves for H.E.S.S.-I (stereo reconstruction), based on/adapted from Holler et. al 2015 (Proceedings of the 34th ICRC) MAGIC Crab MAGIC coll., Measurement of the Crab Nebula spectrum over three decades in energy with the MAGIC telescopes, Journal of High energy Astrophysics 5-6 (2015) 30-38
http://arxiv.org/abs/2306.04353v1
20230607113546
Reversible Numeric Composite Key (RNCK)
[ "Nicola Asuni" ]
cs.DB
[ "cs.DB", "E.2; H.3.1" ]
In database design, Composite Keys are used to uniquely identify records and prevent data duplication. However, they require more memory and storage space than single keys, and can make queries more CPU-intensive. Surrogate Keys are an alternative that can overcome some of these limitations, but they can also introduce new disadvantages. To address these challenges, a new type of key called a Reversible Numeric Composite Key (RNCK) has been developed. RNCK is a single number that encodes multiple data attributes, and can be decoded back to the original values. This makes it possible to achieve the benefits of both Composite Keys and Surrogate Keys, while overcoming some of their limitations. RNCK has been shown to improve query performance and reduce memory and storage requirements. It can be used in relational databases, large static datasets, and key-value caching systems. RNCK has been successfully used in production systems for several years. § INTRODUCTION In data modeling, or database design, a Composite Key is a unique identifier made up of two or more attributes (database table columns). For example, a book record might be identified by its ISBN code, title, and author. These attributes cannot be used individually to identify a book. The ISBN code alone is not a unique identifier, because there could be two or more books with the same code. However, no two books will have the same combination of these attributes. Composite Keys <cit.> generally help maintain data integrity and prevent data duplication. Compared to Surrogate Keys <cit.>, which are artificial keys that are not based on real-world data, Composite Keys are easy to implement because they can be created using existing attributes that are often Natural Keys <cit.>. However, when a Composite Key is referenced in multiple data sets (database tables) as a Foreign Key <cit.>, it uses more memory or storage space, as multiple attributes (columns) are required instead of just possibly one. This leads to a more complex schema. Queries become CPU-intensive, as every search and join requires comparing multiple attributes instead of just one in the case of a single key. Surrogate Keys can overcome some of the Composite Key limitations at the expense of introducing other disadvantages. Surrogate Keys do not provide any information about the represented data, making it difficult to understand and interpret the data.
They must be generated and maintained separately from the data they represent, in what can be an error-prone process. They may require more storage space for the creation of additional natural attribute indexes in order to avoid duplications and full table scans when fulfilling likely queries. The performance of queries, including memory consumption, is also affected by the type of each attribute. Comparing integer numerical types is a very well-optimized operation in current computer architectures. Most current CPUs can perform multiple uint64 comparisons at the same time using SIMD (Single Instruction, Multiple Data) instructions. Generally, a x86 CPU with 16 computing cores can perform a theoretical maximum of 32 uint64 comparisons per clock cycle. With a 5.0 GHz CPU, this is a theoretical maximum of 160 billion comparisons per second. The actual number of uint64 comparisons that can be performed per second will depend on other factors, including the specific application and the number of other tasks that the CPU is running. In contrast with the simplicity and high-performance of numerical comparisons, string comparisons are a slow operation, especially for large strings. This is because strings are typically stored as arrays of characters and each character must be compared individually. This is assuming the best scenario where each string has already been normalized to a common canonical form. This is not necessarily common, more complex and expensive comparisons can be required. Alternatively, string comparisons can be performed more efficiently by using a hash table, but this can introduce some disadvantages like space complexity, collisions, load factor, and hash function performance <cit.>. To combine the advantages of Composite Keys (CKs) and Surrogate Keys (SKs) while overcoming some of their limitations, a Reversible Numeric Composite Key (RNCK) is presented here. RNCK can be used only in certain cases, when the total number and maximum size of CK attributes is relatively small. RNCK encodes one or more attributes into a number, such that it is possible to directly and efficiently decode the original attributes while preserving some attribute sorting and searching properties. In general, compared to CKs and SKs, the use of RNCK allows to increase query performance while reducing memory and storage requirements. In addition to relational databases, RNCK can be a very effective index in large static datasets and key-value caching systems. § DEFINITION A Reversible Numeric Composite Key (RNCK) is a single number that uniquely represents a Composite Key (CK) or a single non-numeric Natural Key. The RNCK number is generated from one or more data attributes using an encoding function. The code that is generated can be directly reversed to the original attributes using a decoding function. The encoding and decoding functions are bijective. The RNCK format is designed to make encoding and decoding operations fast and inexpensive. § NORMALIZATION Before encoding, a normalization step may be required to ensure a consistent and unambiguous representation of the CK attributes. For example, Unicode strings should be normalized to a canonical form. Small in-memory lookup tables can be used by the encoding and decoding functions to efficiently enumerate limited attribute sets. § FORMAT In performance-focused applications, RNCK is typically a 64-bit unsigned integer (uint64), as this is natively supported by most current hardware platforms. 
Other data types compatible with binary operations can also be used, such as uint32 or uint128, if they are available. A RNCK follows a binary pattern, where distinct sections of the binary number (groups of bits) represent different attributes or combinations of them. The binary sections are organized from the Most Significant Bit (MSB) to the Least Significant Bit (LSB) in the order of the sorting priority of each attribute. This allows sorting by RNCK to be equivalent to sorting by the attributes in order. The number of bits required for each section is calculated from the maximum number of distinct possible values of the corresponding attribute. For example, for an attribute with a maximum of 100 distinct values (including the null value), a binary section of at least ⌈ log_2(100) ⌉=7 bit is required. This is because 2^7=128, which is greater than or equal to the maximum number of possible values (100). § APPLICABILITY AND LIMITATIONS RNCK can only be used instead of CK in certain cases, such as when the total number of attributes and the maximum size of each attribute is relatively small. The use of RNCK is only possible if the underlying data type is large enough to contain the encoding of all CK attributes. Each binary section must be large enough to store all possible values of the corresponding attribute. In some cases, additional flag bits may be required to indicate special cases. § PARTIALLY REVERSIBLE ENCODING To overcome the RNCK capacity limitation, it is sometimes possible to adopt multiple encoding schemas that are indicated by bit flags. For example, in VariantKey <cit.>, some input variants may exceed the REF+ALT binary section capacity. This is true for only about 0.4% of the records in the reference dataset. In these rare cases, the least significant bit (LSB) is set to 1 and the remaining 30 bits are filled with a hash value that is used as a key for a relatively small lookup table. This alternate encoding is a good compromise because it is rarely used and still preserves some of the RNCK properties, such as the ability to sort and search the variants by chromosome (first binary section) and position (second binary section). § PROPERTIES * Each RNCK code is unique for a given set of CK attributes. * RNCK can be quickly encoded and decoded. * Comparing two CK values by RNCK only requires comparing two numbers, which is a very well-optimized operation in current computer architectures. * A RNCK can be represented as a fixed-length hexadecimal string. This is useful for compatibility with text-based data representation formats or as an interchange format. * Sorting the fixed-length hexadecimal representation of RNCK in alphabetical order is equivalent to sorting the RNCK numerically. * Sorting by RNCK is equivalent to sorting by the CK attributes in enumeration order. * RNCK can be used to replace CK and SK in a database to simplify common searching, merging, and filtering operations. * All types of database joins between two datasets (inner, left, right, and full) can be easily performed using RNCK as a single index. * RNCK can reduce data storage, memory usage, and improve performance. * RNCK can be used with existing key-value systems, including in-memory caches, where the key is RNCK. * RNCK can be used with columnar data formats (e.g. Apache Arrow <cit.>, Apache Parquet <cit.>) to perform really fast binary searches. * In some cases, RNCK can be used to speed up CK searches in a given range. See the VariantKey Overlapping Regions <cit.> for more information. 
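As an illustration of the binary layout described in this section, the following sketch packs a set of already-normalized integer attributes into a uint64-style key and decodes it back. The section names and bit widths are hypothetical and not taken from any specific RNCK schema; the 7-bit section simply echoes the ⌈log_2(100)⌉ example given above, and Python's arbitrary-precision integers stand in for the uint64 type.

def make_rnck_codec(sections):
    """Build encode/decode functions for a RNCK layout.

    sections : list of (name, n_bits) tuples ordered from the most
               significant section (highest sort priority) to the least
               significant one. Attribute values are assumed to be
               already normalized to integers in [0, 2**n_bits).
    """
    total = sum(b for _, b in sections)
    assert total <= 64, "layout must fit in a 64-bit unsigned integer"

    def encode(**values):
        key, used = 0, 0
        for name, bits in sections:
            v = values[name]
            assert 0 <= v < (1 << bits), f"{name} out of range"
            used += bits
            key |= v << (64 - used)          # pack from the MSB downwards
        return key

    def decode(key):
        out, used = {}, 0
        for name, bits in sections:
            used += bits
            out[name] = (key >> (64 - used)) & ((1 << bits) - 1)
        return out

    return encode, decode

# Hypothetical layout: a 7-bit attribute (up to 128 distinct values),
# a 28-bit attribute and a 29-bit attribute.
encode, decode = make_rnck_codec([("a", 7), ("b", 28), ("c", 29)])
k = encode(a=19, b=29238771, c=0)
assert decode(k) == {"a": 19, "b": 29238771, "c": 0}

Because the sections are packed from the most significant bit down in sorting-priority order, sorting the resulting keys numerically is equivalent to sorting by the attributes in enumeration order, as stated in the properties above.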
§ EXAMPLES Practical implementations of Reversible Numeric Composite Key (RNCK) in multiple programming languages have been successfully used in production systems for some time. §.§ VariantKey VariantKey is a RNCK for Human Genetic Variants <cit.>. A reference implementation of VariantKey in multiple programming languages can be found at <cit.>: https://github.com/tecnickcom/variantkey

The VariantKey is composed of 3 sections arranged in 64 bit:

  bit  0-4  (5 bit)  : CHROM
  bit  5-32 (28 bit) : POS
  bit 33-63 (31 bit) : REF+ALT

Encoding example:

                       | CHROM | POS      | REF | ALT |
  Raw variant          | chr19 | 29238770 | TC  | TG  |
  Normalized variant   | 19    | 29238771 | C   | G   |

  VariantKey bin : 10011 0001101111100010010111110011 0001000101100000000000000000000
  VariantKey hex : 98DF12F988B00000
  VariantKey dec : 11015544076520914944

§.§ NumKey NumKey is a RNCK for Short Codes or E.164 LVN. A reference implementation of NumKey in multiple programming languages can be found at <cit.>: https://github.com/tecnickcom/numkey

The NumKey is composed of 3 sections arranged in 64 bit:

  bit  0-9  (2 x 5 bit) : COUNTRY (ISO 3166, 5 bit per letter)
  bit 10-59 (50 bit)    : NUMBER (E.164)
  bit 60-63 (4 bit)     : LENGTH

Encoding example:

         | COUNTRY (ISO 3166) | NUMBER (E.164) | NUM LEN |
  Number | IT                 | 123456         | 6       |

  NumKey bin : 01001 10100 00000000000000000000000000000000011110001001000000 0110
  NumKey hex : 4D000000001E2406
  NumKey dec : 5548434740922426374

§ CONCLUSIONS Reversible Numeric Composite Key (RNCK) is a new type of data key that combines the advantages of both Composite Keys and Surrogate Keys. It overcomes their limitations by being reversible, meaning that the original values can be decoded from the RNCK. RNCK also allows preserving some attribute sort and search properties. RNCK can only be used in certain cases, such as when the total number and maximum size of key attributes is relatively small. In these cases, RNCK can be applied very effectively to relational databases, static datasets, and key-value caching systems. Adopting RNCK can help reduce memory and storage requirements, while increasing query performance. RNCK has already been successfully used in production systems for several years.
http://arxiv.org/abs/2306.08627v1
20230614164155
Graph-Based Matrix Completion Applied to Weather Data
[ "Benoît Loucheur", "P. -A. Absil", "Michel Journée" ]
cs.LG
[ "cs.LG", "cs.CE", "physics.ao-ph" ]
Graph-Based Matrix Completion Applied to Weather Data Benoît Loucheur ICTEAM Institute UCLouvain Louvain-la-Neuve, Belgium [email protected] P.-A. Absil ICTEAM Institute UCLouvain Louvain-la-Neuve, Belgium [email protected] Michel Journée Dept. Climatology Royal Meteo. Inst. of Belgium Brussels, Belgium [email protected] July 31, 2023 ===================================================================================================================================================================================================================================================================================================================== Low-rank matrix completion is the task of recovering unknown entries of a matrix by assuming that the true matrix admits a good low-rank approximation. Sometimes additional information about the variables is known, and incorporating this information into a matrix completion model can lead to a better completion quality. We consider the situation where information between the column/row entities of the matrix is available as a weighted graph. In this framework, we address the problem of completing missing entries in air temperature data recorded by weather stations. We construct test sets by holding back data at locations that mimic real-life gaps in weather data. On such test sets, we show that adequate spatial and temporal graphs can significantly improve the accuracy of the completion obtained by graph-regularized low-rank matrix completion methods. Matrix Completion, Low Rank, Graph, Regularization, Time Series, Air Temperature § INTRODUCTION Matrix completion seeks to find a low-rank matrix when only a fraction of its entries are available. This branch of research has application in numerous research projects such as collaborative filtering (Netflix challenge <cit.>) <cit.>, traffic sensing <cit.>, image inpainting <cit.>, and gene expression imputation <cit.>. Sometimes it is known that there is a particular underlying structure that connects the data. Take for example the most famous case of collaborative filtering, the Netflix problem. In this case, each row corresponds to a user and each column to a movie. It has been shown that this rating matrix <cit.> admits good low-rank approximations. The low-rank structure can be understood from the fact that movies can be organized in a few genres, which are rated similarly by the same kinds of users. However, this low rank property is not the only form of structure; there can also be a graphical structure that represents connections between the row/column entities. In such a representation, the rows, resp. columns, are modeled as the nodes of a weighted undirected graph where the weights represent the similarity between the nodes. For example, in the case of the Netflix problem, one can create a graph that connects movies with a weight equal to the number of common actors between those movies. Missing data in weather time series can appear for various reasons, usually related to sensors deficiencies or communication intermittency. It was recently shown in <cit.> that matrix completion methods are able to outperform the state of the art for the completion of missing data in daily extreme temperature series from a network of weather stations. In this paper, we are interested in the application of graph-regularized matrix completion on air temperature data recorded with a 10-minute time resolution provided by the Royal Meteorological Institute (RMI) of Belgium. 
We show that adding a regularization by spatial and temporal graphs considerably improves the accuracy of low-rank matrix completion. We also investigate how the pattern of missing data influences the outcomes. A comparison of performance with the state of the art is also performed. Notation: Lowercase and uppercase boldface letters stand for vectors and matrices respectively. A_F is the Frobenius norm of the matrix A. x_* denotes the ℓ_1 norms of vector x. Finally, I is the identity matrix and 0 is the zero matrix. § BACKGROUND AND RELATED WORKS Matrix completion is the task of recovering all the entries of a matrix M by observing only a subset Ω of them. Formally, given a matrix M of size m× n, we have only access to |Ω| ≪ m· n entries and the goal is to predict the remaining unobserved ones in Ω̅. §.§ Convex Models A classical version of the matrix completion problem consists in finding a minimum rank matrix X which is equal to the observation matrix M in the set Ω: min_X∈ℝ^m× n (X), s.t. M_ij=X_ij, ∀ i,j ∈Ω. Unfortunately, solving such problem is hard <cit.>, and moreover this problem formulation is inadequate when the available data matrix is not exactly low rank due, e.g., to measurement noise. In order to reduce the complexity of this problem, <cit.> proposed to deal with the tightest possible convex relaxation of the rank operator, which is called the nuclear norm. The relaxed formulation is as follows: min_X∈ℝ^m× n X_* s.t. M_ij = X_ij, ∀ i,j ∈Ω. The model can then be generalized to take into account the measurement noise. The constraint is then relaxed and the optimization problem becomes: min_X∈ℝ^m× n 1/2𝒫_Ω(M)-𝒫_Ω(X)_F^2 + λX_*, where λ > 0 is a regularization parameter. §.§ Factorized Models When the sparse matrix X is large, which is usually the case for recommendation systems, it is more efficient to impose a low-rank structure on X by representing it in factorized form, X=AB^T, where A and B have few columns. min_A∈ℝ^m× r B∈ℝ^n× r 1/2𝒫_Ω(M)-𝒫_Ω(AB^T)_F^2 + λ_a/2A_F + λ_b/2B_F, where λ_a,λ_b > 0 are two regularization parameters and r is the target rank of the matrix. Formulation (<ref>) and (<ref>) are closely related in view of the following property of the nuclear norm <cit.>: X_* = min_A∈ℝ^m× r B∈ℝ^n× r A_F^2+B_F^2 s.t. X = AB^T. §.§ Graph-based Regularization For simplicity, we present the case of a graph on the rows of M; since the columns of M are the rows of M^T, the extension to a graph on the columns of M is direct. Hence assume the availability of a graph 𝒢^a = (V^a,E^a,W^a) on the rows with vertices V^a={1,⋯,m}, edges E^a⊆V^a×V^a and non-negative weights on the edges represented by the symmetric m × m matrix W^a. If there is an edge between the nodes i and j, then W_ij^a=W_ji^a≠ 0. This information in the form of a graph can then be incorporated into the minimization (<ref>) using the following term <cit.>: 1/2∑_i,jW_ij^aa_i-a_j_2^2=(A^T𝐋𝐚𝐩(W^a)A), where 𝐋𝐚𝐩(W^a) = D^a - W^a is the graph Laplacian of W^a, D^a is the diagonal degree matrix, with D_i,i^a=∑_i≠ jW_i,j^a. The left-hand side term of (<ref>) favors a similarity between the rows a_i and a_j of A whenever W_ij^a is large. The reasoning is the same for the graph on the columns 𝒢^b. In view of the above, the problem of matrix completion over graphs can be formulated as follows <cit.>: min_A∈ℝ^m× r B∈ℝ^n× r 1/2𝒫_Ω(M)-𝒫_Ω(AB^T) + λ_L/2{(A^T𝐋𝐚𝐩(W^a)A) + (B^T𝐋𝐚𝐩(W^b)B)} +λ_a/2A_F^2 + λ_b/2B_F^2. 
This formulation having two terms of regularization by graphs, it is more commonly called Graph-Regularized Matrix Completion (GRMC). A method called GRALS, to address (<ref>) by applying a conjugate gradient method alternatively to A and B, is provided in <cit.>. §.§ State of the Art In <cit.>, the following methods were compared on weather data completion tasks. IDW (Inverse Distance Weighting) is a widely-used method for estimating missing data in meteorological and other time series datasets. The approach uses a weighted average of neighboring observed data points, with weights computed by the inverse distance between the missing data points and the neighbors. IDW is simple, computationally efficient, and can produce accurate estimation when the underlying data has a smooth spatial structure. However, its performance is affected by outliers or uneven data distributions. PCA <cit.> (Principal Component Analysis) is another method for the estimation of missing data. The approach involves decomposing the data into its principal components, which capture the most important patterns or variation in the data. Then, the missing data can be estimated using a linear combination of the principal components based on the observed values in the corresponding time periods. SoftImpute <cit.> is a matrix completion algorithm that uses the Singular Value Decomposition (SVD) to estimate the low-rank structure and then applies a soft-thresholding operator to shrink small singular values. The formulation of SoftImpute is as follow: min_X∈ℝ^m× n X_*, s.t. 𝒫_Ω(M)-𝒫_Ω(X) ≤δ. Finally, RTRMC <cit.> addresses the matrix completion problem by applying a trust-region methods to the following optimization problem on the Grassmann manifold 𝒢^m× r: min_𝒜∈𝒢^m× rmin_B∈ℝ^n× r1/2𝒫_Ω(M)-𝒫_Ω(AB^T)_F^2 + λ^2/2𝒫_Ω̅(AB^T)_F^2, where Ω̅ is the complement of the set Ω. § GRAPH GENERATION In addition to the estimation of the classical hyperparameters of a matrix completion method, we also need to generate two graphs, one for the rows of the matrix (i.e., each timestamp) and one for the columns (i.e., each station). In this subsection, different ways to create our spatial and temporal graphs are described. §.§ Spatial Graph The graph of the matrix columns is in our case the spatial graph. Each node represents a weather station and we are free to create a weighted edge or not between two stations. We have compared different graph possibilities with various levels of complexity. The most basic approach is the K-Nearest Neighbors (KNN), in which case each station is connected by an edge to its K nearest neighbors using the geographic distance. (see Fig. <ref>). It is then possible to add constraints on the generation of the KNN graph by adding as weights for the edges the inverse of the distance between the two stations. Thus, the closer the stations are, the more strongly they are connected (see Fig. <ref>). The visualization of the weights in the graph is done via the transparency and thickness of the edges. The more important the weight is, the darker and thicker the edge will be. A third version of the spatial graph consists in adding a constraint of maximum altitude difference between two stations. If the altitude difference between two stations is greater than a chosen threshold value, then this connection is ignored. This avoids the rather special case where two stations are very close spatially but have a large difference in altitude, e.g., more than 100 meters (see Fig. <ref>). 
Such cases can appear in areas with complex orography and can negatively impact the completion of missing data. By comparing the Fig. <ref> and Fig. <ref>, no changes are observed for the edges in the north of Belgium. This is due to the relatively flat terrain in this geographical area which will then never trigger the altitude limit condition. In contrast, the southern part of Belgium has a much more complex orography which causes changes in the edges. §.§ Temporal Graph The graph of the matrix rows is in our case the temporal graph. Each node represents a date and to link them we use the notion of lag set ℒ which is the (repeating) dependency pattern of the graph. Fig. <ref> shows the simplest lag set when ℒ=[1]. This means that the temperature measured at time t will be connected to the temperature at time t-10 minutes and t+10 minutes. Fig. <ref> shows the lag set ℒ=[1,2,3], which creates a link between the current measurement and the three previous and following ones. The first lag set only tries to capture the immediate relationship or dependency between two consecutive measures, whereas the second lag set tries to capture a slightly longer-term relationship. Once the temporal graph is created, we have the possibility to choose weights for its edges. In our experiments, we chose two different rules for the attribution of weights. The first trivial one is when w_i = 1 which allows us to have in a non-weighted graph. The other possibility considered is to use the rule w_i=1/i to emphasize the importance of short-term relationships in the data, as shorter lags often reflect more immediate or direct dependencies, while longer lags may be more influenced by noise or other indirect factors. § MACHINE LEARNING MODEL All our experiments were performed with data provided by the RMI, consisting of air temperature measurements from 50 automatic weather stations located in Belgium, which provide data with a temporal resolution of 10-min. We consider in this study the data for the period from January 1, 2020 to March 1, 2022. The source code is available at <https://github.com/bloucheur/GRMCWeatherData>. Implementations of the existing algorithms are also publicly available, however the dataset we use is confidential. An open source dataset containing similar data is available. §.§ Generation of Missing Weather Data To evaluate the performance of our model, we have to synthetically create missing entries in the matrix M. These synthetic missing data will be used to evaluate the error committed by our model once the completion of these missing data is calculated. However, as we will show in our results in Section <ref>, the pattern of missing data significantly impacts the quality of the completion. In our case, two types of patterns are considered. The first one, represented in Fig. <ref>, considers that the missing data are in the form of long duration blocks (between one and 3 days). While the second scenario, represented in Fig. <ref>, considers synthetic holes of small duration (10 minutes to 2 hours). It is important to note that in total the number of missing data generated for both scenarios is the same. Furthermore, the number of artificially generated missing data is 10% of the maximum number of entries in the matrix, more formally: |Ω_Block| = |Ω_Spread|= 0.1*mn. Note that this condition is not applied in Fig. <ref> and Fig. <ref> as these represent an artist's view of our two scenarios. 
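A minimal sketch of the graph construction and of the synthetic gap generation described above is given below. The station coordinates and altitudes, the choice of K, the 100 m altitude limit, the lag set and the gap lengths are illustrative values rather than the exact configuration used in the paper.

import numpy as np

def spatial_graph(coords_km, altitude_m, k=5, max_dalt_m=100.0):
    """Weighted KNN adjacency between stations (columns of M): edges to the
    k nearest stations, weighted by the inverse distance, and dropped when
    the altitude difference exceeds max_dalt_m."""
    n = len(coords_km)
    d = np.linalg.norm(coords_km[:, None, :] - coords_km[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[1:k + 1]:          # index 0 is the station itself
            if d[i, j] > 0 and abs(altitude_m[i] - altitude_m[j]) <= max_dalt_m:
                W[i, j] = W[j, i] = 1.0 / d[i, j]
    return W

def temporal_graph(n_times, lags=(1, 2, 3)):
    """Lag-set adjacency between timestamps (rows of M), with weights w_i = 1/i."""
    W = np.zeros((n_times, n_times))
    for lag in lags:
        for t in range(n_times - lag):
            W[t, t + lag] = W[t + lag, t] = 1.0 / lag
    return W

def laplacian(W):
    """Graph Laplacian Lap(W) = D - W used in the tr(A^T Lap(W) A) regularizers."""
    return np.diag(W.sum(axis=1)) - W

def make_gaps(shape, gap_len_range, frac=0.10, seed=0):
    """Boolean mask of synthetic gaps covering about frac of the entries.
    Block scenario : gap_len_range = (144, 432)  # 1 to 3 days of 10-min samples
    Spread scenario: gap_len_range = (1, 12)     # 10 minutes to 2 hours"""
    rng = np.random.default_rng(seed)
    n_t, n_s = shape
    mask = np.zeros(shape, dtype=bool)
    while mask.sum() < frac * n_t * n_s:
        length = rng.integers(gap_len_range[0], gap_len_range[1] + 1)
        s, t0 = rng.integers(n_s), rng.integers(n_t - length)
        mask[t0:t0 + length, s] = True
    return mask

With M being one week of 10-minute data (1009 × 50), both scenarios hide approximately the same total number of entries, consistent with |Ω_Block| = |Ω_Spread| = 0.1·mn.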
§.§ Hyperparameter Tuning The graph regularized matrix completion model we study, named GRALS<cit.>, has hyperparameters to choose from. In particular, the value of the rank (r) for the factorization, the three constants of regularization (λ_L,λ_a and λ_b) as well as variables of our design for the generation of our graphs which are explained above. We also want to simulate our missing data according to particular patterns as shown in Fig. <ref>. The machine learning model that corresponds to our expectations is the Monte Carlo Cross-Validation <cit.>. The advantage of this evaluation technique is that it allows us to simulate missing data according to a pattern that we can define. We first split our datasets in two, the data from January 1, 2020 to August 31, 2021 are part of the training set, the rest (i.e. September 1, 2021 to March 1, 2022) are part of the testing set. For each experiment, we consider one week of data for all stations, i.e., the observation matrix M∈ℝ^1009× 50. In order to perform the hyperparameter tuning, we select 10 weeks without gaps and that do not intersect in the training set. For each of these 10 weeks, we generate 5 patterns of missing data, to create our sparse matrix M, according to the scenario imposed for the experiment. The optimal set of hyperparameters is determined as the one that results into the lowest RMSE when averaged over the 5× 10 folds. It is important to note that the weeks selected for the two scenarios are the same, only the way the missing data are generated differs. For the final evaluation on the test set, the same missing data generation methodology is applied with 5 different weeks and 3 generations of missing data patterns. § EXPERIMENTAL RESULTS §.§ Comparison of the methods TABLE <ref> shows the values of RMSE for every method on the test set. The results indicate that the performance of GRALS, under our experimental conditions, is comparable to the other low-rank matrix completion methods (SoftImpute and RTRMC). In the remainder of this experimental section, we will conduct ablation studies on GRALS to evaluate how much the achieved RMSE depends on its spatial and temporal graph regularization. §.§ Graph Part TABLE <ref> represents all the hyperparameters of the graph regularized matrix completion model that we have. For each hyperparameter, we define a set of possible and relevant values. Due to the high number of hyperparameters in our model, we generate all possible combinations of hyperparameters, and a sampling without replacement is performed. This approach is based on the RandomizedSearchCV model from Scikit-learn <cit.>. The last two columns of this table represent the optimal hyperparameters obtained during the training phase for the two scenarios considered. Whether it is the block or spread scenario, the spatial graph requires a large number of K neighbours and it is important to note that both require a weighted spatial graph coupled with the altitude boundary condition. TABLE <ref> represents different RMSE values on the test set by imposing or not a priori conditions on some hyperparameters. The Case #1 does not impose any constraints, and we consider these results as a reference. Setting all regularization terms to zero (λ_L=λ_a=λ_b=0) drastically decreases the performance of the completion regardless of the missing data pattern scenario. Case #5 and Case #6 show the impact of considering only a temporal graph (Lap(W^b)=0) or only a spatial graph (Lap(W^a)=0). 
A comparison between Case #1 and Case #6 shows that the temporal graph contributes little to the resulting completion performance in the case of the Block scenario. On the other hand, in the Spread scenario, removing the temporal graph significantly impacts the completion with an average RMSE that increases by 35%. Then, by comparing Case #1 and Case #5, we see that the spatial graph is important for both scenarios. This is a rather expected result: the layout of the weather stations as well as the continuous nature of the temperature in space, reinforces the idea that linking stations close in space could improve the results. § CONCLUSION It is known <cit.> that the low-rank matrix completion methods SoftImpute and RTRMC are very effective for completing missing meteorological data. Our experiments (Section <ref>) have shown that a graph-regularized low-rank matrix completion method, termed GRALS, is also very effective, and that its graph regularization is essential to obtain competitive results. This opens two avenues of research: (i) enhance RTRMC and SoftImpute with graph regularization and (ii) improve the construction of the graphs. Currently, our graphs are built using only the metadata at our disposal (i.e., geographic coordinates of the stations as well as their altitudes). In our future research, we will attempt to build adaptive graphs based on the measurements. By defining the notion of distance between two time series, it is possible to perform clustering and thus generate a spatial graph. For the temporal graph, it is possible to fit an autoregressive model on the known data, thus giving us weight values for each weight of the lag set. IEEEbib
http://arxiv.org/abs/2306.03975v2
20230606191747
Revisiting Conversation Discourse for Dialogue Disentanglement
[ "Bobo Li", "Hao Fei", "Fei Li", "Shengqiong Wu", "Lizi Liao", "Yinwei Wei", "Tat-Seng Chua", "Donghong Ji" ]
cs.CL
[ "cs.CL" ]
[email protected] Wuhan University China National University of Singapore Singapore Singapore [email protected] [email protected] Wuhan University China National University of Singapore Singapore Singapore Management University Singapore Singapore National University of Singapore Singapore National University of Singapore Singapore [email protected] Wuhan University China Dialogue disentanglement aims to detach the chronologically ordered utterances into several independent sessions. Conversation utterances are essentially organized and described by the underlying discourse, and thus dialogue disentanglement requires the full understanding and harnessing of the intrinsic discourse attribute. In this paper, we propose enhancing dialogue disentanglement by taking full advantage of the dialogue discourse characteristics. First of all, in feature encoding stage, we construct the heterogeneous graph representations to model the various dialogue-specific discourse structural features, including the static speaker-role structures (i.e., speaker-utterance and speaker-mentioning structure) and the dynamic contextual structures (i.e., the utterance-distance and partial-replying structure). We then develop a structure-aware framework to integrate the rich structural features for better modeling the conversational semantic context. Second, in model learning stage, we perform optimization with a hierarchical ranking loss mechanism, which groups dialogue utterances into different discourse levels and carries training covering pair-wise and session-wise levels hierarchically. Third, in inference stage, we devise an easy-first decoding algorithm, which performs utterance pairing under the easy-to-hard manner with a global context, breaking the constraint of traditional sequential decoding order. On two benchmark datasets, our overall system achieves new state-of-the-art performances on all evaluations. In-depth analyses further demonstrate the efficacy of each proposed idea and also reveal how our methods help advance the task. Our work has great potential to facilitate broader multi-party multi-thread dialogue applications. Revisiting Conversation Discourse for Dialogue Disentanglement Donghong Ji July 31, 2023 ============================================================== § INTRODUCTION Multi-turn multi-party conversations are often characterized by intertwined utterances with many speakers, and multiple coexisted topic threads <cit.>. This adds challenges to the dialogue understanding and responding. Dialogue disentanglement has thus been proposed with the aim of decomposing entangled utterances from different threads or sessions <cit.>, i.e., finding the reply-to relations between the chronologically-listed utterances (cf. Figure <ref>). Earlier research builds handcrafted discrete features with machine learning models to cluster utterances into different sessions or predict whether there is a replying relation between utterances <cit.>. Currently, the rapid development of deep neural models has greatly advanced the dialogue disentanglement task <cit.>, especially by making use of the pre-trained language models (PLM) <cit.>, e.g., BERT <cit.>, DiaBERT <cit.>. As extensively revealed, the essence of the task lies in the understanding of the underlying conversational discourse <cit.>, and thus it is key to model the discourse structure of the dialogue. 
Despite the progress of dialogue disentanglement achieved by prior efforts, existing explorations, unfortunately, still need to harness the conversational discourse nature fully. At feature modeling perspective, current research fails to sufficiently make use of the conversational discourse structural information. In the dialogue context, there are various types of discourse structures beneficial to the dialogue disentanglement task. The first type is the speaker-utterance discourse. In the multi-party dialog, due to the participant persona consistency, different utterances by the same speaker may exhibit identical styles. Thus, capturing the speaker-utterance correlations will facilitate the responding recognition. Second, the speaker-mentioning discourse. The speaker coreferences indicate the interactions between different utterances and speakers, which are the value clues to the detection of replying relation. Third, the utterance-distance discourse. A conversation usually involves a great number of turns, in which the valid contexts that offer critical features for the replying reasoning actually center around the current utterance (i.e., near neighbors), and the farther away, the lower the efficacy of the features. The Ubuntu IRC data <cit.> statistics shows that the average turn of conversation is 10.17, while 80%/90%/95% replying relations are scattered within 6/13/21 utterances forward and backward, respectively. Fourth, the partial-replying discourse. In fact, the incorporation of the previously-discovered partial replying structure is beneficial to the detection of the following replying relation. For example as shown in Figure <ref>, directly determining the reply-to relation #6^↷#3 can be tricky; while knowing the replying relation of #3^↷#2 as prior, the detection of #6^↷#3 can be greatly eased. At system optimization perspective, existing works disregard the dialogue discourse characteristic for model training and decoding. On the one hand, utterances are governed under both the local replying thread and global session discourses, and thus, the dialogue disentanglement task measures both the session-level and pair-wise detection. Unfortunately, most current models only train with the pair-wise cross-entropy loss without considering the higher-level optimization (e.g., thread, session) <cit.>. On the other hand, existing methods take a front-to-end reading order to decode the replying relation for an utterance with merely the precedent context. Yet this decoding approach can be less effective and provide less informative results as the semantics of a conversation are organized in a hierarchical structure rather than a linear chronological order. Intrinsically, as humans, we always first recognize those utterances with simple replying relations, then try the harder cases gradually with more clues. Taking Figure <ref> as an example with #5 as the current utterance, it can be easier to first determine the replying dependency of #8^↷#5 with the cue word `mac' in the following context. Based on the established replying structure #8^↷#5, the hard one of #5^↷#1 can be further detected without much effort. In this work, we rethink the conversation discourse to dialogue disentanglement, and propose to enhance the task by giving full consideration to the above observations. First of all, to take advantage of the rich conversational discourse features (<ref>), we construct the four types of graphs to represent the aforementioned discourse structures. 
We consider the two static speaker-role graphs, including the speaker-utterance and speaker-mentioning structures. Also, we build the two dynamic contextual structures, including a Gaussian-based utterance-distance structure and a dynamically updated partial-replying structure. We further develop a structure-aware framework (<ref>) for dialogue disentanglement. As shown in Figure <ref>, we encode and integrate the various heterogeneous graphs with edge-aware graph convolutional networks (EGCN), where the resulting rich structural features aid the modeling of the intrinsic conversational contexts. Figure <ref> illustrates our enhancement for dialogue disentanglement in different stages: feature encoding, model learning, and inference. Further, we propose to optimize the learning and inference of the above framework, following the hierarchical nature of conversation discourse. First, for model training, we devise a hierarchical ranking loss mechanism (cf <ref>). We group the candidate parents of the current utterance into different discourse levels within a dialogue, based on which we define the learning losses under three hierarchical levels, covering both the pair-wise and session-wise optimizations. Then, during inference, we introduce an easy-first relation decoding algorithm (cf <ref>). By consulting both the precedent and subsequent context, the utterance-parent pairing procedure is taken place in an easy-to-hard manner without following the sequential order. Specifically, we maintain a global utterance-pair scoring matrix, and at each decoding step, the utterance pair with the highest score will be selected. Also, the established replying pair is incrementally added into the partial-replying structure to update features for further facilitating the follow-up inference. We conduct experiments on two benchmark datasets, including the Ubuntu IRC <cit.> and Movie Dialogue <cit.>. The results show that our overall system outperforms the current state-of-the-art (SoTA) baselines with significant margins on all datasets and metrics. Model ablation studies prove the necessity of integrating the various dialogue discourse structure information, the hierarchical ranking loss mechanism, and also the easy-first decoding algorithm for dialogue disentanglement. Additionally, our experiments highlight the effectiveness of our discourse structure-aware method, particularly in scenarios with longer utterance distance and multiple participating speakers. Furthermore, our in-depth analysis of the Hierarchical Ranking Loss mechanism reveals its crucial role in rectifying prediction errors and improving dialogue disentanglement performance. The insights derived from our experimental analysis also underscore the power of the easy-first decoding strategy in facilitating confident decision-making, thereby demonstrating its significant advantage in the field of dialogue disentanglement. Moreover, we conducted an experiment comparing our model to GPT-3.5 for the task of dialogue disentanglement. The results of our experiment demonstrate a clear superiority of our model over GPT-3.5, indicating a significant advancement in the dialogue disentanglement task. These outcomes suggest that larger language models, despite their comprehensive capabilities, exhibit discernible shortcomings in the dialogue disentanglement task, particularly with respect to effectively capturing dialogue discourse structures. 
Based on these insights, we validate the effectiveness of our model and emphasize the necessity of specialized optimization for models tailored to dialogue discourse.

All in all, this paper revisits the discourse attribute of conversation for better dialogue disentanglement. To our knowledge, this is by far the first work taking full consideration of dialogue discourse, from feature modeling to model optimization. To aid understanding, we summarize our key contributions as follows.

* We construct various dialogue discourse structural features to enrich the dialogue contexts.

* We propose a hierarchical ranking loss method to cover both pair-wise and session-wise task learning.

* We present an easy-first decoding algorithm to enable highly effective inference of replying relations.

Our work can also be instructive for a wider range of multi-party multi-thread dialogue applications without much extra effort. To facilitate follow-up research, we will release all our code and metadata upon acceptance.

§ RELATED WORK

§.§ Dialogue Disentanglement

Dialogue disentanglement, also known as conversation disentanglement, conversation management, or thread detection <cit.>, has long been an important research topic in the field of dialogue understanding. Dialogue disentanglement is also the prerequisite for a wide range of downstream applications, such as response generation <cit.>, dialogue state tracking <cit.>, dialogue-based machine reading comprehension <cit.>, and dialogue information extraction <cit.>. The task has been extensively modeled as pair-wise relation classification, i.e., determining the reply-to relation between utterance pairs. Early works rely heavily on discrete handcrafted features with machine learning models <cit.>, casting the task as pair-wise relation prediction over utterances. Later research frequently utilizes deep learning models to train classifiers <cit.>, especially with the use of contextual features from PLMs <cit.>. For example, <cit.> transform dialogue disentanglement into a link prediction problem with LSTM and CNN models. <cit.> leverage the pointer network to achieve online decoding. <cit.> propose to learn utterance-level and session-level representations based on contrastive learning.

Representing the key properties of dialogue utterances with effective features is pivotal to dialogue disentanglement. Prior research has explored various conversational features, e.g., time, speakers, topics, and dialogue contexts <cit.>, at different levels, e.g., inter-utterance features <cit.> and session-level features <cit.>. More recently, <cit.> emphasize the learning of conversational discourse information for the task. The dialogue discourse structure depicts the intrinsic semantic layout and organization of dialogues. By integrating the speaker-utterance and speaker-mentioning structural features to enhance discourse understanding, they achieve the current SoTA performance. This work follows the same line and extends the success of <cit.> in modeling discourse structure information, yet ours is more advanced than theirs in three major aspects. Firstly, <cit.> consider merely the two types of static speaker-role structures, which may lead to limited performance improvement. In contrast, we further consider two dynamic contextual structures. Specifically, the utterance-distance structure helps instruct the model to allocate attention to utterances according to cross-utterance distances, instead of treating them equally as in <cit.>.
And the partial replying structure capturing the direct task clues are overlooked by the existing works. Secondly, we fully consider the hierarchical discourse structure of the utterance nodes, optimizing the system on the session cluster-level performances with a novel hierarchical ranking loss, instead of the sorely pair-wise measuring in <cit.>. Thirdly, we develop an easy-first decoding algorithm to break the constraint of the traditional front-to-end manner with only the precedent context. We borrow the easy-first strategies from the syntax parsing community <cit.>, where the most confident transition decision will be selected at current step based on the global-level context, instead of the uni-directional decoding direction. §.§ Discourse Structure Modeling To gain a deeper understanding of conversations, it is essential to go beyond semantic content and incorporate the discourse structure, which encompasses elements such as speaker roles, replying relations, and dialogue threading. Analyzing a conversation's discourse structure poses a greater challenge compared to examining flattened documents that only exhibit linear structures. Consequently, researchers have endeavored to explore accurate methods of encoding discourse features, which can be broadly classified into three categories: hierarchical structure modeling, speaker-oriented modeling, and graph-based modeling. Hierarchical structure modeling, which captures the macro-to-micro hierarchy from the overall dialogue down to specific threads, utterances, and words, is the first among these. This structure offers a valuable perspective for understanding dialogue. Consequently, hierarchical modeling has found widespread application in diverse dialogue-related tasks such as dialogue emotion recognition <cit.>, dialogue state tracking <cit.>, dialogue-based question answering <cit.>, and dialogue generation <cit.>, to name a few. Secondly, the role of the speaker is pivotal in dialogue modeling. Sentences uttered by the same speaker often demonstrate consistency. Moreover, the interpretation of an utterance can vary significantly depending on the speaker to whom it is addressed. In light of these complexities, a range of models have been proposed to exploit the role of the speaker in dialogue. DialogueRNN <cit.> was among the first to model dialogues from the speaker's perspective, thereby better capturing the speaker's emotions and states. SCIJE <cit.> utilizes speaker-aware interactions to model different and similar speakers in the context of conversation representations. Dynamic speaker modeling has also found applications in other tasks, such as dialogue act classification <cit.> and speaker classification <cit.>. Third, discourse graph modeling provides a new perspective for dialogue modeling involving speakers, replying relations, co-reference, and multi-threads. This integration into a graph architecture promises a more comprehensive and precise dialogue modeling. DialogueGCN <cit.> considers speaker-level interaction and utterance relative distance for emotion recognition. D2G <cit.> jointly considers speaker and context features for dialogue relation extraction. DSGFNet <cit.> uses a dynamic scheme for encoding multi-domain state tracking tasks. Given the promising applications of graph-based methods in numerous dialogue-related tasks, this study also employs a graph-based model for dialogue discourse modeling. 
Distinguishing from previous studies that typically consider only one or two kinds of edges for modeling, we incorporate four kinds of edges to model discourse features, offering the most comprehensive approach for discourse feature exploration and a deep understanding of dialogue structures. § PRELIMINARY §.§ Task Formulation Given a dialogue U={u_1,⋯, u_n} in chronological order, each utterance u_i=<s_i,o_i> consists of the speaker s_i and text content o_i. Also we define a thread set T={t_1,⋯,t_p} as a partition of U, with t_p={t_p1,⋯, t_pk} representing a thread of dialogue. Our goal is to find all reply-to relations between utterances y_k={u_i^↷u_j} (i≥ j), via which we disentangle the U into T. §.§ Construction of Conversational Discourse Structures To facilitate the identification of reply-to relations and to provide a robust representation of dialogue dynamics, we propose constructing four types of dialogue discourse structures. The first two types, the speaker-utterance structure (denoted as G^S=<V, E^S>) and speaker-mentioning structure (G^M=<V, E^M>), capture the static speaker roles. On the other hand, the remaining two types, the utterance-distance structure (G^D=<V, E^D>) and partial-replying structure (G^R=<V, E^R>), describe the dynamic discourse contexts. These structures derive four heterogeneous graphs, with the same set V of utterance node u_i, but different definitions of edges E^S/M/D/R. To strengthen the message passing, we also make four graphs bidirectional, and add the self-loop link for all nodes. Speaker-utterance Structure. Throughout a multi-party dialogue, each participant speaks multiple utterances, which are interspersed with utterances from other speakers. Generally, due to the persona, different utterances by the same speaker may exhibit a consistent style. The interaction between these utterances can help capture their consistency and contribute to the overall understanding of the dialogue. To represent this interaction, we create speaker-utterance edges, linking all utterances generated by the same speakers. For example, as shown in Figure <ref>, we connect u_1 and u_8, both uttered by krystoff. Similarly, for utterances u_2, u_6, and u_8 from the same speaker rubem, we add links between each pair. Technically, we denote an edge e^S_i,j=1 in E^S if there is a connection between utterances u_i and u_j, otherwise e^S_i,j=0. Since the speaker relation is bidirectional, we add e^S_i,j=1 and e^S_j,i=1 to E^S if u_i and u_j are from the same speaker. Finally, we complete the graph to yield speaker-utterance structures. Speaker-mentioning Structure. Within multi-party dialogues, it is a commonplace for speakers to reference other participants in their responses, as demonstrated in Figure <ref>. This speaker co-reference information carries crucial indicators of the participants' roles within the conversation. Therefore, enhancing speaker-mentioning relationships will contribute significantly to the comprehension of replying relationships and dialogue structures. To establish these relationships, we create speaker-mentioning edges between the corresponding utterances. For example, in Figure <ref>, we establish a connection between u_1 and u_5 since u_5:“sgarrity, never heard of. I know” refers to sgarrity, the originator of u_1. Likewise, we connect u_4 with u_3 and u_7 individually, where the speaker of u_3 and u_7 is bob2, who is mentioned within u_4. 
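To make the construction of these two static speaker-role structures concrete, the sketch below shows one possible implementation, assuming each utterance arrives as a (speaker, text) pair; the helper name build_speaker_graphs and the substring-based mention matching are illustrative simplifications rather than the exact procedure used in our system.

```python
from itertools import combinations

def build_speaker_graphs(utterances):
    """Build the static speaker-role structures as 0/1 adjacency matrices.

    `utterances` is a chronologically ordered list of (speaker, text) pairs.
    Returns the speaker-utterance graph E^S and the speaker-mentioning
    graph E^M, both symmetric and with self-loops added.
    """
    n = len(utterances)
    E_S = [[int(i == j) for j in range(n)] for i in range(n)]  # self-loops
    E_M = [[int(i == j) for j in range(n)] for i in range(n)]

    for i, j in combinations(range(n), 2):
        s_i, o_i = utterances[i]
        s_j, o_j = utterances[j]
        # Speaker-utterance edge: the two utterances share the same speaker.
        if s_i == s_j:
            E_S[i][j] = E_S[j][i] = 1
        # Speaker-mentioning edge: one utterance mentions the other's speaker.
        if s_j in o_i or s_i in o_j:
            E_M[i][j] = E_M[j][i] = 1

    return E_S, E_M
```

In the running example of Figure <ref>, this links u_1 and u_8 in E^S (same speaker krystoff) and u_1 and u_5 in E^M (u_5 mentions sgarrity).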
Formally, if u_i or u_j mentions the speaker s_j or s_i, the edge e^M_i,j=1 and e^M_j,i=1 is established in E^M. Otherwise, e^M_i,j and e^M_j,i is set to 0. This process leads to the creation of the speaker mention graph. Utterance-distance Structure. The context crucial for determining reply dependencies is typically interspersed around the current utterance, with a positive correlation between the degree of dependency and the distance between the two utterances. This suggests a relational dynamic where closer utterances have a higher likelihood of forming reply-to pairs, while the influence between utterances weakens as the distance between them increases. To model this tendency, we introduce the concept of utterance-distance, which is characterized using a Gaussian prior distribution. As depicted in Figure <ref>, utterances within shorter ranges are assigned with higher edge weights, while those at longer distances receive the converse treatment, being assigned lower weights. The formulation for the initial relationship representation is as follows: e^soft_i,j = exp(Biaf(h_i, h_j)))/∑_kexp(Biaf(h_i, h_k)), Biaf (h_i,h_j) = [ h_i; 1 ]^T W^d [ h_j; 1 ] , Following <cit.>, we set the μ=0, σ=1/√(2π) for a Gaussian distribution 𝒩(μ,σ^2) to obtain a distance-aware weight f(d)=exp(-π d^2). Then we introduce this weight into Eq. (<ref>) to obtain the distance-aware weight: e^ajs_i,j = e^soft_i,j· f(d_i,j) = exp(-π(i-j)^2) ·exp(Biaf(h_i, h_j)) /Z_1 = exp(-π(i-j)^2 + Biaf(h_i, h_j)) /Z_1 =Softmax_j(G(i)), where Z_1=∑_jexpG(i)_j with G(i) being a vector and G(i)_j=-π (i-j)^2 + Biaf(h_i,h_j). To further refine the equation and balance the two items, we divide them by scale factors 2π√(2π) and √(|i-j|), respectively, following the operation in the Transformer <cit.> block. Thus, we obtain the revised presentation of e^ajs_i,j: G'(i)_j = (i-j)^2/2√(2π) + Biaf(h_i,h_j)/√(|i-j|) , e^D_i,j = Softmax_j(G'(i)) , where e^D_i,j is the distance-aware weight between u_i and other neighbor utterance u_j (j≠ i). As the distance between the i-th and j-th utterances increases, the weight e^D_i,j gradually decreases, resulting in a reduced interaction between them. Partial-replying Structure. As revealed above, the previously discovered partial reply-to structure can actually serve as an important dynamic discourse feature for the following detection. For example, in Figure <ref>, nalioth addresses rubem's question from u_2 in u_3 by saying “You can check via this: any KDE …”. In response, rubem is likely to express gratitude to nalioth in u_6. In essence, if speaker A responds to speaker B in an utterance, then speaker A may reply to B to sustain the dialogue. Consequently, we construct the partial-replying graph by connecting utterances involved in reply relations with edges e_i,j^R=1 and e_j, i^R=1 (e_i,j^R ∈E^R). During the training process, we incorporate all established reply-to relations between utterances to help to optimize the prediction process. During the inference phase, the reply-to relation is not pre-determined. In light of this, it's noteworthy that this particular structure G^R undergoes incremental updates as the task decoding progresses, ensuring the constant adaptation and refinement of the model. § STRUCTURE-AWARE FRAMEWORK In this section, we elaborate on the proposed structure-aware framework for dialogue disentanglement, which leverages rich conversational discourse information. 
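Before detailing the architecture, we note that the distance-aware weights above admit a compact implementation. The sketch below is a minimal, unoptimized version: the `biaffine` callable stands in for the Biaffine scorer of Eq. (<ref>), the tensor shapes are illustrative assumptions, and the quadratic term is applied as a penalty so that, as stated above, the weights decay as the utterance distance grows.

```python
import math
import torch

def distance_aware_weights(h, biaffine):
    """Compute the utterance-distance weights e^D for all pairs (a sketch).

    h: (N, d) tensor of utterance representations.
    biaffine: callable scoring a pair of representations, i.e. Biaf(h_i, h_j).
    Row i of the result holds e^D_{i,j} over neighbors j != i.
    """
    N = h.size(0)
    scores = torch.full((N, N), float("-inf"))  # -inf excludes j == i from the softmax
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            dist = abs(i - j)
            penalty = dist ** 2 / (2.0 * math.sqrt(2.0 * math.pi))
            scores[i, j] = -penalty + biaffine(h[i], h[j]) / math.sqrt(dist)
    return torch.softmax(scores, dim=-1)
```

In practice this computation is vectorized, but the scalar form makes the distance penalty and the per-row normalization explicit.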
Overall, the architecture of our model consists of three tiers, i.e., Utterance Encoding, Discourse Structure Modeling, Predicting, as illustrated in Figure <ref>. §.§ Utterance Encoding Initially, we convert the input utterances into contextual representations. This conversion employs a pair-wise approach to utterance encoding. Assuming the current utterance is denoted as u_c, in an attempt to better understand the context surrounding u_c, we incorporate both the preceding and succeeding ω utterances into its context window. In other words, we create a dialogue clip, U, comprising u_c-ω, ⋯, u_c, ⋯, u_c+ω. Subsequently, we formulate utterance pair text as follows: X_i = [CLS], u_i,[SEP],u_c,[SEP], where u_i denotes a potential parent utterance of u_c within the defined context window, U. The special token [CLS] symbolizes the overarching conversation, while [SEP] is employed to demarcate individual utterance pairs. In the role of a dialogue encoder, we utilize a Pretrained Language Model (PLM). At the heart of the PLM is a multi-head self-attention block, which calculates its weight via the following formula: head_i = Softmax((QW_i)(K^TW_i)/√(d))(VW_i), MultiHeadAtt(Q, K, V) = Concat(head_1,head_2,⋯,head_h)W_o . In this equation, the parameters Q,K,V are equivalent to the input x. Through stacking multiple multi-head self-attention blocks, the PLM is able to uncover and capture intricate contextual dependencies and semantic relationships within the input text. Therefore, we can derive the token-level representation through the PLM: {h_CLS, h_1, ⋯, h_n} = PLM( X_i ) , v_ci = h_CLS , where v_ci∈ℝ^d signifies the representation of the utterance pair {u_i,u_c}, i.e., the representation of u_c in relation to u_i. §.§ Discourse Structure Modeling Building upon the utterance representations, we proceed to encode the discourse structure features as delineated in Section <ref>. Our inspiration for this step draws from the spatial graph convolutional network (GCN) <cit.>. Consequently, we have designed an edge-aware GCN (EGCN), capable of modeling both the topological properties and edge labels of the graph. In the l-th layer of the EGCN, the input for the i-th node comprises node features r^t_ci(l) and an adjacency link set e^t_i,c-ω,e^t_i,c-ω+1,⋯, e^t_i, c+ω, where t∈M,S,D,R represents the graph type. In this scenario, the initial layer node feature r^t_ci(0) is initialized using the representation v_ci procured from the PLM. The adjacency link e^t_i,j is derived from the corresponding structure. As per the definitions in Section <ref>, the value of e^t_i,j in speaker-utterance structure, speaker-mentioning structure, and partial-replying structure is set to 1 if a link exists, otherwise it is set to 0. In the case of the utterance-distance structure, e^t_i,j is a continuous value that changes in accordance with the value of |i-j|. Finally, the output of the l-th layer for the i-th node is the updated feature r^t_ci(l+1). Notably, we utilize four distinct EGCNs for graph modeling, each corresponding to one of the four heterogeneous graphs (G^M, G^S, G^D, and G^R). The EGCN operation can be formalized as follows: r^t_ci(l+1) = Sigmoid( ∑_j=c-ω^c+ωπ^t_i,j(l)(W^t(l) r_cat^t,c,i,l + b^t(l)) ) , r_cat^t,c,i,l = [r^t_ci(l) ; r^t_cj(l) ; a^t ] . In this equation, W^t(l) and b^t(l) stand for the trainable parameters for the l-th layer. The edge label embedding of t type of graph is represented by a^t∈ℝ^a, and it is randomly initialized. 
EGCN weight π^t_i,j(l) guides the graph aggregation:

π^t_i,j(l) = e^t_i,j·Agg(i,j,t,l) /∑_k=c-ω^c+ω e^t_i,k·Agg(i,k,t,l) ,

Agg(i,j,t,l) = exp(Biaf([r^t_ci(l); a^t], [r^t_cj(l); a^t])) ,

where Biaf(·, ·) is the Biaffine function defined in Eq. (<ref>). It should be noted that we implement a total of L layers of EGCN propagation for each graph to ensure comprehensive learning of structural features. The final representation of each node for the corresponding discourse structure is the output of the last layer, r^t_ci(L). Following this, we fuse the heterogeneous graph features into a unified representation by an addition operation:

r_ci = r^M_ci(L) ⊕r^S_ci(L) ⊕r^D_ci(L) ⊕r^R_ci(L) .

§.§ Predicting

In addition to the non-linear discourse structure, the natural chronological order also provides vital cues for understanding the entirety of the dialogue. To effectively exploit the temporal dependencies embedded in dialogues, we utilize a BiLSTM model as the foundational component of our methodology <cit.>. Given the overfitting issues prevalent in the conventional LSTM model <cit.>, we incorporate a DropConnect BiLSTM <cit.> to more efficiently extract temporal dependencies. The following equations succinctly illustrate the underlying mechanism of this model:

i_t = σ(W_ix_t + (M_i ·U_i) h_t-1 + b_i) ,
f_t = σ(W_fx_t + (M_f ·U_f) h_t-1 + b_f) ,
o_t = σ(W_ox_t + (M_o ·U_o) h_t-1 + b_o) ,
c̃_t = tanh(W_cx_t + (M_c ·U_c) h_t-1 + b_c) ,
c_t = f_t ⊙c_t-1 + i_t ⊙c̃_t ,
h_t = o_t ⊙tanh(c_t) ,

where the parameters [W_*, U_*, b_*] are trainable, with * ∈{i,f,o,c}. M_*, a binary matrix, signifies the dropout connection; importantly, each entry M_*^ij follows a Bernoulli distribution with probability parameter p. Applying DropConnect to the hidden-to-hidden weight matrices [U_i, U_f, U_o, U_c] regularizes the model sufficiently, so that overfitting on the recurrent connections of the LSTM can be effectively mitigated <cit.>. For each time step, we adopt the node representation r_ci from the Edge-aware GCN (EGCN) as the input x_t. The hidden state at each time step in the final layer, denoted as h_ci, serves as the representation of the i-th node. Our architecture then incorporates a residual connection between the initial utterance representation v_ci, derived from the PLM, and the current discourse feature h_ci, which serves as the final contextual feature for the subsequent prediction:

ĥ_ci = h_ci⊕v_ci ,

where h_ci represents the output from the last layer of the BiLSTM. Building upon this discourse-enhanced contextual feature, a feedforward neural network (FFNN) is utilized to compute a pairwise score:

S(ci) = FFNN( ĥ_ci ) ,

where S(ci) serves as a unary score that signifies the confidence that u_c replies to u_i. Subsequently, a softmax layer is applied across all candidates to identify the final parent of u_c:

p_ci = exp(S(ci))/∑_c-ω <j≤ cexp(S(cj)) .

The candidate with the highest p_ci is identified as the parent. It is critical to remember that the dialogues are arranged in chronological order, so a potential parent can only precede u_c or be u_c itself,[Please note that a node u_c serves as its own parent when u_c is the start utterance of a session.] i.e., c-ω<i≤ c.
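Before moving on to training and inference, we give a minimal PyTorch sketch of a single EGCN layer as formalized above; the module name, dimension handling, and initialization details are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class EdgeAwareGCNLayer(nn.Module):
    """One edge-aware GCN layer for a single structure type t (a sketch)."""

    def __init__(self, hidden_dim, edge_dim):
        super().__init__()
        self.edge_emb = nn.Parameter(torch.randn(edge_dim))         # a^t, randomly initialized
        self.biaf = nn.Bilinear(hidden_dim + edge_dim, hidden_dim + edge_dim, 1)  # biaffine scorer
        self.proj = nn.Linear(2 * hidden_dim + edge_dim, hidden_dim)              # W^t(l), b^t(l)

    def forward(self, r, e):
        # r: (N, hidden_dim) node features r^t(l); e: (N, N) edge weights e^t_{i,j}.
        N = r.size(0)
        a = self.edge_emb.unsqueeze(0).expand(N, -1)                # append a^t to every node
        x = torch.cat([r, a], dim=-1)
        left = x.unsqueeze(1).expand(N, N, -1).reshape(N * N, -1)   # [r_i; a^t] for each pair
        right = x.unsqueeze(0).expand(N, N, -1).reshape(N * N, -1)  # [r_j; a^t] for each pair
        agg = e * self.biaf(left, right).view(N, N).exp()           # Agg(i, j) gated by e^t_{i,j}
        pi = agg / agg.sum(dim=-1, keepdim=True).clamp_min(1e-9)    # normalized weights pi^t_{i,j}
        msg = self.proj(torch.cat([r.unsqueeze(1).expand(N, N, -1),
                                   r.unsqueeze(0).expand(N, N, -1),
                                   a.unsqueeze(0).expand(N, N, -1)], dim=-1))
        return torch.sigmoid((pi.unsqueeze(-1) * msg).sum(dim=1))   # r^t(l+1)
```

Stacking L such layers per graph and summing the four graph-specific outputs reproduces the fusion step described above.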
§ FRAMEWORK TRAINING AND INFERENCE In this section, we elaborate on how the framework is training on the train set (cf <ref>), and then how to make predictions on the test set (cf <ref>). Both two stages are optimized by fully considering the conversational discourse characteristic. §.§ Learning with Hierarchical Ranking Loss The evaluation of dialogue disentanglement not only measures the pair-wise detection of the reply-to relation but also considers the correctness of the session cluster-level prediction. Considering a case where the parent of current utterance u_c is wrongly assigned as u_k that belongs to the same session with u_c, the prediction will be considered valid under the cluster-level perspective, even u_k is not u_c's parent. Therefore, different from regular learning tasks, if only minimizing the cross-entropy loss over gold replying relational pairs, the model will be guided to ignore the session-level learning and result in biased prediction. To this end, we propose a hierarchical ranking loss (HRL) for model optimization. Specifically, we consider the training of our system under multiple levels. Given the current utterance u_c, and currently other utterances U with established replying relations, we now need to determine the replying relation from u_c to any candidate in U. Considering that dialogues are characterized by intrinsic discourse with hierarchical structures, we can naturally divide all u_c's parent candidates into four different levels, as illustrated in Figure <ref>: * Parent utterance node (R_1), which is the first-order and only parent of u_c. * Ancestor utterance nodes (R_2), where the utterances are under the same thread, having a second-order relation with u_c in the two-hop distance. * Inner-session utterance nodes (R_3), where utterances belong to the same session with u_c but these utterances are not included in R_1 or R_2. * Outer-session utterance nodes (R_4), where utterances are separated in different conversational sessions with u_c. Based on the above division of utterances, we can denote the training objectives of HRL into three different levels: ℒ_1 = -log∑_u_i ∈ R_1expS(ci)/∑_u_j ∈ R_1∪ R_2∪ R_3∪ R_4expS(cj) , ℒ_2 = -log∑_u_i ∈ R_2expS(ci)/∑_u_j ∈ R_2∪ R_3∪ R_4expS(cj) , ℒ_3 = -log∑_u_i ∈ R_3expS(ci)/∑_u_j ∈ R_3∪ R_4expS(cj) , where ℒ_1 indicates the regular pair-wise optimization; ℒ_2 and ℒ_3 aim at the session-wise optimizations. S(ci) is the unary score obtained via Eq. (<ref>). We can summarize the final total loss as follows: ℒ = α_1 ℒ_1 + α_2 ℒ_2 + α_3 ℒ_3 . where α_*∈ (0, 1] are the weighting coefficients. §.§ Easy-first Replying Relation Decoding In the inference stage, all the existing work decodes the relying relation for each utterance from the front-to-back direction. A clear drawback of such practice is the local and shortsighted context modeling: the decision at each step is made with the prior information. Also, the conversation utterances are essentially organized with structured discourse, and thus it is not sensible to take a sequential decoding order. Here we mimic the human-like inference of the replying relation, executing the decoding by looking at the global contexts in a non-directional manner, i.e., with an easy-first decoding algorithm. We start from easy determining decisions and proceed to harder and harder ones. When making later decisions, the system has access to the entire structure built in earlier stages. 
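For concreteness, the hierarchical ranking loss can be sketched as below before we turn to the details of easy-first decoding; the level labels are assumed to be precomputed from the gold annotations of the training dialogue, and levels with no candidates are simply skipped.

```python
import torch

def hierarchical_ranking_loss(scores, levels, alphas=(1.0, 0.1, 0.05)):
    """Hierarchical ranking loss for one current utterance u_c (a sketch).

    scores: (C,) unary scores S(ci) over the candidate parents of u_c.
    levels: (C,) integer tensor in {1, 2, 3, 4} marking each candidate as
            parent (R_1), ancestor (R_2), inner-session (R_3), or
            outer-session (R_4).
    """
    loss = scores.new_zeros(())
    for level, alpha in zip((1, 2, 3), alphas):
        num = levels == level            # numerator set R_level
        den = levels >= level            # denominator set R_level ∪ ... ∪ R_4
        if num.any():                    # skip empty levels (e.g., no ancestors)
            nll = torch.logsumexp(scores[den], dim=0) - torch.logsumexp(scores[num], dim=0)
            loss = loss + alpha * nll
    return loss
```

Summing this quantity over all utterances in a batch gives the total objective ℒ of Eq. (<ref>).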
We first build a pair-wise utterance graph over two utterance lists to implement the idea: the current utterance set U and the memory of candidate parents Q. Q caches the utterances in the current dialogue U, and any utterance u_i in U will select its potential parent utterance from Q. Initially, Q is loaded with U. By Cartesian multiplication over U and Q, we maintain a scoring matrix (i.e., utterance graph): S_ij = S(ij), i - ω < j ≤ i -∞, otherwise where S(ij) is the pair-wise score calculated via Eq. (<ref>), which is set to -∞ when j is out of the range (i-ω, i], such that the parent node of u_i ∈U is always the antecedent utterance or itself within a certain range ω. For each scoring iteration, we select the pair (u_i^↷u_j) with the highest score, i.e., the most confident pair, as a valid replying pair in the current step. Thereafter, we add the pair u_i^↷u_j into the partial-replying graph G^R and update it. Also, we remove u_i from U, as u_i has already found its parent; and append the successor utterance u_1+i+ω into the window as new U. Meanwhile, we maintain Q by adding the following utterance u_1+i+ω into Q each time, such that each utterance in U has the chance to find its parent in Q. The decoding follows the window sliding iteratively, until all elements in U find their parent, and U is empty. In Figure <ref>, we depict the easy-first decoding, which is formulated by the Algorithm in Appendix <ref>. ∙ Remark. It is noteworthy that during training, with gold replying annotations as training signals, the model phase does not need to take the easy-first decoding.[ Meanwhile, the training set comes without annotations for the easy-first procedure. ] Also, in the training stage, the gold replying data enables the partial-replying structure G^R always to be of high-quality to guide the learning of further replying relations. During inference, easy-first decoding is engaged to give better inference, as cast above. However, at inference time, the G^R we used is based on the previous model predictions, where the errors can be accumulated to worsen the following prediction. To minimize the gap between training and inference, during the training phase, we purposely create some noises by randomly replacing the gold parent of an utterance with the wrong utterance, at a probability of 15%. Such a teacher-forcing strategy helps enhance the model generalization ability and maintain stable training. Algorithm <ref> summarizes the easy-first inference algorithm in a formal format. § EXPERIMENT SETTINGS §.§ Dataset Our empirical analysis is conducted on two publicly available dialogue disentanglement datasets, namely, the Ubuntu IRC corpus and the Movie Dialogue dataset. The Ubuntu IRC dataset <cit.> is a rich corpus that provides a wealth of question-and-answer content related to the Linux operating system. This dataset is characterized by its depth and complexity, containing 173 conversations that collectively comprise a staggering 74,963 utterances. The average dialogue in this dataset is notably extensive, with an average of 433.3 utterances per conversation. This characteristic makes the Ubuntu IRC dataset a challenging and valuable resource for dialogue disentanglement research, as it offers a diverse range of conversational threads and interactions to analyze and understand. The Movie Dialogue dataset <cit.>, is a unique resource that has been annotated based on movie scripts. This dataset is composed of 33,715 conversations, providing a substantial volume of dialogue for analysis. 
However, in contrast to the Ubuntu IRC dataset, the conversations in the Movie Dialogue dataset are typically shorter, averaging 24 utterances per dialogue. This dataset was developed by extracting 56,562 sessions from 869 movie scripts that explicitly indicate plot changes. These sessions were then manually intermingled to create a synthetic dataset, with the minimum and maximum session numbers in one dialogue being 2 and 4, respectively. The Movie Dialogue dataset offers a different kind of challenge for dialogue disentanglement, as it involves understanding and separating intertwined dialogues from the context of movie scripts. These two datasets, with their distinct characteristics and challenges, provide a comprehensive basis for our experiments in dialogue disentanglement. The Ubuntu IRC dataset, with its extensive and complex dialogues, and the Movie Dialogue dataset, with its intertwined dialogues from movie scripts, together offer a broad and diverse range of data for our research. Table <ref> shows the detailed data statistics. §.§ Implementations Hyper-parameters In this study, we utilize BERT-base (uncased)[<https://huggingface.co/bert-base-uncased>] as our backbone PLM. To further improve the performance of our model, we apply a dropout <cit.> layer with a rate of 0.2 after the encoder. We set batch size as two dialogues. The initial learning rate for BERT and non-BERT layers are set to 6e-6 and 1e-5, respectively. AdamW optimizer <cit.> and LR scheduler with warmup mechanism <cit.> are employed for optimization. We use a two-layer EGCN with a 768-d hidden size. The FFNN in Eq. (<ref>) is with two layers and 300-d. α_1,α_2, α_3 is set to 1, 0.1, and 0.05, respectively, based on the development experiment. ω is set to 50, and the maximum length of the utterance pair is limited to 128. All experiments were conducted on Ubuntu systems with two RTX 3090 GPUs. In Table <ref>, we show the detailed parameter choosing. Details of model In the data processing stage, no special processing is applied to the text strings. Instead, the dialogue text is segmented using the tokenizer provided by BERT. For utterance pairs exceeding the maximum allowed length, truncation is applied. If the shorter utterance is less than half the length, the longer utterance is truncated to ensure the pair length does not exceed the maximum length. If two utterances are longer than half the maximum allowed length, both are truncated to half the maximum length to preserve the information as fairly as possible. During the graph construction phase, the speaker-utterance graph is directly constructed using the given speaker information for each utterance. The speaker-mentioning graph is constructed by matching the speaker's name of the previous utterance, establishing the speaker-mentioning relationship. §.§ Baselines We adopt the prior strong-performing dialogue disentanglement systems as our baselines. Following, we give a brief description of each baseline: * FeedForward  <cit.> predicts the replying relation with simple feedforward networks. * Elsner  <cit.> takes a pairwise classification with handcrafted features for determining the replying relation. * BERT  <cit.> is the vanilla BERT model, which treats the task as a sentence pair classification problem, i.e., predicting reply-to relationships. * BERT+MF  <cit.> combines BERT with handcrafted features for dialogue disentanglement. * Transition  <cit.> solves the dialogue disentanglement with a transition-based model. 
* DiaBERT  <cit.> employs a hierarchical transformer to enhance thread- and session-level feature extraction, during which additional handcrafted pairwise features (e.g., token-level overlap) are incorporated into the model to enhance performance.

* Struct <cit.> constructs an utterance-level graph to strengthen the utilization of discourse structural information, and further leverages a refined LSTM to model the context information.

* PtrNet <cit.> uses pointer networks for dialogue disentanglement, selecting a parent for the current utterance.

* Bi-Level <cit.> incorporates a bi-level (thread, session) contrastive loss into dialogue disentanglement.

Note that Struct is the current best-performing system for the task. To ensure fair comparisons, most of the recent baselines use base-sized PLMs. The baseline results are copied from their original papers, while we re-implement the Struct model for some analysis experiments.

§.§ Evaluations

Following prior works <cit.>, we evaluate the task performance with cluster-level and exact pair-wise metrics. Also, we add a new metric for in-depth analysis (see Figure <ref>). Here we provide a detailed description of the evaluation metrics used in our experiments.

1) Cluster-level metrics:

* Variation of Information (VI) <cit.> measures the information change between two clusterings: VI(X, Y) = 2H(X, Y) - H(X) - H(Y), where X and Y are the golden and predicted clusters, and H(·) is the entropy. In line with prior research <cit.>, we report 1-VI as the final metric, where a higher value indicates better performance.

* Adjusted Rand Index (ARI)  <cit.> evaluates the similarity between golden and predicted clusters by counting all identical pairs in the two clusterings:

ARI(X,Y) = [ ∑_ij \binom{n_ij}{2} - ( ∑_i \binom{a_i}{2} ∑_j \binom{b_j}{2} ) / \binom{n}{2} ] / [ 1/2 ( ∑_i \binom{a_i}{2} + ∑_j \binom{b_j}{2} ) - ( ∑_i \binom{a_i}{2} ∑_j \binom{b_j}{2} ) / \binom{n}{2} ] ,

where n_ij = |X_i ∩ Y_j| is the number of common elements between X_i and Y_j, a_i = ∑_j n_ij and b_j = ∑_i n_ij are the row-wise and column-wise sums of n_ij, and n is the total number of utterances.

* One-to-one (1-1)  <cit.> reflects the ability to summarize the whole conversation. The metric is computed by finding an optimal bipartite matching between the golden and predicted clusters.

* Normalized Mutual Information (NMI)  <cit.> also evaluates cluster-level prediction by calculating the mutual information between the two sets: NMI(X, Y) = 2 I(X;Y) / [H(X) + H(Y)], where I is the mutual information.

* Local_k <cit.> measures the accuracy of determining whether an utterance belongs to the same cluster as the previous k utterances:

Local_k = ∑_j=1^k ∑_i 1( 1(C_p^u_i = C_p^u_i-j) = 1(C_g^u_i = C_g^u_i-j) ) / ∑_j=1^k ∑_i 1 ,

where C_p^u_i and C_g^u_i are the predicted and gold clusters that u_i belongs to, and 1(x=y) is a binary indicator returning 1 if x=y, else 0. In our practice, we set k=3.

* Shen-F1 (S-F) <cit.> assesses how well the predicted clusters align with the golden clusters:

Shen-F = ∑_i n_i/n max_j 2n_ij/(n_i + n_j) ,

where the notations are the same as in Eq. (<ref>).

* Cluster Exact Match (P, R, F1) evaluates the precision (P), recall (R), and F1 scores at the cluster level, where a cluster is considered a match only if it is completely identical to a golden cluster.

2) Pair-wise metric:

* Link Exact Match (P, R, F1) evaluates exact matching of all reply-to pairs.

3) Additional metric:

The original cluster-level evaluation metrics are calculated on the complete cluster set, and thus cannot be applied to subset evaluation.
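Before introducing that additional metric, we note that several of the cluster-level scores above can be computed directly from flat per-utterance session assignments. The following sketch is illustrative only: ARI and NMI are taken from scikit-learn and Local_k follows the definition above; it is not the exact evaluation script used in our experiments.

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def cluster_metrics(gold, pred, k=3):
    """Score two session assignments given as per-utterance session ids."""
    gold, pred = np.asarray(gold), np.asarray(pred)
    ari = adjusted_rand_score(gold, pred)
    nmi = normalized_mutual_info_score(gold, pred)

    hits, total = 0, 0
    for j in range(1, k + 1):             # compare each utterance with its k predecessors
        for i in range(j, len(gold)):
            same_pred = pred[i] == pred[i - j]
            same_gold = gold[i] == gold[i - j]
            hits += int(same_pred == same_gold)
            total += 1
    local_k = hits / max(total, 1)
    return {"ARI": ari, "NMI": nmi, f"Local_{k}": local_k}
```

For instance, cluster_metrics([0, 0, 1, 1, 0], [0, 0, 1, 0, 0]) scores a toy five-utterance dialogue with one mis-assigned utterance.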
To evaluate the performance of the subset, we improved the evaluation index based on ARI by proposing a new metric called partial-ARI: Par-ARI(𝒮, Y) = ARI(𝒮, {Y_i | Y_i ∈ Y, ∃ϕ∈𝒮: |Y_i ∩ϕ| > 0}) where Y contains the predicted clusters and 𝒮 contains gold clusters whose size is in a certain range. Based on this formula, we can group golden clusters by size and then calculate the score for each group. The best-performing model on the validation set is used for testing, and the average results of five runs were reported. All scores from our implementation are after paired t-test with p< 0.05. § EXPERIMENT RESULT AND DISCUSSION §.§ Main Results and Observations Table <ref> shows the comparisons with baselines on the Ubuntu IRC dataset. Overall, baselines that capture the conversational discourse structures show better performances, such as PtrNet, DiaBERT, and Struct. Struct, in particular, shows impressive performance by specifically modeling the speaker-utterance and speaker-mentioning structure features, yielding superior outcomes on most metrics. However, despite the strengths of the existing models, our proposed framework surpasses them all under cluster-level evaluation metrics. We've achieved enhancements as high as +7.50 on ARI, +3.02 on 1-1, and +4.10 on F1, when compared to Struct, the current best-performing model. By comparing ours with Struct model, we achieve more significant improvement on the exact match evaluation than Struct, by a high to 6% F1 score. This notable progression is mainly attributed to our distinctive approach of incorporating utterance-distance and partial-replying structures into the model. Our framework's ability to capture and utilize such dynamic contextual discourse features further underlines their importance in dialogue disentanglement. In Table <ref>, we further give the performances under the pairwise link-matching perspective. We see that our system still outperforms the best-performing baseline. The above comparisons demonstrate the efficacy of our overall system. Moving to a different corpus, we present our model's performance on the Movie Dialogue dataset in Table <ref>. Here, given that each session in this dataset only features a single thread, we solely report the cluster-level results. As seen, our model boosts the baselines on this dataset much more strikingly, with an increase of 8.60(=66.10-57.50) NMI, 20.12(=58.32-38.20) ARI and 8.36(=83.06-74.70) S-F, respectively. This substantial improvement is partly due to the Movie Dialogue dataset's unique structure, which has fewer utterances per dialogue compared to the Ubuntu IRC data. The limited conversational contexts available for each utterance pose a unique challenge that our model adeptly handles. This capability again highlights the versatility and adaptability of our framework in handling different dialogue structures and datasets. Even when compared to the robust Struct model, our framework, equipped with rich structural features, first-order decoding, and bidirectional contextual encoding, consistently delivers superior results. This comparative analysis unequivocally demonstrates our system's efficacy and potential for further applications in dialogue disentanglement. §.§ Model Ablation In our overall framework, the rich discourse features and optimization strategies are carefully designed to synergize with each other, with the goal of enhancing the model's capacity for dialogue disentanglement. 
To investigate and quantify the individual contribution of each component, we have conducted comprehensive ablation studies. Table <ref> presents a detailed summary of these analyses. The four types of discourse structures incorporated into our framework — namely the utterance-distance structure (G^D), partial-replying structure (G^R), speaker-utterance structure (G^S), and speaker-mentioning structure (G^M) — contribute significantly to the overall performance of the model. In particular, the removal of any one of these structures invariably leads to a decline in performance, indicating their non-trivial roles in the task. However, it's worth noting that the impacts of these structures vary. The dynamic structures, namely the utterance-distance and partial-replying structures, exert more influence on the model's performance than their static counterparts, the speaker-utterance, and speaker-mentioning structures. This distinction is more pronounced in the fine-grained pair-wise relation detection task, where the speaker-mentioning and partial-replying structures play especially pivotal roles. Of all the four structures, the partial-replying structure has the most pronounced impact on the performance. The removal of all structure information from the model precipitates the most severe decline in performance, underscoring the overarching importance of these discourse structures in dialogue disentanglement. Further, we inspect the influence of the optimization strategies. The hierarchical ranking loss, which involves the integration of ℒ_2 and ℒ_3, is instrumental in enhancing the model's performance. The exclusion of either ℒ_3 or both ℒ_2 and ℒ_3 from the training phase leads to a noticeable dip in the overall cluster-level evaluation results. This indicates that the hierarchical optimization of the task is a crucial factor in achieving high performance. Finally, the significance of the decoding method becomes evident when we switch from our proposed easy-first decoding mechanism to the standard front-to-back approach. A consistent decrease in performance across both the cluster-level and pair-wise metrics confirms the necessity of a more advanced decoding method for this task. This finding reinforces the notion that the decoding stage in dialogue disentanglement should not be overlooked, and that an efficient decoding strategy can substantially contribute to the overall success of the model. §.§ In-depth Analyses Above we have demonstrated the efficacy of our overall framework. Now we take a further step, presenting more in-depth analyses to answer the following several questions and explore how our method advances the task. Q1: What is the influence of utterance distance on the dialogue disentanglement task? In the field of discourse analysis, the distance between utterances is a critical factor that affects the performance of disentanglement models. The complexity of the task is inherently impacted by the utterance distance, with greater distances posing more challenges in identifying the relationships between utterances. To quantitatively assess the impact of varying utterance distances on dialogue disentanglement, we conducted a rigorous comparison between our method, the Struct baseline, and different variants of our model that exclude specific types of structural features, as shown in Figure <ref>. The results of our experiments provided valuable insights. As anticipated, we consistently observed a decrease in performance across all methods as the utterance distance increased. 
This finding not only confirms our initial hypothesis that larger utterance distances present a more challenging learning context but also underscores the importance of incorporating appropriate structural features to address this problem. Both the Struct baseline and our models exhibited resilience against the performance degradation caused by increased utterance distances, particularly when compared to the variant model that lacked any discourse structure features (w/o All). This resilience clearly demonstrates the effectiveness of discourse structure features in mitigating the challenges posed by long-range dependencies <cit.>. Furthermore, our comprehensive model outperformed the Struct baseline, primarily due to the integration of diverse and advanced structural discourse features. Upon closer examination of the individual features in our models, we discovered that the Gaussian-based utterance-distance structure feature had a significant impact on shorter distances (≤25), while the partial-replying structure feature played a crucial role in addressing longer distances (>30). This fine-grained analysis provides valuable insights into the unique contributions of each discourse feature, highlighting their respective strengths and applications in dialogue disentanglement tasks. Q2: How does the session size influence the disentanglement task? Increasing the session numbers within a conversation invariably adds to the complexity of topic entanglement, which subsequently makes the disentanglement task more challenging. To evaluate the impact of different session sizes, we utilized a novel metric, partial-ARI, which is defined in Eq.(<ref>). As shown in Figure <ref>, this metric enables the assessment of the matching effect of sessions with varying sizes. The experimental results reveal the influences of session size on the dialogue disentanglement task. When the dialogue comprised less than 30 sessions, the effect of session numbers appeared to be somewhat minimal. However, the BERT model, which operates without any discourse structure information, underperformed compared to the Struct model equipped with two static conversational structure features. When the session size increased beyond 30, there was a noticeable decrease in performance across all models. Despite this overall decline, our models still provided two noteworthy observations. Firstly, our overall framework demonstrated impressive capability when dealing with complex dialogues, surpassing other baselines. Secondly, the partial-replying structure feature exhibited a vital role in counteracting the negative impacts of increased session size. By providing direct signals of intrinsic relations of utterances, the partial-replying structure feature significantly enriches the semantic features available for the task, thereby maintaining the model's performance even under challenging conditions. This finding underscores the importance of this feature in developing robust dialogue disentanglement models, particularly when faced with conversations characterized by large session sizes. Q3: How does the Hierarchical Ranking Loss (HRL) mechanism enhance dialogue disentanglement? The Hierarchical Ranking Loss (HRL) mechanism, introduced in our method, is designed specifically to optimize cluster-level predictions. The quantitative results in previous sections have already demonstrated the effectiveness of HRL. Now, we dive deeper into an in-depth exploration of how the HRL mechanism aids in improving performance. 
To elucidate the contribution of HRL, we analyze the changes in the cluster-level true and false predictions of our system when equipped with the HRL method compared to the system devoid of HRL. The changing ratio of predictions depicted in Figure <ref>(a) indicates that the HRL method has corrected approximately 5% of predictions out of a total of 5k instances. This improvement by itself is a substantial indication of the effectiveness of the HRL method in enhancing the model's overall performance. To understand the specifics of this improvement, we look closely at the positive changes, i.e., false predictions corrected to true (F→T). We categorize these corrected predictions based on the detailed utterance types to comprehend the underlying mechanism of performance improvement. According to the data shown in Figure <ref>(b), it is clear that a significant proportion (73%) of R_4 utterances have been corrected to the exact parent utterances. This correction remarkably enhances both cluster-level and pair-wise prediction accuracies. Moreover, an additional 17% and 10% of the R_4 utterances have been shifted to the ancestor R_2 utterances and the inner-session R_2 utterances, respectively. This change, while more subtle, significantly contributes to the improvement of cluster-level prediction. In summary, the HRL mechanism demonstrates a significant capacity to rectify prediction errors, thereby enhancing the model's overall performance. The detailed analysis of the types of utterances that are most affected by the application of HRL offers valuable insights into its inner workings and effectiveness, further substantiating its value in dialogue disentanglement tasks. Q4: What is the working mechanism of the easy-first decoding strategy in enhancing the performance of dialogue disentanglement? In this part of the analysis, our objective is to uncover the underlying mechanism that allows the easy-first decoding strategy to bolster overall performance in the task of dialogue disentanglement. To this end, we examine the decoding behavior of our model by measuring the entropy of the model prediction at each utterance decoding step. Intuitively, a lower entropy (↓-log p) corresponds to higher predicting certainty (↑ p), which suggests that the decoding decision is easier to make with confidence. We make a comparative study against the standard sequential decoding strategy, which operates in front-to-back order. As shown by the trends plotted in Figure <ref>, the proposed easy-first decoding strategy demonstrates significant superiority in coordinating task inference. To elaborate, in the initial 400 steps, the easy-first decoding strategy attains a near-perfect predicting certainty (p∼1). This implies a precise and highly confident decision-making process in determining the replying relation between utterances. In contrast, the sequential-order decoding strategy showcases a rather fluctuating process, particularly in the initial steps. The reason behind this can be attributed to the sequential-order paradigm, wherein available features from only prior contexts can be severely limited for reasoning the correct parent for the current utterance. This constraint considerably hampers the ability of the model to make accurate and confident decisions. In essence, the easy-first decoding strategy, by ensuring high predicting certainty from the initial steps, provides a decisive edge in the task of dialogue disentanglement. 
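Concretely, the per-step certainty trace used in this analysis can be logged during decoding as sketched below; the sketch assumes a precomputed pairwise score matrix S with -∞ for invalid candidates (outside the window, or later than the child utterance), and it omits the window sliding and partial-replying updates of Algorithm <ref>.

```python
import math
import torch

def easy_first_certainty_trace(S):
    """Greedy easy-first decoding over a score matrix S of shape (N, N).

    Returns the chosen (child, parent) pairs together with the certainty p
    and the corresponding -log p recorded at every decoding step.
    """
    S = S.clone()
    pairs, trace = [], []
    for _ in range(S.size(0)):
        best_per_child = S.max(dim=-1).values     # best candidate score for each child
        child = int(best_per_child.argmax())      # most confident undecided child
        probs = torch.softmax(S[child], dim=-1)
        parent = int(probs.argmax())
        p = float(probs[parent])
        pairs.append((child, parent))
        trace.append((p, -math.log(max(p, 1e-12))))
        S[child] = float("-inf")                  # mark this child as resolved
    return pairs, trace
```

The analogous trace for sequential decoding simply processes the rows in chronological order, which is what produces the fluctuating curve discussed above.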
Its performance, as compared to the sequential decoding strategy, validates its effectiveness in providing clear and confident decisions for identifying replying relations between utterances. §.§ Q5: How do LLMs compare with fine-tuned models in dialogue disentanglement? In recent years, there has been a surge in the development and utilization of generative-based large language models (LLMs), which have yielded promising results in various Natural Language Processing (NLP) tasks. This substantial advancement in LLMs, backed by innovations such as in-context learning <cit.>, prompt-based learning <cit.>, and the chain-of-thought approach <cit.>, has further enhanced the adaptability of these models to downstream tasks. In the wake of the introduction of ChatGPT, the GPT-3.5 model, along with its derivative versions, has, remarkably, demonstrated performance equivalent to or surpassing that of fine-tuned models in a variety of NLP tasks, even in zero-shot settings. These tasks encompass areas such as sentiment analysis, question answering, and summarization <cit.>. This leads us to question the necessity and relevance of employing fine-tuned models in the specific task of dialogue disentanglement in the LLM era. Thus far, there is lacking a research exploration regarding the application of LLMs in dialogue disentanglement tasks. Motivated by this, we ventured to design an exploratory experiment aimed at comparing the performance of fine-tuned models with the GPT-3.5 model in the context of dialogue disentanglement. Due to the unavailability of access to fine-tune the GPT-3.5 model, we employed the GPT-3.5-Turbo-0301 API [< https://platform.openai.com/docs/models/gpt-3-5>] to execute zero-shot experiments. We carefully selected 100 utterances randomly from the test set, which encompassed the 50 sentences preceding each utterance. The prompt was employed to instruct the GPT-3.5 model to identify a 'parent' utterance, as depicted in Figure <ref>. Following this prompt sent to the GPT-API, we received a response indicating the index number of the parent utterance as predicted by ChatGPT. We then compared the prediction results of the 100 utterances to those of our model. As shown in Table <ref>, our fine-tuned model significantly outperformed ChatGPT in a zero-shot setting. This suggests that LLMs have not yet reached a point where they can match supervised models on all NLP tasks, particularly those involving complex discourse structures. A possible explanation is that LLMs have not encountered similar tasks during the pre-training stage, leading to subpar performance. This points to a need for innovative solutions in model design and training strategies to enable LLMs to effectively learn and handle complex discourse features. In the future, we plan to extend our research to find more sophisticated methods to better integrate LLMs with discourse structures, hoping to achieve superior performance. §.§ Case Study Finally, to better understand how our overall model helps make correct predictions of dialogue disentangling, we conduct detailed case studies to show the contribution of discourse-aware encoding module and easy-first decoding mechanism, respectively. Discourse-aware encoding As seen in Figure <ref>, different discourse structural features play varied yet crucial roles in facilitating the replying relation reasoning over the conversation discourse, i.e., by providing rich cross-utterance clues. 
For example, the model utilizes the speaker mention information provided by the #18 utterance to determine that its parent is the #13 utterance. In addition to this, the model also determines the #23 utterance to be a direct reply to the #22 utterance. This is primarily due to the fact that the speakers of these utterances are identical, and the contents of the utterances are coherent, making them a likely pair in the reply relation. Distance information forms another integral part of our model, especially in instances where there are multiple potential candidates. For example, the #24 utterance mentions LinuxNew, who is the speaker of the #21 and #23 utterances. If we were to judge solely based on the content of the utterances, the #24 utterance could potentially be a response to either the #21 or #23 utterance. But our model considers the distance weight, which would naturally be higher for two utterances that are closer in proximity. This results in a stronger interaction between the #23 utterance and the #24 utterance. Therefore, the model would tend to choose the #23 utterance as the parent of the #24 utterance. Reply information is also a crucial element in our model, especially when it comes to identifying the reply relationship in situations where reference information is insufficient or lacking. Let's take the #21 utterance for instance, which is quite unclear to determine its parent utterance in terms of speaker-mention and could generally be a response to several other utterances. Nevertheless, our model has already established the reply relationship between the #16 and #12 utterances. Coupled with the fact that the #21 utterance shares the same speaker as the #12 utterance, the model concludes that it is highly likely for the 21st utterance to be a reply to the #16 utterance. The ability to accurately infer such reply relationships is vital in real-life conversation scenarios characterized by brevity and precise exchanges. These are precisely the conditions under which our model operates and excels, effectively recognizing and mapping dynamic reply relationships. By understanding and utilizing this dynamic, our model successfully gathers more precise contextual information, leading to more effective dialogue disentanglement. Easy-First decoding In this section, we employ a case study to demonstrate the workings of the easy-first decoding approach, specifically focusing on representative steps in a testing dialogue instance where our system flawlessly predicts all replying relations. As seen in Figure <ref>, our system cleverly makes each decoding decision by selecting the current most confident one. For instance, at the significant juncture of step 13, the pairs that should be the simplest to decode would ideally be #24^↷#23, primarily because #24 references LinuxNew, the speaker name of #23. Leveraging such clues, our system correctly predicts the replying relation at this stage. With each passing iteration, the model outputs the reply pair with the highest score, actively incorporating it into the partial replying graph, which is used in the subsequent round of computation. Basing its predictions on the graph it has thus far constructed, our model consistently provides increasingly confident and precise predictions, even in the face of more complex cases. For instance, during the 16-th step, the model capitalizes on the dynamic replying context to accurately predict the #16 utterance as the parent of the #21 utterance. 
In conclusion, our system demonstrates its proficiency in dealing with varying complexities of decoding and predicting replying relations in a dialogue. Through the utilization of strategic structural features and dynamic replying context, it navigates the discourse with increasing confidence and precision, making it a promising approach in the field of dialogue disentanglement. § DISCUSSIONS §.§ Conclusion In this study, we rethink the discourse attribute of conversations, and improve the dialogue disentanglement task by taking full advantage of the dialogue discourse. From the feature modeling perspective, we build four types of dialogue-level discourse graphs, including the static speaker-role structures (i.e., speaker-utterance and speaker-mentioning structure), and the dynamic contextual structures (i.e., the utterance-distance and partial-replying structure). We then develop a structure-aware framework to encode and integrate these heterogeneous graphs, where the edge-aware graph convolutional networks are used to learn the rich structural features for better modeling of the conversational contexts. From a system optimization perspective, we first devise a hierarchical ranking loss mechanism, which groups the candidate parents of current utterance into different discourse levels, and carries model learning under three hierarchical levels, for both pair-wise and session-wise optimizations. Besides, we present an easy-first decoding algorithm, which performs utterance pairing in an easy-to-hard manner with both the precedent and subsequent context. Experimental results on two benchmark datasets show that our overall system outperforms the current best results on all levels of evaluations. Via further analyses, we demonstrate the efficacy of all the above-proposed designs, and also reveal the working mechanism of how they help advance the task. §.§ Future Work Although our model has achieved competitive performance in the dialogue disentanglement task, there is still room for improvement. We believe the model can be improved in the following directions: 1) Exploitation of Fine-grained Discourse Features: In this study, we have extensively considered utterance-level dialogue features and achieved promising results. However, we have not yet fully exploited fine-grained features, which has limited the further optimization of the model. In fact, fine-grained features, such as syntactic structure and coreference resolution <cit.>, are very beneficial to the dialogue disentanglement task. In the future, if we can integrate sentence-level and fine-grained discourse features for achieving global-local feature mining, the effectiveness of dialogue disentanglement will be further enhanced. 2) Integration with Dialogue Generation Tasks: The main focus of this study is the dialogue disentanglement task, with the goal of processing existing dialogues. In contrast, the dialogue generation task aims to generate new dialogues based on existing ones. These two processes are reversible and interdependent. Improved dialogue disentanglement can facilitate generation <cit.>, and conversely, superior generation performance indicates a deeper understanding of dialogue disentanglement. Therefore, integrating these two tasks would likely promote the synchronous enhancement of both tasks' performance. 3) Incorporation of Large Language Models: Large language models have demonstrated superiority in various natural language processing tasks. 
However, our experiments show that their performance on the dialogue disentanglement task remains subpar, indicating that the potential of large language models has not yet been fully realized for this problem. In the future, we may use in-context learning <cit.> to teach the model the dialogue disentanglement task, or apply LoRA <cit.> to fine-tune the model for it, thereby better exploiting the capabilities of large language models for dialogue disentanglement. This would not only improve disentanglement performance but also broaden the application scenarios of large language models, contributing to the realization of a universal natural language processing model.
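As a rough illustration of the in-context learning direction mentioned above, the sketch below assembles a few-shot prompt for parent-utterance identification. The prompt wording, the demonstrations format, and the function name are illustrative assumptions and not the setup used in our experiments.

```python
def build_parent_prompt(context, target, demonstrations=()):
    """Assemble a few-shot prompt asking an LLM to pick the parent of
    `target` from the numbered utterances in `context`.

    `demonstrations` is a sequence of (context, target, parent_index)
    triples drawn from annotated data; its exact format is an assumption.
    """
    parts = []
    for demo_context, demo_target, parent_idx in demonstrations:
        parts.append("Conversation:\n" + "\n".join(
            f"#{i}: {u}" for i, u in enumerate(demo_context)))
        parts.append(f"New utterance: {demo_target}")
        parts.append(f"Index of the parent utterance: {parent_idx}\n")

    parts.append("Conversation:\n" + "\n".join(
        f"#{i}: {u}" for i, u in enumerate(context)))
    parts.append(f"New utterance: {target}")
    parts.append("Index of the parent utterance:")
    return "\n".join(parts)
```

Passing an empty demonstration list recovers a zero-shot prompt similar in spirit to the one used in our GPT-3.5 experiment above.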
Causal Effect Regularization: Automated Detection and Removal of Spurious Attributes
Abhinav Kumar, Amit Deshpande, Amit Sharma
================================================

In many classification datasets, the task labels are spuriously correlated with some input attributes. Classifiers trained on such datasets often rely on these attributes for prediction, especially when the spurious correlation is high, and thus fail to generalize whenever there is a shift in the attributes' correlation at deployment. If we assume that the spurious attributes are known a priori, several methods have been proposed to learn a classifier that is invariant to the specified attributes. However, in real-world data, information about spurious attributes is typically unavailable. Therefore, we propose a method to automatically identify spurious attributes by estimating their causal effect on the label and then use a regularization objective to mitigate the classifier's reliance on them. Compared to a recent method for identifying spurious attributes, we find that our method is more accurate in removing the attribute from the learned model, especially when spurious correlation is high. Specifically, across synthetic, semi-synthetic, and real-world datasets, our method shows significant improvement in a metric used to quantify the dependence of a classifier on spurious attributes (ΔProb), while obtaining better or similar accuracy. In addition, our method mitigates the reliance on spurious attributes even under noisy estimation of causal effects. To explain the empirical robustness of our method, we create a simple linear classification task with two sets of attributes: causal and spurious. We prove that our method only requires that the ranking of estimated causal effects is correct across attributes to select the correct classifier.

§ INTRODUCTION When trained on datasets where the task label is spuriously correlated with some input attributes, machine learning classifiers have been shown to rely on these attributes (henceforth called spurious attributes) for prediction <cit.>. For example, in a sentiment classification dataset that we evaluate (<cit.>), a demographic attribute like race can be spuriously correlated with the sentiment of a sentence <cit.>. Classifiers trained on such datasets are at risk of failure during deployment when the correlation between the task label and the spurious attributes changes <cit.>. Assuming that a set of auxiliary attributes is available at training time (but not at test time), several methods have been proposed to mitigate the classifier's reliance on the spurious attributes. The first category of methods assumes that the spurious attributes are known a priori. These methods develop regularization <cit.>, optimization <cit.>, or data-augmentation <cit.> strategies to train a classifier invariant to the specified spurious attributes. The second category relaxes this assumption by automatically identifying the spurious attributes and regularizing the classifier to be invariant to them. To identify spurious attributes, these methods impose assumptions on the type of spurious correlation they deal with.
For example, they may assume that are modified in input via symmetry transformation <cit.> or a group transformation <cit.>, or the data-generating process follows a specific graph structure such as anti-causal <cit.>. However, all these methods consider only two possibilities—an is spurious or not—which makes them susceptible to imposing incorrect invariance whenever there is a mistake in identifying the spurious . In this paper, we propose a method that regularizes the effect of attributes on a classifier proportional to their average causal effect on the task label, instead of binning them as spurious(or not) using an arbitrary threshold. We propose a two-step method that [label=(*)] * uses an effect estimation algorithm to find the causal effect of a given ; * regularizes the classifier proportional to the estimated causal effect for each estimated in the first step. If the estimated causal effects are correct, our method is perfect in removing a classifier's reliance on spurious (i.e., attributes with causal effect close to zero), while retaining the classifier's acccuracy. But in practice it is difficult to have good estimates of the causal effect due to issues like non-identifiability, noise in the labels <cit.>, or finite sample estimation error <cit.>. Our method is resilient to such errors since it doesn't group the as spurious or causal and regularizes the classifier proportional to the estimated effect. We analyze our method both theoretically and empirically. First, we provide the conditions under which causal effect is identified. An implication of our analysis is that causal effect is identified whenever the relationship between the attribute and label is fully mediated through the observed input (e.g, text or image). This is often the case in real-world datasets where human labellers use only the input to generate the label. Even if the causal effect is not identified, we show that our method is robust to errors in effect estimation. Theoretically, on a simple classification setup with two sets of —causal and spurious—we prove that one only needs the correct ranking of the causal effect estimates to learn a classifier invariant to spurious . For the general case with multiple disentangled high-dimensional causal and spurious , under the same condition on correct causal effect rankings, we prove that the desired classifier (that does not use spurious ) is preferred by our method over the baseline ERM classifier. To confirm our result empirically, we use three datasets that introduce different levels of difficulty in estimating the causal effect of : [label=(*)] * : a synthetic dataset that includes an unobserved confounding variable and violates the identifiability conditions of causal effect estimators; * : a semi-synthetic dataset from <cit.>, where we introduce noise in the image labels; * :, which is a real-world text dataset. To evaluate the robustness of methods to spurious correlation, we create multiple versions of each dataset with varying levels of correlation with the spurious , thereby increasing the difficulty to estimate the causal effect. Even with a noisy estimate of the causal effect of , our method shows significant improvement over previous algorithms in reducing the dependence of a classifier on spurious , especially in the high correlation regime. Our contributions include, * Causal effect identifiability guarantee for two realistic causal data-generating processes, which is the first step to automatically distinguish between causal and spurious . 
* A method, , to train a classifier that obeys the causal effect of attributes on the label. Under a stylized setting, we show that it requires only the correct ranking of attributes. * Evaluation on three datasets showing that is effective at reducing spurious correlation even under noisy, high correlation, or confounded settings. § OOD GENERALIZATION UNDER SPURIOUS CORRELATION: PROBLEM STATEMENT For a classification task, let (x^i,y^i,a^i)_i=1^n∼𝒫_m be the set of example samples from the data distribution 𝒫, where x^i∈𝐗 are the input features, y^i∈ Y are the task labels and a^i=(a^i_1,…,a^i_k), a^i_j∈ A_j are the auxiliary , henceforth known as attributes for the example x^i. We use “x_a_j" to denote an example “x" to have a_j∈ A_j. These attributes are only observed during the training time. The goal of the classification task referred to as task henceforth, is to predict the label y^i from the given input x^i. The task classifier can be written as c(h(x)) where h:X→Z is an encoder mapping the input x to a latent representation z:=h(x) and c:Z→ Y is the classifier on top of the representation Z. TODO (later): We should have defined the family of distribution over which we want to generalize i.e. where the marginals with spurious attribute changes and shown that the optimal predictor only uses the causal attribute. Currently, we are just proposing that without proof. Generalization to shifts in spurious correlation. We are interested in the setting where the data-generating process is causal <cit.> i.e. there is a certain set of causal that affect the task label y. Upon changing these causal , the task label changes. Apart from these , there could be other defining the overall data-generating process (see fig:dgp for examples). A c ∈𝒞 is called spurious if it is correlated with the task label in the training dataset and thus could be used by a classifier trained to predict the task label, as a shortcut <cit.>. But their correlation could change at the time of deployment, affecting the classifier's accuracy. Using the attributes available at training time, our goal is to train a classifier c(h(x)) that is robust to shift in correlation between the task label and the spurious . We use the fact that changing spurious will not lead to a change in the task label i.e. they have zero causal effect on the task label y. Hence, we use the estimated causal effect of an to automatically identify its degree of spuriousness. For a spurious attribute, its true causal effect on the label is zero and hence the goal is to ensure its causal effect on the classifier's prediction is also zero. More generally, we would like to regularize the effect of each attribute on the classifier's prediction to its causal effect on the label. In other words, unlike existing methods that aim to discover a subset of attributes that are spurious <cit.>, we aim to estimate the causal effect of each attribute and match it. Since our method avoids hard categorization of attributes that downstream regularizers need to follow, we show that is a more robust way of handling estimation errors when spurious correlation may be high. § CAUSAL EFFECT REGULARIZATION: MINIMIZING RELIANCE ON SPURIOUS CORRELATION We now describe our method to train a classifier that generalizes to shift in attribute-label correlation by automatically identifying and imposing invariance w.r.t to spurious . In subsec:ce_identifiability, we provide sufficient conditions for identifying the causal effect, a crucial first step to detect the spurious . 
Next, in subsec:method_desc and subsec:theoretical_analysis_robustness, we present the method and its theoretical analysis. §.§ Causal Effect Identification The identifiability of the causal effect of an depends on the data-generating process (DGP). Thus, given a particular DGP, one needs to find the conditions under which the causal effect of any can be identified. Below we give sufficient conditions for two DGPs (DGP-1 and DGP-2 from fig:dgp). DGP-1 is common in many real-world datasets where the task labels are annotated based on the observed input X either automatically using some deterministic function or using human annotators <cit.>. Thus DGP-1 is applicable for all the settings where the input X has all the sufficient information for creating the label. <cit.> consider a DGP where the nodes are transformations (like rotation or vertical-flips in image) that generates both the input X and the task label Y. We adapt their graph to our setting where we have observed as a node in the graph and there is a hidden confounder. DGP-2 and DGP-3 represent two such adaptations from their work. In our empirical study (subsec:empresult_dataset), we use three datasets, each of them associated with one of the above data-generating processes. Generally for identifying the causal effect one assumes that we only have access to observational data (𝒫). But here we also assume access to the interventional distribution for the input, P(X|do(A)) where the A is set to a particular value, as in <cit.>. This is commonly available in vision datasets via data augmentation strategies over attributes like rotation or brightness, and also in text datasets using generative language models <cit.>. Having access to interventional distribution P(X|do(A)) could help us identify the causal effect in certain cases where observational data (𝒫) alone cannot as we see below. propositionidenprop Let DGP-1 and DGP-2 in fig:dgp be the causal graphs of two data-generating processes. Let A,C,S be different , X be the observed input, Y be the observed task label and U be the unobserved confounding variable. In DGP-2, x is the (unobserved) core input feature that affects the label Y. Then: * DGP-1 Causal Effect Identifiability: Given the interventional distribution P(X|do(A)), the causal effect of the A on task label Y is identifiable using observed data distribution. * DGP-2 Causal Effect Identifiability: Let C be a set of observed that causally affect the task label Y (unknown to us), S be the set of observed spuriously correlated with task label Y (again unknown to us), and let 𝒱=C∪ S be the given set of all the . Then if all the causal are observed then the causal effect of all the in V can be identified using observational data distribution alone. [label=(*)] * We show that we can identify the interventional distribution P(Y|do(A)) which is needed to estimate the causal effect of A on Y using the given observational distribution P(Y|X) and interventional distribution P(X|do(A)). * For both causal C and spurious S, we show that we can identify the interventional distribution P(Y|do(S)) or P(Y|do(C)) using purely observational data using the same identity without the need to know whether the variable is causal or spurious a priori. See appendix:identifiability_proof for proof. In DGP-3, the causal effect of A on Y is not identified. However, as we will see empirically in Section <ref>, our method still works to remove the spurious correlation. 
In comparison, prior methods like <cit.> provably fail under this DGP-3 (see  appendix:mouli_dgp_failure for the proof that their method would fail to detect the spurious attribute). §.§ : Causal Effect Regularization for predictive models Our proposed method proceeds in two stages. In the first stage, it uses a causal effect estimator to identify the causal effect of an on the task label. Then in the second stage, it regularizes the classifier proportional to the causal effect of every . Stage 1: Causal Effect Estimation. Given a set of 𝒜={a_1,…,a_k}, the goal of this step is to estimate the causal effect (TE_a_i)) of every a∈𝒜. If the data-generating process of the task is one of DGP-1 or DGP-2, our prop:identifiability gives us sufficient conditions needed to identify the causal effect. Then one can use appropriate causal effect estimators that work under those conditions or build their own estimators using the closed form causal effect estimand given in the proof of prop:identifiability. The causal effect of a on the label Y is defined as the expected change in the label Y when we change the (see appendix:math_preliminaries for formal definition). There is rich literature on estimating causal effects for high dimensional data <cit.>. We use the deep learning-based estimator from <cit.> to estimate the causal effect (henceforth called Riesz). Given the treatment along with the rest of the covariates, it learns a common representation that approximates backdoor adjustment to estimate the causal effect (see appendix:expt_setup for details). Even if the causal effect in the relevant dataset is identifiable, we might get a noisy estimate of the effect due to finite sample error or noise in the labels. Later in subsec:theoretical_analysis_robustness and sec:empresult we will show that our method is robust to error in the causal effect estimate of both theoretically and empirically. Finally, as a baseline effect estimator, we use the direct effect estimator (henceforth called Direct) defined as 𝔼_X ( 𝔼(Y|X,a=1) - 𝔼(Y|X,a=0) ) for a∈𝒜 that has limited identifiability guarantees (see appendix:expt_setup for details). Stage2: Regularization. Here our method regularizes the model prediction with the estimate of causal effect ={TE_a_1,…,TE_a_k} of each 𝒜={a_1,…,a_k}. The loss objective is, ℒ_𝔼_(x,y) ∼𝒫[ℒ_task(c(h(x)),y) + R ·ℒ_Reg(x,y)] where R is the regularization strength hyperparameter, 𝒫 is the training data distribution (sec:preliminaries). The first term ℒ_task(c(h(x)),y) can be any training objective e.g. cross-entropy or max-margin loss for training the encoder h and task classifier c jointly to predict the task label y given input x. Our regularization loss term ℒ_ aims to regularize the model such that the causal effect of an attribute on the classifier's output matches the estimated causal effect of the a_i on the label. Formally, ℒ_∑_i={1,2,…,|𝒜|}𝔼_(x_a_i') ∼𝒬(x_a_i)[(c(h(x)_a_i) - c(h(x_a_i')) ) - TE_a_i]^2 where x_a_i'∼𝒬(x_c_i) be a sample from counterfactual distribution 𝒬ℙ(x_c_i' | x_c_i) and x_c_i' is the input had the in input x_c_i been c_i'. maybe we can get intervetionsion from counterfactual §.§ Robustness of with noise in the causal effect estimates Our proposed regularization method relies primarily on the estimates of the causal effect of any  c_i to regularize the model. Thus, it becomes important to study the efficacy of our method under error or noise in causal effect estimation. 
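Before analyzing robustness, the sketch below makes the Stage-2 regularizer described above concrete for a batch of inputs paired with attribute-flipped counterfactuals. It is a simplified PyTorch-style illustration, not our exact training code; it assumes one counterfactual sample per attribute and precomputed effect estimates te_hat from Stage 1, with PyTorch tensors as inputs.

```python
def causal_effect_reg_loss(model, x, x_cf, te_hat):
    """Stage-2 regularizer: match the model's sensitivity to each attribute
    with that attribute's estimated causal effect on the label.

    model  : classifier c(h(.)) returning one score per example
    x      : batch of original inputs
    x_cf   : dict attribute_name -> batch of counterfactual inputs with
             that attribute flipped (one counterfactual per example)
    te_hat : dict attribute_name -> estimated causal effect (a float)
    """
    pred = model(x)                              # c(h(x))
    loss = 0.0
    for attr, x_prime in x_cf.items():
        pred_cf = model(x_prime)                 # c(h(x')) with `attr` changed
        # Penalize the gap between the prediction change under the flip
        # and the attribute's estimated causal effect on the label.
        loss = loss + ((pred - pred_cf) - te_hat[attr]).pow(2).mean()
    return loss

# Overall objective (R is the regularization-strength hyperparameter):
# total_loss = task_loss + R * causal_effect_reg_loss(model, x, x_cf, te_hat)
```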
We consider a simple setup to theoretically analyze the condition under which our method will train a better classifier than the standard ERM max-margin objective (following previous work <cit.>) in terms of generalization to the spurious correlations. Let 𝒜={ca_1,…,ca_K,sp_1,…,sp_J} be the set of available where ca_k are causal and sp_j are the spurious . For simplicity, we assume that the representation encoder mapping X to Z i.e h:X→Z is frozen and the final task classifier (c) is a linear operation over the representation. Following <cit.>, we also assume that z is a disentangled representation w.r.t. the causal and spurious attributes, i.e., the representation vector z can be divided into two subsets, features corresponding to the causal and spurious attribute respectively. Thus the task classifier takes the form c(z) = ∑_k=1^Kw_ca_k·z_ca_k + ∑_j=1^Jw_sp_j·z_sp_j. low priority: change ca –> c and sp–>s. Let ℒ_task(θ;(x,y_m)) be the max-margin objective (see appendix:math_preliminaries for details) used to train the task classifier “c" to predict the task label y_m given the frozen latent space representation z. Let the task label y and the “a" where a∈𝒜 be binary taking value from {0,1}. The causal effect of an a∈𝒜 on the task label Y is given by TE_a = 𝔼(Y|do(a)=1) - 𝔼(Y|do(a)=0) = P(y=1|do(a)=1) - P(y=1|do(a)=0) (see appendix:math_preliminaries for definition). Thus the value of the causal effect is bounded s.t. T̂Ê_a∈[-1,1] where the ground truth causal effect of spuriously correlated “sp" is TE_sp=0 and for causal “ca" is |TE_ca|>0. Given that we assume a linear model, we instantiate a simpler form of the regularization term ℒ_ for the training objective given in eq:general_objective: ℒ_∑_k={1,2,…,K}λ_ca_kw_ca_k_p + ∑_j={1,2,…,J}λ_sp_jw_sp_j_p where λ_ca_k 1/|TE_ca_k| and λ_sp_j 1/|TE_sp_j| are the regularization strength for the causal and spurious features z_ca_k and z_sp_j respectively, |·| is the absolute value operator and w_p is the L_p norm of the vector w. Since |TE_(·)|∈[0,1], we have λ_(·) = 1/|TE_(·)|≥ 1. In practice, we only have access to the empirical estimate of the causal effect TE_(·) denoted as T̂Ê_(·) and the regularization coefficient becomes λ_(·) = 1/ |T̂Ê_(·)|. at TE=0, this could blow up to infinity. Should we add some epsilon in the denominator? Would it be okay to simply assume |TE_·| > 0? Do we really need a lower bound or need to add a small ϵ? Now we are ready to state the main theoretical result that shows our regularization objective will learn the correct classifier which uses only the causal attribute for its prediction, given that the ranking of estimated treatment effect is correct up to some constant factor. Let [S] denote the set {1,…,S}. theoremtethm Let the latent space be frozen and disentangled such that z=[z_ca_1,…,z_ca_K,z_sp_1,…,z_sp_J] (assm:disentagled_latent). Let the desired classifier c^des(z)=∑_k=1^Kw^des_ca_k·z_ca_k be the max-margin classifier among all linear classifiers that use only the causal features z_ca_k's for prediction. Let c^mm(z) = ∑_k=1^Kw^mm_ca_k·z_ca_k + ∑_j=1^Jw^mm_sp_j·z_sp_j be the max-margin classifier that uses both the causal and the spurious features, and let w_sp_j^mm≠0, ∀ j ∈ [J]. We assume w_sp_j^mm≠0, ∀ j ∈ [J], without loss of generality because otherwise, we can restrict our attention only to those j ∈ [J] that have w_sp_j^mm≠0. Let the norm of the parameters of both the classifier be set to 1 i.e ∑_k=1^Kw^mm_ca_k^2_p=2+∑_j=1^Jw^mm_sp_j^2_p=2 = ∑_k=1^Kw^des_ca_k^2_p=2 = 1. 
Then if regularization coefficients are related s.t. mean({λ_ca_k/λ_sp_j·η_k,j}_k ∈ [K], j ∈ [J]) < J/K where η_k,j=w^des_ca_k-w^mm_ca_k_p/w^mm_sp_j_p, then * Preference: ℒ_(c^des(z))< ℒ_(c^mm(z)). Thus, our causal effect regularization objective (def:cereg_theory) will choose the c^des(z) classifier over the max-margin classifier c^mm(z) which uses the spuriously correlated feature. * Global Optimum: The desired classifier c^des(z) is the global optimum of our loss function ℒ_ when J=1, K=1, p=2, the regularization strength are related s.t. λ_ca_1<λ_sp_1 |T̂Ê_ca_1|>|T̂Ê_sp_1| and search space of linear classifiers c(z) are restricted to have the norm of parameters equal to 1. Remark. The result of theorem:lambda_sufficient holds under a more intuitive but stricter constraint on the regularization coefficient λ which states that λ_sp_j > (K/Jη_k,j)λ_ca_k |T̂Ê_sp_j| < (K/Jη_k,j) |T̂Ê_ca_k| ∀ k∈[K] and j∈[J]. Would it help convey the intuition if we coin some term for K/J similar to signal-to-noise ratio? The above constraint states that if the treatment effect of the causal feature is more than that of the spurious feature by a constant factor then the claims in theorem:lambda_sufficient hold. As we readily have Figure 1, we can refer to it as well to convey an intuitive proof sketch (especially for the proof of Global Optimum part). Proof Sketch. [label=(*)] * We compare both the classifier c^des(z) and c^mm(z) using our overall training objective to our training objective ℒ_ (eq:general_objective). Given the relation between the regularization strength mentioned in the above theorem is satisfied, we then show that one can always choose a regularization strength “R" greater than a constant value s.t the desired classifier c^des(z) has lower loss than the c^mm(z) in terms of our training objective ℒ_. * We use the result from the first claim to show that c^des(z) has a lower loss than any other classifier that uses the spurious feature z_sp_1. Then, among the classifier that only uses the causal feature z_ca_1, we show that again c^des(z) has the lower loss w.r.t. ℒ_. Thus the desired classifier has a lower loss than all other classifiers w.r.t ℒ_ with parameter norm 1 hence a global optimum. Refer appendix:proof_lambda_sufficient for proof. correct S' to S back again in the proof. We don't have lambda =0 now. § EMPIRICAL RESULTS §.§ Datasets theorem:lambda_sufficient showed that our method can find the desired classifier in a simple linear setup. We now evaluate the method on a synthetic, semi-synthetic and real-world dataset. Details are in appendix:expt_setup. . We introduce a synthetic dataset where the ground truth causal graph is known and thus the ground truth spuriously correlated feature is known apriori (but unknown to all the methods). The dataset contains two random variables (causal and confound) that cause the binary main task label y_m and the variable spurious is spuriously correlated with the confound variable. Given the values of spurious and causal features, we create a sentence as input. We define – a version of the dataset where all three variables/are observed. Next, to increase difficulty, we create a version of this dataset — — where the confound is not observed in the training dataset, corresponding to DGP-3 from fig:dgp. . We use MNIST <cit.> to compare our method on a similar semi-synthetic dataset as used in <cit.>. 
We define a new task over this dataset but use the same , color, rotation, and digit, whose associated transformation satisfies the lumpability assumption in their work. We define the digit (3 and 4) and the color (red or green color of digit) as the causal which creates the binary main task label using the XOR operation. Then we add rotation (0^ or 90^) to introduce spurious correlation with the main task label. This dataset corresponds to DGP-2, where C is color and digit , S is rotation and x is an empty set. <cit.>. This is a real-world dataset where the main task is to predict a binary sentiment label from a tweet's text. The tweets are associated with race of the author which is spuriously correlated with the main task label. Since this is a real-world dataset where we have not artificially introduced the spurious we don't have a ground truth causal effect of race on the sentiment label. But we expect it to be zero since changing race of a person should not change the sentiment of the tweet. We use GPT3 <cit.> to create the counterfactual example by prompting it to change the race-specific information in the given sentence (see appendix:empresult for examples). This dataset corresponds to DGP-1, where the node A is the spurious race. Is it actually DGP-1. They removed the emoji used to label Y from the tweet. Varying spurious correlation in the dataset. Since the goal is to train a model that doesn't rely on the spuriously correlated , we create multiple settings for every dataset with different levels of correlations between the main task labels y_m and spurious y_c_sp. Following <cit.>, we use a predictive correlation metric to define the label-attribute correlation that we vary in our experiments. The predictive correlation (κ) measures how informative one label or attribute (s) is for predicting the other (t), κ Pr(s=t) = ∑_i=1^N1[s= t]/N, where N is the size of the dataset and 1[·] is the indicator function that is 1 if the argument is true otherwise 0. Without loss of generality, assuming s=1 is correlated with t=1 and similarly, s=0 is correlated with t=0; predictive correlation lies in κ∈[0.5,1] where κ=0.5 indicates no correlation and κ=1 indicates that the attributes are fully correlated. For dataset, we vary the predictive correlation between the confound and the spurious ; for , between the combined causal (digit and color) and the spurious (rotation); and for , between the task label and the spurious (race). See appendix:expt_setup for details. Could plot the correlation between the main and the spurious separately in the appendix. §.§ Baselines and evaluation metrics Baselines. We call the first baseline , corresponding to training the main task classifier using cross-entropy loss without any additional regularization. For the second baseline, we consider the method proposed in <cit.> (henceforth referred to as ) to automatically detect the spurious and train a model invariant to those . Given a set of , this method computes a score for every subset of , selects the subset with a minimum score as the spurious subset, and finally enforces invariance with respect to those subsets using counterfactual data augmentation (CAD) <cit.>cite main CAD paper. Empirically, we observe that CAD is not correctly imposing invariance for a given (see appendix:empresult for discussion). Thus we add a variant of 's method where instead of using CAD it uses our regularization objective (eq:lreg_practice) to impose invariance using the causal effect TE_a=0 for some attribute “a". 
Henceforth we will call this method . Move the commented discussion of diff distr shift/kappa should go to appendix. too much detail Metrics. We use two metrics for evaluation. Since all datasets have binary task labels and attributes, we define a group-based metric (average group accuracy) to measure generalization under distribution shift. Specifically, given binary task label y∈{0,1} and spurious a∈{0,1} use a, or z for attribute. simpler to read than c_sp.. Following <cit.> we define 2×2 groups, one for each combination of (y,a). The subset of the dataset with (y=1,a=1) and (y=0,a=0) are the majority group S_maj while groups (y=1,a=0) and (y=0,a=1) make up the minority group S_min. We expect the main classifier to exploit this correlation and hence perform better on S_maj but badly on S_min where the correlation breaks. Thus we want a method that performs better on the average accuracy on both the groups i.e Acc(S_min)+Acc(S_maj)/2, where Acc(S_maj) and Acc(S_min) are the accuracy on majority and minority group respectively. The second metric (ΔProb) measures the reliance of a classifier on the spurious feature. For every given input x_a we have access to the counterfactual distribution (x_a')∼𝒬(x_a) (sec:preliminaries) where the A=a is changed to A=a'. ΔProb is defined as the change in the prediction probability of the model on changing the spurious a in the input, thus directly measuring the reliance of the model on the spurious . not important: could connect with the weights of linear classifier in theory based on space available For background on baselines refer appendix:math_preliminaries a detailed description of our experiment setup refer appendix:expt_setup. §.§ Evaluating Stage 1: Automatic Detection of Spurious Attributes Failure of in detecting the spurious at high correlation. In fig:mouli_score_syn_mnist_aae, we test the effectiveness of to detect the subset of which are spurious on different datasets with varying levels of spurious correlation (κ). In dataset, at low correlations (κ<0.8) correctly detects spurious (orange line is lower than blue). As the correlation increases, their method incorrectly states that there is no spurious (blue line lower than orange). In dataset, does not detect any as spurious (shown by blue line for all κ). For dataset, 's method is correctly able to detect the spuriously correlated (race) for all the values of predictive correlation, perhaps because the spurious correlation is weak compared to the causal relationship from causal features to task label. the commented out discussion above can go to appendix. is robust to error in the estimation of spurious . Unlike 's method that does a hard categorization, estimates the causal effect of every on the task label as a fine-grained measure of whether an is spurious or not. tbl:te_combined summarizes the estimated treatment effect of spurious in every dataset for different levels of predictive correlation (κ). We use two different causal effect estimators named Direct and Riesz with the best estimate selected using validation loss (see appendix:expt_setup). Overall, the Riesz estimator gives a better or comparable estimate of the causal effect of spurious than Direct, except in the dataset where the causal effect is not identified. At high predictive correlation(>=0.9), as expected, the causal effect estimates are incorrect. 
But as we will show next, since uses a continuous effect value to detect the spurious , it allows for errors in the first (detection) step to not affect later steps severely. move commented portion to appendix §.§ Evaluating Stage 2: Evaluation of and other baselines fig:overall_syn_mnist_aae compares the efficacy of our method with other baselines in removing the model's reliance on spurious . On dataset, our method () performs better than all the other baselines for all levels of predictive correlation (κ) on average group accuracy (first row in fig:main_syn). In addition, ΔProb (the sensitivity of model on changing spurious in input, see <ref>) for our method is the lowest and close to the correct value 0, compared to other baselines (see bottom row of fig:main_syn). For κ≥0.8, is the same as ERM since it fails to detect the spurious and thus doesn't impose invariance w.r.t the spurious (fig:mouliscore_syn for details). On dataset, the average group accuracy of all methods is comparable, but has a substantially lower ΔProb than baselines for all values of κ. Again, the main reason why fails is that it is not able to detect the spurious for all κ and thus doesn't impose invariance w.r.t them (see fig:mouliscore_mnist and subsec:empresult_stage1). On dataset, correctly detects the race as spurious and performs better in terms of Average Group Accuracy than and . But if we look at ΔProb, the gain in accuracy is not because of better invariance: in fact, the reliance on the spurious attribute (race) is worse than . In contrast, has a significantly lower ΔProb while obtaining comparable accuracy to . To summarize, in all datasets, ensures a higher or comparable accuracy to while yielding the lowest ΔProb. For details on how we select the best models and additional empirical results see appendix:empresult. Mention some results that will be there in appendix § RELATED WORK Known spurious . When the spuriously correlated are known a priori, three major types of methods have been proposed, based on worst-group optimization, conditional independence constraints, or data augmentation. Methods like GroupDRO <cit.> create multiple groups in the training dataset based on the spurious and optimize for worst group accuracy. Other methods assume knowledge of causal graph and impose conditional independence constraints for removing spurious  <cit.>. Methods based on data augmentation add counterfactual training data where only the spurious attribute is changed (and label remains the same)  <cit.>. Automatically discovering spurious . The problem becomes harder if spurious are not known. Mouli and Ribeiro's work <cit.> provides a method assuming specific kinds of transformations on the spurious attributes, either transformations that form finite linear automorphism groups <cit.> or symmetry transformations over equivalence classes <cit.>. Any changed via the corresponding transformation that does not hurt the training accuracy is considered spurious. However, they do not consider settings with correlation between the transformed attributes and the task labels. Our work considers a more realistic setup where we don't impose any constraint on the transformation or the values, and allows attributes to be correlated with the task label (at different strengths). Using conditional independencies derived from a causal graph, <cit.> propose a method to automatically discover the spurious under the anti-causal classification setting. 
Our work, considering the causal classification setting (features cause label) complements their work and allows soft regularization proportional to the causal effect. Add Andrew's invariance paper description too. § LIMITATIONS AND CONCLUSION We presented a method for automatically detecting and removing spurious correlation while training a classifier. While we focused on spurious attributes, estimation of causal effects can be used to regularize effect of non-spurious features too. That said, our work has limitations: we can guarantee identification of causal effect only in certain DGPs. In future work, it will be useful to characterize the datasets on which our method is likely to work and where it fails. plainnat § MATHEMATICAL PRELIMINARIES §.§ Causal Effect Estimation and RieszNet Estimator Let (X, Y, A) be the input where A is the auxiliary label and X is the input covariate and Y is the task label. Assuming binary A, the average causal effect (or equivalently called as average treatment effect) of A on the task label Y is defined as (Chapter 3 in <cit.>): TE_A = 𝔼[Y|do(A)=1] - 𝔼[Y|do(A)=0] Given X satisfies sufficient backdoor adjustment the causal effect can be estimated using (see Chapter 3 in <cit.> for details): Direct TE_A = 𝔼_X[𝔼[Y|X,A=1] - 𝔼[Y|X,A=0] ] Henceforth we will refer to the above estimate of causal effect as Direct. Often in practice, the backdoor adjustment variable X is high dimensional for e.g. vector encoding of text or images, and thus it becomes difficult to estimate the above quantity. Thus several methods have been proposed to efficiently estimate and debias the causal effect for high dimensional data <cit.>. In this paper, we use one of the recently proposed methods <cit.>, henceforth denoted as Riesz, that uses insights from the Riesz representation theorem to create a multitasking neural network-based method to get a debiased estimate of the causal effect. Since this method uses a neural network to perform estimation it is desirable in many data modalities like text and image where we need a neural network to get a good representation of the input. Let g(X,A) 𝔼[Y|X,A], then the estimator used by Riesz is given by: TE_A = 𝔼[α_0(X,A) · g(X,A)] where α_0(X, A) is called a Riesz representer (RR) whose existence is guaranteed by the Riesz representation theorem. Lemma 3.1 in <cit.> states that to estimate g(X,A)= 𝔼[Y,X,A] it is sufficient condition of α_0(X,A) s.t. g(X,A) = E(Y|α_0(X,A)). Since α_0(X,A) is a scalar quantity it could be more efficient to estimate E(Y|α_0(X,A)) than to condition on high dimensional (X,A). Thus, Riesz uses a multi-tasking objective to learn both α_0(X,A) and g(X,A) jointly. First, a neural network is used to encode the input (X,A) to a latent representation Z. Then, the architecture branches out into two heads (1) the first head trains the Riesz representer α_0(X,A) and the second branch is used to train g(X,A) (see Figure 1 and Section 3 in <cit.> for details). Let the loss used to train the Riesz representer be called as RRloss and the loss used to train g(X,A) be called as REGloss and L2Reg be the l2 regularization loss (see Section 3 in <cit.> for the exact form of loss). Then the final loss becomes: RieszLoss = REGloss + R_1 ·RRloss + R_2 ·L2RegLoss where R_1 and R_2 be the regularization hyper parameters. <cit.> also suggests using another term in the overall loss (TMLEloss) which we don't consider in the current setup. 
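For concreteness, the sketch below shows one common way to write such a multitask loss in PyTorch, combining a regression loss for g(X,A), a Riesz-representer loss for the average-treatment-effect functional, and an L2 penalty. The exact loss forms used by the cited estimator are deferred to the original paper, so the formulas here (in particular the representer loss E[α(X,A)^2 − 2(α(X,1) − α(X,0))]) should be read as a standard instantiation rather than the precise implementation; encoder, g_head, and rr_head are assumed user-defined modules and A is assumed binary.

```python
import torch
import torch.nn.functional as F

def riesz_multitask_loss(encoder, g_head, rr_head, x, a, y, r1, r2):
    """A sketch of RieszLoss = REGloss + R_1 * RRloss + R_2 * L2RegLoss."""
    z = encoder(x, a)
    # Regression head: fit g(X, A) ~ E[Y | X, A]
    reg_loss = F.mse_loss(g_head(z).squeeze(-1), y)

    # Riesz representer loss for the ATE functional m(W; alpha) = alpha(X,1) - alpha(X,0):
    # minimize E[alpha(X,A)^2 - 2 * (alpha(X,1) - alpha(X,0))]
    alpha = rr_head(z).squeeze(-1)
    alpha1 = rr_head(encoder(x, torch.ones_like(a))).squeeze(-1)
    alpha0 = rr_head(encoder(x, torch.zeros_like(a))).squeeze(-1)
    rr_loss = (alpha.pow(2) - 2.0 * (alpha1 - alpha0)).mean()

    # Simple L2 penalty over all parameters of the shared encoder and heads
    params = list(encoder.parameters()) + list(g_head.parameters()) + list(rr_head.parameters())
    l2_loss = sum(p.pow(2).sum() for p in params)

    return reg_loss + r1 * rr_loss + r2 * l2_loss
```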
After training the whole neural network jointly with the above loss objective the causal effect estimate could be calculated using: Riesz 𝔼_X [g(X,A=1) - g(X,A=0)] <cit.> also proposes another debiased measure of the causal effect given by: DebiasedRiesz 𝔼_X[(g(X,A=1) - g(X,A=0) ) + α(X,A) (Y-g(X,A)) ] We observe that this measure gave a worse estimate that Riesz on dataset where the ground truth causal effect is known. Thus in all our experiments we only use the Riesz estimator and leave the exploration of DebiasedRiesz to future work. See subsec:app_causal_effect_estimators for the setup we use in our experiments to estimate the causal effect for different datasets. §.§ Max Margin Objective Taking inspiration from the description of the max-margin classifier from <cit.> and <cit.> we give a brief introduction to the max-margin training objective that is used to train the classifier we consider in theorem:lambda_sufficient. Given the encoder h: x→Z mapping the input to latent space representation Z and the classifier c: Z→ Y uses the latent space representation Z to predict the task label Y and be of form c(z) = w·z. The hyperplane c(z)=0 is called the decision boundary of the classifier. Let the task label be binary and takes the value 1 or -1, otherwise, the labels could be relabelled to conform to this notation. Then the points falling on one side of decision boundary c(z< 0 are assigned one predicted label (say -1) and the ones falling on the other side the other predicted label (say +1). The distance of the point “z”, having task label “y”, from the decision boundary is given by: ℳ_c(z) = y · c(z)/w_2 where w_2 is the L2 norm of the classifier. The margin of a classifier is the distance of the closest latent representation z from the decision boundary. Thus the goal of the max-margin objective is then to train a classifier that has the maximum margin. Equivalently, we want to minimize the following loss for training a classifier that has the largest margin. ℒ_mm(c(z),y) = (-1)·min_i{y^ic(z^i))/c(z)_2} where “i” indexes all the training data points. § PROOF OF PROP:IDENTIFIABILITY * First Claim: To identify the causal effect of A on the label Y, we need access to the interventional distribution P(Y|do(A)). For example, in the case when A is binary random variable the average causal effect of A on Y is given by TE_A = 𝔼[Y|do(A)=1] - 𝔼[Y|do(A)=0]. We can write P(Y|do(A)) as: P(Y|do(A)) = ∑_X∑_U P(Y,X,U|do(A)) = ∑_X∑_U{ P(U) P(X|U,do(A)) P(Y|X) } = ∑_X P(Y|X) {∑_U P(U) P(X|U,do(A)) } = ∑_X P(Y|X) P(X|do(A)) Since we have access to the interventional distribution P(X|do(A)) and access to observational distribution P(Y, X, A), we can also estimate P(Y|do(A)) using the above identity completing the first part of the proof. Second Claim: We are given the 𝒱={C, S}, but we do not know the distinction between the causal and spurious beforehand. We will show that we can identify the interventional distribution P(Y|do(A)) where A∈𝒱. If A=C, then we have: P(Y|do(C)) = ∑_S P(Y,S|do(C)) = ∑_S P(S|do(C)) P(Y|S,C) = ∑_S P(S) P(Y|S,C) = ∑_𝒱∖ A P(𝒱∖ A) P(Y|𝒱) Then, when we have A=S, we have: P(Y|do(S)) = ∑_C P(Y,C|do(S))=∑_C P(C|do(S)) P(Y|C,do(S)) = ∑_C P(C) P(Y|C,S) ∵ (Y⊥ S | C) = ∑_𝒱∖ A P(𝒱∖ A) P(Y|𝒱) Thus, P(Y|do(A))=∑_𝒱∖ A P(𝒱∖ A) P(Y|𝒱) for all A∈𝒱 and could be estimated from pure observational data. § FAILURE OF MOULI ON DGP-2 AND DGP3 <cit.> define a particular DGP for their task and proposes a score that identifies the invariant transformations. 
DGP-2 and DGP-3 in fig:dgp adapt the causal graph taken in their work to our setting where the unobserved variable associated with every transformation is replaced with an observed . In addition, we add an additional level of complexity to the DGP by introducing the unobserved confounding variable U that introduces a spurious correlation between different in DGP-2 and between an and task label in DGP-3. The graph in DGP-1 is different from their setting and thus cannot use their method. This shows that the method proposed in their work doesn't generalize to different DGPs. Below, we show that their method will be able to identify the spurious in the DGP-2 but fail to do so in DGP-3. corollarymoulicorol Let the observed input X be defined as the concatenation of the C,S and x in DGP-2 i.e. X=[C,S,x] and A and x in DGP-3 i.e. X=[A,x]. Then, Theorem 1 in <cit.> will correctly identify the spurious in DGP-2. For DGP-3, it will incorrectly claim that there is an edge between A and Y even if it is non-existent in the original graph. For a A to be spurious, the score proposed in Theorem 1 in <cit.> requires conditional independence of the A with the task label Y given the rest of the observed and core input feature x. For DGP-2, we show that both the spurious S correctly satisfy that condition whereas the causal C correctly doesn't satisfy satisfy the condition. Then for DGP-3, we show that even if the A is spurious, it cannot be conditionally independent to Y given x due to unblocked path A→ U → Y in the graph. See the complete proof below. Theorem 1 in <cit.> defines a score, henceforth called mouli's score, to identify whether there is an edge between any node N in the graph and the main task label Y. There is no edge between N and Y iff for all X : |P(Y|Γ_N(X),U_Y) - P(Y|X,U_Y)|_TV = 0 where Γ_N is the most-expressive representation that is invariant with respect to the node N. Case 1 – DGP-2: For DGP-2 when the node N is causal (C), we have X=[x,C,S], Γ_C(X) = [x,S] and U_Y = ϕ. Thus, the mouli's score becomes: |P(Y|x,S) - P(Y|x,C,S)|_TV≠ 0 since for at least one combination of x,C,S we have P(Y|x,S)≠ P(Y|x,C,S) since C ⊥̸Y |S,x in DGP-2. Thus, mouli's score doesn't incorrectly mark causal node C as spurious. Next for a spurious node S, we have Γ_S(X) = [x,C]; thus the mouli's score becomes: |P(Y|x,C) - P(Y|x,C,S)|_TV = 0 for all x,C,S we have P(Y|x,C) = P(Y|x,C,S) since S ⊥ Y | C,x. Thus the mouli's score correctly identifies the spurious node S. Case 2 – DGP-3: For DGP-3, when the node N=A, we have X=[x, A], Γ_A(X) = [x] , and if is spurious i.e. there is no edge from A to Y in the actual graph then the mouli's score becomes: |P(Y|x) - P(Y|x,A)|_TV≠ 0 for all x,A since A ⊥̸Y|x P(Y|x) ≠ P(Y|x,A) for atleast one setting of (x,A,Y). Thus the mouli's score will be non-zero and will fail to identify A as a spurious node. § PROOF OF THEOREM:LAMBDA_SUFFICIENT We start by formalizing the assumption we made about the disentangled latent space in subsec:theoretical_analysis_robustness. [Disentangled latent space] Let 𝒜={ca_1,…,ca_K,sp_1,…,sp_J} be the set of given to us. The latent representation z is disentangled and is of form [z_ca_1,…,z_ca_K,z_sp_1,…,z_sp_J], where z_ca_k∈ℝ^d_k is the features corresponding to causal “ca_k" and z_sp_j∈ℝ^d_j are the features corresponding to spurious “sp_j". Here d_k and d_j are the dimensions of z_ca_k and z_sp_j respectively. 
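Before stating the theorem, a toy computation illustrates the K=J=1, p=2 case: as long as the (possibly noisy) effect estimates preserve the correct ranking, i.e. |T̂Ê_ca| > |T̂Ê_sp| so that λ_ca < λ_sp, the causal-only unit-norm classifier pays a strictly smaller regularization penalty than any unit-norm classifier that also uses the spurious feature. The numerical estimates below are hypothetical and chosen only for illustration.

```python
import numpy as np

# Noisy causal-effect estimates that still preserve the correct ranking:
# the causal attribute has the larger estimated effect in absolute value.
te_causal, te_spurious = 0.4, 0.1              # hypothetical Stage-1 estimates
lam_ca, lam_sp = 1 / abs(te_causal), 1 / abs(te_spurious)

def penalty(w_ca, w_sp):
    """Regularization term of def:cereg_theory for a unit-norm linear
    classifier c(z) = w_ca * z_ca + w_sp * z_sp (K = J = 1, p = 2)."""
    return lam_ca * np.abs(w_ca) + lam_sp * np.abs(w_sp)

# Desired classifier: uses only the causal feature.
print("causal-only:", penalty(1.0, 0.0))

# Any unit-norm classifier that also uses the spurious feature pays more.
for theta in np.linspace(0.0, 0.99, 5):
    w_ca, w_sp = theta, np.sqrt(1 - theta**2)
    print(f"w_ca={w_ca:.2f}:", penalty(w_ca, w_sp))
```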
Now we are ready to formally state the main theorem and its proof that shows that it is sufficient to have the correct ranking of the causal effect of causal and spurious to learn a classifier invariant to the spurious using one instantiation of our regularization objective (<ref>). Our proof goes in two steps: * First Claim: To prove that the desired classifier will be preferred by our loss objective over the classifier learned by the max-margin objective (henceforth denoted as undesired), we compare the loss incurred by both the classifier w.r.t to our training objective. Next, we show that given the ranking of the causal effect of causal and spurious follows the relation mentioned in the theorem statement, there always exists a regularization strength when the desired classifier has a lower loss than the undesired one w.r.t to our training objective. * Second Claim: Using our first claim we show that the desired classifier will have a lower loss than any other classifier that uses the spurious . Thus the desired classifier is preferred over any classifier that uses the spurious using our loss objective. Then we show that among all the classifier that only uses the causal for prediction, again our loss objective will select the desired classifier. * Proof of Part 1: Since we are training our task classifier using the max-margin objective (eq:max_margin_obj_prelim) thus over training objective becomes: ℒ_(c(h(x)))ℒ_mm(c(h(x)),y) + R·∑_k=1^Kλ_ca_kw_ca_k_p + ∑_j=1^Jλ_sp_jw_sp_j_p where ·_p is the L_p norm. For ease of exposition, we will denote c(h(x)) as c(z) where z is the latent space representation of x given by z=h(x). Next, we will show that there always exists a regularization strength R≥0 s.t. ℒ_(c^des(z))< ℒ_(c^mm(z)) given mean({λ_ca_k/λ_sp_j·η_k,j}_k ∈ [K], j ∈ [J]) < J/K (from the main statement of theorem:lambda_sufficient). To have ℒ_(c^des(z))< ℒ_(c^mm(z)) we need to select the regularization strength R≥0 s.t: ℒ_mm(c^des(z),y) + R ·{∑_k=1^Kλ_ca_kw^des_ca_k_p} < ℒ_mm(c^mm(z),y) + R ·{∑_k=1^Kλ_ca_kw^mm_ca_k_p + ∑_j=1^Jλ_sp_jw^mm_sp_j_p} The max-margin objective (see subsec:app_max_margin_obj for background) given is given by ℒ_mm(c(z),y) = (-1)·min_i{y^ic(z^i))/c(z)_2} where z^i is the latent space representation of the input x^i with label y^i and c(z)_2 is the L2-norm of the weight vector of the classifier c(z). The norm of both the classifier c^des(z) and c^mm(z) is 1 (from the statement of this theorem:lambda_sufficient). Substituting the max-margin loss for both the classifier in eq:claim2_loss_compare we get: (-1) · min_i{ y^i c^des(z^i) } + R ·{∑_k=1^Kλ_ca_kw^des_ca_k_p} < (-1) ·min_j{ y^j c^mm(z^j) } + R ·{∑_k=1^Kλ_ca_kw^mm_ca_k_p + ∑_j=1^Jλ_sp_jw^mm_sp_j_p} Rearranging the above equation we get: R · {∑_k=1^Kλ_ca_k[w^mm_ca_k_p -w^des_ca_k_p] + ∑_j=1^Jλ_sp_jw^mm_sp_j_p}_LHS-term 1 > min_j{ y^j c^mm(z^j) } - min_i{ y^i c^des(z^i) } If the LHS-term 1 in the above equation is greater than 0, then we can always select a regularization strength R>0 s.t. the above inequality is always satisfied, whenever R > min_j{ y^j c^mm(z^j) } - min_i{ y^i c^des(z^i) }/{∑_k=1^Kλ_ca_k[w^mm_ca_k_p -w^des_ca_k_p] + ∑_j=1^Jλ_sp_jw^mm_sp_j_p} Next we will show that given mean({λ_ca_k/λ_sp_j·η_k,j}_k ∈ [K], j ∈ [J]) < J/K, the LHS-term1 in eq:claim2_final_r_inequality is always greater than 0. 
For LHS-term1 to be greater than 0 we need: ∑_k=1^Kλ_ca_k[w^mm_ca_k_p -w^des_ca_k_p] > (-1) ·∑_j=1^Jλ_sp_jw^mm_sp_j_p ∑_k=1^Kλ_ca_k[w^des_ca_k_p - w^mm_ca_k_p] < ∑_j=1^Jλ_sp_jw^mm_sp_j_p Since λ_(·) = 1/|T̂Ê_(·)| and |T̂Ê_(·)| ∈ [0,1] we have λ_(·)>0 (see subsec:theoretical_analysis_robustness for discussion). From the main statement of this theorem we have w_sp_j^mm≠0w_sp_j^mm_p >0 for all value of “p”. Next, we have the following two cases: Case 1 ( ∑_k=1^Kλ_ca_k[w^des_ca_k_p - w^mm_ca_k_p]≤ 0 ): For this case the above eq:main_lambda_inequality is trivially satisfied since ∑_j=1^Jλ_sp_jw^mm_sp_j_p>0 as ∀ j∈[J], λ_sp_j> 0 and w_sp_j^mm_p>0. Case 2 ( ∑_k=1^Kλ_ca_k[w^des_ca_k_p - w^mm_ca_k_p]>0 ): Since for all j∈[J] we have λ_sp_j> 0 and w_sp_j^mm_p>0, rearranging we get: ∑_j=1^Jλ_sp_jw^mm_sp_j_p/∑_k=1^Kλ_ca_k[w^des_ca_k_p - w^mm_ca_k_p] > 1 ∑_j=1^J1/∑_k=1^K(λ_ca_k/λ_sp_j)(w_ca_k^des_p-w_ca_k^mm_p/w_sp_j^mm_p) >1 1/∑_j=1^J{1/∑_k=1^K(λ_ca_k/λ_sp_j)(w_ca_k^des_p-w_ca_k^mm_p/w_sp_j^mm_p)_LHS-term 3}· J < J The above equation is the harmonic mean of the LHS-term 3 and LHS-term 3 is >0 for all values of j. We know that the harmonic mean is always less than equal to the arithmetic mean for a set of positive numbers (Inequalities book by GH Hardy <cit.>). Thus the above inequality is satisfied if the following inequality is satisfied: ∑_j=1^J∑_k=1^K(λ_ca_k/λ_sp_j)(w_ca_k^des_p-w_ca_k^mm_p/w_sp_j^mm_p)/J· K < J/K Since the above condition on regularization strength of features is satisfied (given in the main statement of this theorem), we have ℒ_(c^des(z))< ℒ_(c^mm(z)) thus completing the proof. The above eq:claim2_lambda_alpha_ineqality states that the mean of (λ_ca_k/λ_sp_j)(w_ca_k^des_p-w_ca_k^mm_p/w_sp_j^mm_p) should be less than J/K. Thus the above equation is also satisfied if individually all the terms considered when taking the mean are individually less than J/K. Thus, a stricter but more intuitive condition on λ's s.t eq:claim2_lambda_alpha_ineqality is satisfied is given by: (λ_ca_k/λ_sp_j)η_k,j < J/K T̂Ê_sp_j(K/Jη_k,j) < T̂Ê_ca_k where η_k,j = (w_ca_k^des_p-w_ca_k^mm_p/w_sp_j^mm_p). Proof of Part 2: Given p=2 and since K=1 and J=1, the desired classifier takes form c^des(z)=w^des_ca,1·z_ca,1 = w^des_ca·z_ca. For ease of exposition, we will drop “1" from the subscript which denotes the feature number. Let c^s(z)=w^s_ca·z_ca+w^s_sp·z_sp be any classifier which uses spurious feature z_sp for its prediction s.t. w_sp^s≠0. From statement 1 of this theorem, when the regularization strength are related such that λ_ca·η < λ_sp, we have ℒ_(c^des(z))<ℒ_(c^s(z)) where η = w_ca^des_2-w_ca^s_2/w_sp^s_2. Since the search space of the linear classifiers is constrained to have the norm of parameters equal to 1 we have w_ca^s_2^2+w_sp^s_2^2=1 and w_ca^des_2=1. Let w_ca^s=θ∈[0,1), then we have w_sp^s=√(1-θ^2). Substituting these values in η we get: η(θ) = 1-θ/√(1-θ^2) = √(1-θ/1+θ) (θ≠ 1) Thus η(θ) is a decreasing function for θ∈[0,1) and has its maximum value η_max=1 at θ=0. Thus, if the regularization strengths are related such that λ_ca·η_max<λ_spλ_ca<λ_sp we have ℒ_(c^des(z))<ℒ_(c^s(z)) for all possible c^s(z). Since we are given that λ_ca<λ_sp in the second statement of the theorem c^des(z) is preferred by our regularization objective among all possible c^s(z). 
Next, among the classifier c^ψ(z)≠ c^des(z) which only uses the causal feature for prediction, we have: ℒ_(c^des(z)) = ℒ_mm(c^des(z)) + R ·{λ_caw_ca^des_2} = ℒ_mm(c^des(z)) + R λ_ca· 1 ℒ_(c^ψ(z)) = ℒ_mm(c^ψ(z)) + R ·{λ_caw_ca^ψ_2} = ℒ_mm(c^ψ(z)) + R λ_ca· 1 Since the desired classifier has maximum margin when using only causal feature for prediction (by definition), we have ℒ_mm(c^des(z))<ℒ_mm(c^ψ(z)) for all other classifiers (c^ψ(z)) which only uses the causal feature for prediction. Thus the desired classifier is the global optimum for our loss function ℒ_ when the classifiers are constrained to have parameters with a norm equal to 1 thereby completing the proof. § EXPERIMENTAL SETUP §.§ Datasets We perform extensive experiments on 6 datasets spanning synthetic, semi-synthetic, and real-world datasets. In sec:empresult we give results for 3 such datasets — which is a synthetic dataset, which is a semi-synthetic dataset and which is a real-world dataset. In appendix:empresult, we further evaluate our methods and other baselines on one additional real-world dataset () which has three different subsets. Below we give a detailed description of all the datasets. Dataset. To create this dataset we first create a tabular dataset with three binary — causal, spurious and confound. The causal and confound create the task label (Y) and the confound creates the spurious . Causal and confound are independent, P(causal=0)=P(causal=1)=0.5 and P(confound=0)=P(confound=1)=0.5. The conditional probability distribution (CPD) of the task label given to the parents is given in tbl:syn_cpd. The CPD for spurious is not fixed i.e P(spurious|confound)=κ, which we vary in our experiment to change the overall predictive correlation of spurious with the task label. We then create two versions of this dataset (1) and (2) . In the dataset, we keep all the in the dataset but for to simulate the real-world setting where there is an unobserved confounding variable we remove the confound from the dataset. Post this, we use this tabular dataset to generate textual sentences for every example. For each of the values of (observed) , we sample 3 words from a fixed set of words (separate for each value of ) that we append together to form the final sentence (see tbl:syn_wordlist for the set of words corresponding to every ). The dataset corresponds to the DGP-3 in the fig:dgp where the unobserved node U is the confound and A is the spurious and there is no edge between A and task label Y. In our experiment, we sample 1k examples using the above methods and create an 80-20 split for the train and test set. Note that the predictive correlation mentioned in experiments and other tables for all versions of dataset is between the confound and spurious given by κ=P(spurious|confound). Our experiments require access to the counterfactual example x_a'∼𝒬(x_a) where the A=a is changed to A=a' in the input. To generate this counterfactual example we flip the value of the in the given input and generate the corresponding sentence using the same procedure mentioned above. Dataset. We use the MNIST to evaluate the efficacy of our method on the vision dataset. Following <cit.>, we subsample only the digits 3 (digit label=0) and 4 (digit label=1) from this dataset and create a synthetic task. To create this dataset, we first take a grayscale image (with digit either 3 or 4). Then we add background color — red labeled as 1 or green labeled as 0 — to the image uniformly randomly i.e. P(color|digit)=0.5. 
We create the task label using a deterministic function over the digit and color formally defined as Y = color XOR digit where XOR is the exclusive OR operator. Thus the , digit, and color are causal for this dataset. Next, to introduce spurious correlation we add rotation transformation to the image — 0^ labeled as 1 or 90^ labelled as 0. We vary the correlation between the causal (color and digit) and spurious (rotation) by varying the CPD P(rotation|color, digit). Since the combined causal label is the same as the task label the predictive correlation between the task label and spurious is given by κ = P(rotation|color, digit) which is vary in all our experiments. The above data-generating process resembles DGP-2 in the fig:dgp where the node C is the combined causal (color and digit) and the node S is the spurious (rotation). The core input feature x is an empty set in this dataset. All the combinedly create the final input image X. We sample 10k examples using the above process described above and create an 80-20 split for the train and test set. Our experiments require access to the counterfactual example x_a'∼𝒬(x_a) where the A=a is changed to A=a' in the input. To generate this counterfactual example we flip the value of the relevant the label and generate the counterfactual image. Dataset. This is a real-world dataset where given a sentence the task is to predict the sentiment of the sentence. Following <cit.>, we simplify the task to predict the binary sentiment (Positive or Negative) given the tweet. Every tweet is also associated with the demographic “race” which is correlated with the task label in the dataset. Following <cit.> and considering that changing the race of the person in the tweet should not affect the sentiment, we consider race as the spurious . We use the code made available by <cit.> to automatically label the tweet with the race that uses AAE (African-American English) and SAE (Standard American English) as a proxy for race. This data-generating process of this dataset resembles DGP-1 in fig:dgp since the sentiment (task) labels are annotated using a deterministic function given the input tweet (<cit.>). The dataset [TwitterAAE dataset could be found online at: <http://slanglab.cs.umass.edu/TwitterAAE/>] and code [The code for dataset acquisition and automatically labeling race information is available at: <https://github.com/yanaiela/demog-text-removal> ] to create the dataset are available online. We subsample 10k examples and use an 80-20 split for the train and test set. See subsec:real_dataset_kappa for details on how we vary the predictive correlation in this dataset between the task labels and spurious in our experiment. Our experiments require access to the counterfactual example x_a'∼𝒬(x_a) where the A=a is changed to A=a' in the input. Since this is a real-world dataset where we don't have access to the data-generating process. Thus we take help from GPT3.5 (text-davinci-003 model) <cit.> to automatically generate the counterfactual tweet where the race is changed. See tbl:cfgen_aae for a sample of generated counterfactual tweets. Datasets. To further evaluate our result on another real-world dataset we conduct use another dataset [Civil Comments dataset is available online at <https://www.tensorflow.org/datasets/catalog/civil_comments>] (WILDS dataset, <cit.>). Given a sentence, the task is to predict the toxicity of the sentence which is a continuous value in the original dataset. 
In our experiment, we binarize this task to predict whether the sentence is toxic or not by labeling the sentence with toxicity score ≥ 0.5 as toxic or otherwise non-toxic. We use a subset of the original dataset (CivilCommentsIdentities) which includes an extended set of auxiliary identity labels associated with the sentence. We finally select three different identity labels (race, gender, and religion). Race takes two values black or white, Gender takes two values male or female, and Religion also takes two values muslim or christian. We expect that these identity to have zero causal effect on the task label since changing a person's race, gender, or religion in the sentence should not change the toxicity of the sentence. For each considered identity label, we create a corresponding different subset of the dataset named , , and . This data-generating process of this dataset also follows DGP-1 since toxicity (task) labels were generated from human annotators. We subsample 5k examples for and and 4k examples to create dataset that we use in our experiments. Then, we use an 80-20 split to create a train and test set. For details on how we vary the predictive correlation see subsec:real_dataset_kappa. In our experiment, for every input x we need access to the counterfactual where the 's value is changed in the input. To create such counterfactuals we use a deterministic function that remove the words related to the from the sentence. Currently, we use a hand-crafted set of words for each of the for removal, we plan to replace that more natural counterfactual generated from generative models like GPT3. tbl:cfword_civil show the set of words associated with every that we use for removal. §.§ Dataset with varying levels of spurious Correlation. We create multiple subsets of the dataset with different levels of predictive correlation (κ) between the task label (y) and the (a). The task label and all the we consider in our work are binary taking values from 0 and 1. Following <cit.>, we define 4 subgroups in the dataset for each combination of (y,a). The subset of the dataset with (y=1,a=1) and (y=0,a=0) is defined as majority group S_maj where the task labels y are correlated with the . The remaining subset of the dataset where the correlation break is named S_min which contains the dataset with (y=1,a=0) and (y=0,a=1). Next, we artificially vary the correlation between the and the task label by varying κ = Pr(y=a) i.e. the number of examples with the same task label and . Following <cit.>, we can reformulate κ in terms of the size of the majority and minority groups: κ|S_maj|/|S_maj|+|S_min| For both and dataset, we consider 6 different settings of κ from the set {0.5,0.6,0.7,0.8,0.9,0.99} by artificially varying the size of S_maj and S_min. To do so, we keep the size of S_maj fixed, and then based on the desired value of κ we determine the number of samples to take in S_min using the above equation. Thus for different values of κ the overall training dataset size (|S_maj|+ |S_min|) changes in and datasets. Thus it is important to only draw an independent conclusion from the different settings of κ in these datasets. §.§ Encoder for the different datasets. We give the details of how we encode different types of inputs to feed to the neural network in our experiment. For specific details of the rest of the architecture see the individual setup for every method in subsec:app_causal_effect_estimators, <ref> and <ref>. Dataset. 
We tokenize the sentence into a list of words and use 100-dimensional pretrained GloVe word embedding <cit.> to get a vector representation for each word. Next, to get the final representation of a sentence we take the average of all the word embedding in the sentence. The word embedding is fixed and not trained with the model. Post this we add an additional trainable fully connected layer (with output dimension 50) without any activation to get the final representation for the sentence used by different methods with different loss objectives (see subsec:app_causal_effect_estimators, <ref> and <ref>). Dataset. We directly take the image as input and normalize it by dividing it with a scalar 255. Post this we use a convolutional neural network for further processing the image. Specifically, we apply the following layers in sequence to get the final representation of the image. * 2D convolution layer, activation = relu, filter size = (3,3), channels = 32 * 2D max pooling layer, with filter size (2,2) * 2D convolution layer, activation = relu, filter size = (3,3), channels = 64 * 2D max pooling layer, with filter size (2,2) * 2D convolution layer, activation = relu, filter size = (3,3), channels = 128 Next, we flatten the output of the above last layer to get the final representation of the image used by different methods with different loss objectives (see subsec:app_causal_effect_estimators, <ref> and <ref>). and Datasets. We use Hugging Face <cit.> transformer implementation of BERT <cit.> bert-base-uncased model to encode the input sentence. We use the pooled output of the [CLS] token as the encoded representation of the input for further processing by other methods (see subsec:app_causal_effect_estimators, <ref> and <ref>). We start with the pretrained weight and fine-tune the model based on the specific task. §.§ Setup: Causal Effect Estimators We use two different causal effect estimators — Direct (eq:direct_effect_estimator) and Riesz (eq:riesz_estimator) — to estimate the causal effect of a on the task label Y (see subsec:app_causal_effect_defs for details of the individual estimators). Direct estimator in practice. For Direct estimator we use a neural network to estimate 𝔼[Y|X,A]. To get the causal effect we select the best model based either based on validation loss or validation accuracy in the prediction of the task label Y given (X,A). We refer to these two versions of Direct estimators as Direct(loss) and Direct(acc) for the setting when validation loss and validation accuracy are used for selection respectively. To estimate 𝔼[Y|X,A] we use the following loss objective: [Y - g(X,A)]^2 + R ·L2loss where g(X,A) is the neural network predicting the task label Y given input (X,A), L2loss is the L2 regularization loss and R is the regularization strength hyperparameter. For dataset, we don't apply L2loss, for dataset we do a hyperparameter search over R∈{0.0,0.1,1.0,10.0,100.0,200.0,1000.0}, for we use R∈{ 0.0,10.0,100.0,1000.0 } and dataset we use R∈{ 0.0,1.0,10.0,100.0,200.0,1000.0 }. Riesz estimator in practice. To get the causal effect from Riesz we optimize the loss function defined in eq:riesz_estimator. We fix the regularization hyperparameter R_1 for RRloss to a fixed value 1 in all the experiments and search over the L2loss regularization hyperparameter (R_2). 
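For concreteness, a minimal sketch of the encoders described above is given below. The deep-learning framework is an assumption (a Keras-style stack is used here purely for illustration; the text does not name the library), and the trailing flatten / linear layer produces the representation consumed by the different training objectives:

import tensorflow as tf

def image_encoder():
    # Convolutional encoder following the layer list above (framework assumed).
    return tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255.0),                  # input normalized by 255
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(128, (3, 3), activation="relu"),
        tf.keras.layers.Flatten(),                               # final image representation
    ])

def text_encoder(embedding_dim=100, output_dim=50):
    # GloVe-average encoder: frozen 100-d word embeddings are averaged outside the model,
    # then a single trainable fully connected layer without activation maps to 50 dimensions.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(output_dim, activation=None, input_shape=(embedding_dim,))
    ])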
For dataset, we don't use the L2loss at all, for dataset we search over R_2 ∈{ 0.0,0.1,1.0,10.0,100.0,200.0,1000.0 }, for dataset we use R_2∈{ 0.0,10.0,100.0,200.0,1000.0 } and for dataset we use R_2 ∈{ 0.0,1.0,10.0,100.0,200.0,1000.0 }. Then similar to Direct estimator, for selecting the best model for estimating the causal effect we use validation loss and validation accuracy of g(X,A) for predicting the task label Y using input (X,A). We call the estimators Riesz(loss) and Riesz(acc) based on this selection criteria. For both the estimator we train the model for 200 epochs/iterations in dataset and 20 epochs for the rest of the datasets and use either validation loss or validation accuracy to select the best training epoch (as described above). §.§ Setup: We use the regularization objective defined in eq:general_objective and <ref> to train the classifier using our method . For each dataset, first, we encode the inputs to get a vector representation using the procedure described in subsec:encoders. We use cross-entropy objectives to train the task prediction (ℒ_task). Next, we take the causal effect estimated from Stage 1 and map it to the closest value in set { -1.0,-0.5,-0.1,0.0,0.1,0.3,0.5,1.0}. For and dataset we map the estimated treatment effect from Stage 1 to the closest value in set {-1.0, -0.7, -0.5, -0.3, -0.1, 0.0, 0.1, 0.3, 0.5, 0.7, 1.0}. The mapped causal effect is then used to regularize the model using the regularization term ℒ_Reg (see eq:general_objective and <ref>). Also, we search over the regularization strength R (eq:general_objective) to select the best model. For all the datasets we choose the value of R from set {1,10,100,1000}. We train the model for 20 epochs for all the datasets. Next, to compare the results with other methods we select the best model (across different regularization hyperparameters and training epochs) that performs the best on average over the following metric (see subsec:baseline_eval_metric for details on individual metrics): selection criteria = Majority Group Accuracy+ Minority Group Accuracy + (1-ΔProb)/3 §.§ Setup: and other related baselines As mentioned in <cit.>, given a set of (𝒜), we train two models for each subset (α) of the (α⊂𝒜) (1) model trained to predict the task label Y while being invariant to the in α, (2) a model trained to predict the random label while being invariant to the in α. Using these two models we compute the score for every α as defined in Equation 8 in <cit.>. Then we select the subset of with the lowest score as spurious and train a classifier to predict the task label and be invariant to this subset of . Similar to <cit.> we use counterfactual data augmentation (CAD) to impose invariance w.r.t to desired set of (denoted as in all our experiments). We also experiment with using to impose instead of CAD by using a causal effect equal to 0 for the for which we want to impose invariance (see subsec:baseline_eval_metric for details). When using we search over regularization hyperparameter R from the set { 1,10,100,1000}. We use the same selection criteria to select the best model for comparison with other methods as described in eq:model_selection_crit. §.§ Computing Resources We use an internal cluster of P40, P100, and V100 Nvidia GPUs to run all the experiments. 
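To make the setup above concrete, a small Python sketch of three ingredients just described is given here: mapping a Stage-1 effect estimate to the closest value of the fixed grid, the model-selection score of eq:model_selection_crit, and the minority-group subsampling that fixes the predictive correlation κ = |S_maj|/(|S_maj|+|S_min|). This is an illustrative reimplementation, not the authors' code:

import numpy as np

EFFECT_GRID = np.array([-1.0, -0.5, -0.1, 0.0, 0.1, 0.3, 0.5, 1.0])   # an extended grid is used for the other datasets

def snap_to_grid(effect, grid=EFFECT_GRID):
    # Map a Stage-1 causal-effect estimate to the closest grid value used for regularization.
    return float(grid[np.argmin(np.abs(grid - effect))])

def selection_score(maj_acc, min_acc, delta_prob):
    # Criterion used to pick the best model across epochs and regularization strengths R.
    return (maj_acc + min_acc + (1.0 - delta_prob)) / 3.0

def subsample_minority(majority, minority, kappa, seed=0):
    # Keep |S_maj| fixed and draw |S_min| = |S_maj|(1-kappa)/kappa examples,
    # so that kappa = |S_maj| / (|S_maj| + |S_min|).
    n_min = int(round(len(majority) * (1.0 - kappa) / kappa))
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(minority), size=min(n_min, len(minority)), replace=False)
    return [minority[i] for i in idx]

print(snap_to_grid(0.22), selection_score(0.92, 0.71, 0.05))           # -> 0.3  0.86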
We run each experiment for 3 random seeds and report the mean and standard error (using an error bar) for each of the metrics in our experiment of Stage 2 and other baselines and report the mean over 3 random runs for all the experiments in Stage 1 (Causal Effect Estimation). § ADDITIONAL EMPIRICAL RESULTS §.§ Extended Evaluation of Stage 1: Detecting Spurious Causal Effect Estimator Recommendation. In this work we evaluate the performance of 2 different causal effect estimators — Direct and Riesz — each with two different ways of selecting the best mode either using validation loss or accuracy (see subsec:app_causal_effect_estimators for details) on 6 different datasets. tbl:te_syn_dcf0.0 (), tbl:te_syn_dcf1.0, <ref> (), tbl:te_mnist() and tbl:te_aae (and ) summarizes the performance of different causal effect estimator on these datasets. Different datasets bring different data-generating processes which bring different challenges for these estimators since the causal effect may or may not be identifiable (see individual tables for each dataset for discussion). We observe that one particular setting of the Riesz estimator i.e. Riesz(loss) performs consistently better or comparable to other estimators across the dataset. There are certain setting, especially at the high predictive correlation (κ) where even Riesz(loss) doesn't perform well and have a large error in the estimation of treatment effect. But we will show in subsec:app_empresult_stage2 that our method is robust to error in the estimation of the causal effect. Baseline can incorrectly detect causal as spurious. When given a spurious we have seen that can sometimes fail to detect the spurious (see subsec:empresult_stage1 for discussions). But sometimes this can make an even more severe mistake — detect a causal as spurious. fig:app_synuc_all_topic_mouli demonstrate on such failure in dataset. When both the — causal and spurious — is given to , it incorrectly identifies both the as spurious for κ≤ 0.7 (shown by the red curve having the lowest score). For κ>0.7, again it incorrectly detects the true causal as spurious. If we only give the causal as input to , again causal (shown in orange) will be detected as spurious since it has a lower score than the subset with no (blue curve) for all values of κ. But if we look at the causal effect estimate of causal in dataset it is close to the correct value 0.29 for all values of κ even though the identifiability of causal effect for this dataset is not guaranteed (see tbl:te_syn_dcf1.0_causal). Thus using a continuous measure of the spuriousness of a using the causal effect estimates can help mitigate the limitation of discrete measures (used in ) like false positive and false negative detection of spurious . Spuriousness Score used by can be misleading. Given a set of , creates multiple classifiers where they impose invariance with respect to different subsets of . Using these invariant classifiers, they define a spuriousness score for all the subsets of . Then the subset of with a minimum score is selected as spurious. We observe in our experiment that when training the invariant classifier is not able to completely enforce invariance. Thus the score defined using the invariant model might be misleading. fig:app_mouli_cad_atereg shows one such instance of failure. The second row of fig:app_aae_cad shows that doesn't impose correct invariance w.r.t the spurious race (shown by high ΔProb of the orange curve). 
Next, we use to enforce correct invariance w.r.t race by regularizing the classifier with zero causal effect for race . This enforces correct invariance shown by low ΔProb in the second row of fig:app_aae_atereg. As a result, we observe that the score for the spurious (shown by the orange curve in the first row of fig:app_aae_atereg), is higher than blue. Thus enforcing the correct invariance will fail to detect the race as spurious since the blue curve has a lower score than orange for all values of κ<0.9. §.§ Extended Evaluation of Stage 2: and other baselines Evaluation of on dataset. We evaluate our method on dataset — a real-world dataset where given a sentence the task is to predict the toxicity of the sentence. We have three different subsets of this dataset, , , and where the race, gender and religion are the spuriously correlated with task label (see subsec:app_dataset for details). fig:app_overall_civil_all compares with other baselines over a number of different metrics. Unlike dataset where gave significant gains in the average group accuracy, performs comparably to and improves over over this metric on all the subsets. Although, ΔProb is low for all the baseline methods, again gives a classifier with much lower ΔProb, especially for high values of predictive correlation (κ). On all subsets of dataset, also performs comparably to other baselines over other accuracy metrics — minority group accuracy and worst group accuracy — used to evaluate the efficacy of generalization over spurious <cit.>. We extend the evaluation in <ref>, by adding the comparison of the performance of and other baselines over these accuracy metrics for , , and datasets in fig:app_overall_syn_mnist_aae_all. Overall, performs comparable or better than other baselines for and datasets. In dataset, performs better in worst group and minority group accuracy than . But has significantly lower ΔProb compared to other baselines signifying that the learned classifier is invariant to the spurious . Counterfactual Data Augmentation is not sufficient for imposing correct invariance. fig:overall_syn_mnist_aae gives us a hint that might not be sufficient for learning a model that is invariant to spurious . For dataset (see <ref>) and dataset (see fig:app_overall_civil_all), has comparable or slightly lower average group accuracy than . But has a significantly lower value of ΔProb than other baselines and thus is able to impose correct invariance w.r.t spurious . Thus it seems that counterfactual data augmentation used in might be using some other mechanism to perform better on average group accuracy than imposing invariance w.r.t to spurious . In fig:app_aae_additional we combine and to get the best out of both worlds (denoted as Mouli+CausalReg+CAD). As expected we observed that the final classifier has a significant gain in the average group accuracy and has ΔProb than classifier. We leave the task of understanding how CAD gives better gains in average group accuracy to future work. See fig:app_aae_additional for details of other additional details of experiments with CAD that might shed some light on this question. Robustness of on error in detection of spurious . follows a two-step procedure to train a classifier that generalizes to spurious correlation. First, in Stage 1, it estimates the causal effect of the given on the task label to determine a continuous degree of spuriouness for that . 
Then in Stage 2, it regularizes the classifier proportional to the estimated causal effect with the goal of imposing invariance with respect to spurious . Since is heavily reliant on the estimate of causal effect for correctly enforcing the invariance with respect to spurious we need to analyze its robustness to noise in the estimation of causal effect. For simpler classifiers, our <ref> shows that we don't need a correct causal effect estimate to learn a classifier invariant to spurious correlation. For the case when we have two high dimensional variables – one causal and another spurious — as long as the ranking of the causal effect of causal and spurious is correct will learn a classifier completely invariant to spurious . This result on a simpler classification task hinted that we don't need a completely correct estimate of to learn a more complicated invariant classifier trained on a real-world dataset. We conduct an experiment where we knowing use a range of non-zero causal effects for spurious (ground truth causal effect is zero) to regularize the model using . We observe that is less sensitive to even large noise in the causal effect (>0.5) on the metrics like average group accuracy and ΔProb for a large range of regularization strength (R). Even with a large error in the causal effect trains a classifier with high average group accuracy and low ΔProb. fig:syntextuc_spectrum, <ref> and <ref> summarizes the result for , and dataset. Also, we show the robustness of when using the causal effect estimates from different estimators we consider in our work in fig:synuc_te_expand, <ref> and <ref> for these datasets.
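The robustness sweep described above can be summarized by a short driver loop. The arguments train_fn and eval_fn below are hypothetical placeholders standing in for the (unspecified) Stage-2 training and evaluation routines; the loop only illustrates how a deliberately perturbed causal effect for the spurious attribute (true effect zero) is combined with the grid of regularization strengths R:

def robustness_sweep(train_fn, eval_fn, train_data, test_groups,
                     noise_levels=(0.0, 0.1, 0.3, 0.5, 1.0), strengths=(1, 10, 100, 1000)):
    # Regularize with a knowingly wrong causal effect and record how the metrics respond.
    results = []
    for noise in noise_levels:
        effect_used = 0.0 + noise                       # ground-truth effect of the spurious attribute is 0
        for R in strengths:
            model = train_fn(train_data, effect_used, R)            # placeholder Stage-2 training
            maj_acc, min_acc, delta_prob = eval_fn(model, test_groups)   # placeholder evaluation
            results.append((noise, R, maj_acc, min_acc, delta_prob))
    return results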
http://arxiv.org/abs/2306.02504v1
20230604233400
Scalar field instabilities in charged BTZ black holes
[ "R. D. B Fontana" ]
gr-qc
[ "gr-qc", "hep-th" ]
[email protected] Federal do Rio Grande do Sul, Campus Tramandaí-RS Estrada Tramandaí-Osório, CEP 95590-000, RS, Brazil We investigate the charged scalar field propagating in a (2+1) charged BTZ black hole. The conditions for stability are studied unveiling, for each black hole geometry, the existence of a critical scalar charge as of the evolution is unstable. The existence of growing profiles is substantiated by the deep in the effective potential that intensifies as the scalar charge increases. The phenomenum happens in every black hole geometry even for small geometry charge. In the small scalar charge regime, the field evolution is stable and in such we calculate the quasinormal modes. Scalar field instabilities in charged BTZ black holes R. D. B. Fontana July 31, 2023 ===================================================== § INTRODUCTION Lower dimensional gravity is an active field of research in the last decades in many different directions. Some of the pioneer works were launched in the 80's and 90's <cit.> and a decade later singular solutions were discovered <cit.> describing spacetimes with mass, charge, rotation and cosmological constant. Although no graviton is to be found in such theory (since no dynamical degrees of freedom are present) the non-trivial spacetime solutions of the curvature equations produce valuable dynamical consequences worth of investigation and simpler as the the 4-dimensional counterpart theory (for a detailed study of the different geometries in (2+1)-dimensions, refer to <cit.>). In particular the black holes of lower dimensional gravity exhibit interesting properties simpler then that of general relativity. As an example, demonstrated in <cit.>, the poles of the retarded correlation function of the 2-dimensional conformal field theory represent exactly the quasinormal modes of the solution, emphasizing the interpretation of their imaginary part as relaxation time in the perturbed regime. In the present work we will be concerned with the statical BTZ black hole with charge. From the perspective of thermodynamics, the Wald semiclassical limit for particles absorption was tested in <cit.>, the charged case in <cit.> and other semiclassical aspects considered in <cit.>. The geodesic motion of test particles in the nonlinear regime was examined in <cit.>, the scale dependence in <cit.> and the accelerated version of the black hole in <cit.>. In this letter we be will interested particularly in a charged scalar field that propagates in a charged BTZ geometry. The scalar perturbation considering a non-linear Maxwell term was studied e. g. in <cit.> (with/without scalar charge) and in <cit.> in the linear theory. Charged scalar perturbations and instabilities in charged black holes are expect to coexist, in strict relation to the superradiance of real waves <cit.>. The superradiance in BTZ geometries was analyzed in different studies, e. g. with rotation in <cit.> and in <cit.> with the Maxwell charge, but not considering a perturbative treatment. We will study the charged scalar field perturbation in the BTZ geometry, focusing on the instabilities that may be present. In the next section, we introduce the fundamentals of this black hole and the numerics used to evolve the scalar field in the background, following with the instability analysis in section <ref>. 
In section <ref> we calculate the quasinormal modes (QNMs) [For an incomplete list with other studies on QNMs in BTZ geometries, refer to <cit.>] and unstable frequencies for different scalar charges presenting our conclusions in section <ref>. § CHARGED BTZ BLACK HOLE AND NUMERICS We start by considering a charged BTZ black hole without rotation with metric ds^2= -fdt^2 + f^-1dr^2 + r^2 dx^2, in which f= -M + r^2 - Q^2 ln (r) is the lapse function. The geometry (<ref>) is spherically symmetric having two horizons (Cauchy - r_c - and event - r_h) hiding the physical singularity at r=0. For each specific value of r_h, the maximum value of the charge reads Q_max= √(2r_h) which is not extremal <cit.>. Extremal charged BTZ black holes happen only for M<1 divided in two branches according to the path for a naked singularity (relative to the addition or subtraction of charge) <cit.>. The lapse functions has its minimum value at r_min=Q/√(2) in which f_min=-M+Q^2( 1/2-ln(Q/√(2)) ). Whenever f_min≤ 0 we have a spacetime with two horizons encovering the singularity at r=0 and cosmic censorship (weak) is preserved. We shall take this condition as granted by imposing M ≥ 1. We want to study the free charged scalar field in the black hole geometry, whose motion equation is written as g^μν(∇_μ - iqA_μ )(∇_ν - iqA_ν )Φ_s =0 in which A = A_μ dx^μ = A_t dt = -Q√(2)ln (r) dt is the vector potential and Φ_s the scalar field. Choosing an usual field transformation <cit.>, Φ_s → e^-ikxψ/√(r) we obtain the Klein-Gordon equation as -∂^2/∂ t^2ψ + ∂^2/∂ r_*^2ψ + V(r) ψ + 2iqA_t∂/∂ tψ + q^2A_t^2ψ =0, in which V(r) the scalar potential, V(r) = f ( f/4r^2-∂_rf/2r - k^2/r^2), and dr_*=f^-1dr is the tortoise coordinate. We apply an integration scheme in double null coordinates similar to that described in <cit.> to achieve the quasinormal signal with initial data Ψ_r→∞ = 0, Ψ_u_0,v = gaussian and filter the field profile to obtain the frequencies with the Prony technique <cit.>. As a second method to check the data we obtain, we use the Frobenius expansion similar to <cit.>, considering the Klein-Gordon equation in the null direction v instead of t. Taking the radial vector z=1/r, the scalar equation in such coordinate system reads ( fz^4 ∂^2 /∂ z^2 + (2fz^3-z^2∂_r f + 2iω z^2) ∂/∂ z + ( fz^2/4 + 2q ω A_t/f - z∂_r f/2-k^2z^2) ) Ψ =0, in which ω = (ω )+i (ω ) is the eigenvalue to be numerically obtained through the implementation of (<ref>). As in <cit.> we apply a more suitable vector potential A in order to better converge the Frobenius series by choosing A_t = -Q√(2)ln (r/r_h), which allows us to avoid the first term of the potential (expanded around the event horizon, z_+) of the above equation. § INSTABILITIES In view of previous literature of charged scalar fields propagating in charged spherically symmetric black holes <cit.>, we apprehend two conditions for the presence of field instabilities. The first is related the behavior of (ω ) (usually referred as the superradiant condition) <cit.>. In the case of the CBTZ black hole, the necessary (but not sufficient) condition for the presence of unstable quasinormal modes is (see appendix <ref> for details) (ω ) > Φ_h where Φ_h stands for the value of the electric potential Φ = qQ √(2)ln (r) at the black hole event horizon. For the particular case where r_h =1 every mode fulfill such condition. It is worth mentioning that a similar condition can be obtained when studying the scattering of real-frequency waves in the geometry with physical boundary conditions. 
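As a numerical illustration of the quantities just defined (and not the integration code used for the results quoted below), the lapse, the gauge potential and the scalar potential can be evaluated directly. The Python/SciPy sketch below locates the event horizon and inspects the sign of the combination V(r) − q²A_t², whose negativity underlies the instabilities discussed next; the AdS length is set to one and the parameter values are arbitrary examples:

import numpy as np
from scipy.optimize import brentq

M, Q = 1.0, 0.5                      # black hole mass and charge (AdS length set to 1)
q, k = 2.0, 0.0                      # scalar field charge and wavenumber along x

f   = lambda r: -M + r**2 - Q**2 * np.log(r)           # lapse function
df  = lambda r: 2.0 * r - Q**2 / r
A_t = lambda r: -np.sqrt(2.0) * Q * np.log(r)          # gauge potential

r_min = Q / np.sqrt(2.0)                               # location of the minimum of f
r_h = brentq(f, r_min, 50.0)                           # event horizon: outer zero of f

def V(r):                                              # scalar potential of the wave equation
    return f(r) * (f(r) / (4.0 * r**2) - df(r) / (2.0 * r) - k**2 / r**2)

r = np.linspace(1.001 * r_h, 30.0, 4000)
theta = V(r) - q**2 * A_t(r)**2                        # combination controlling stability
print("r_h =", r_h, "  min of V - q^2 A_t^2 outside the horizon =", theta.min())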
Following the same method of <cit.>, the superradiant condition is similar to (<ref>), ω > Φ_h. The second condition is related to the signal of the effective potential in the region between the event horizon and AdS infinity <cit.>. If ϑ≡ V(r) -q^2A_t^2>0 all modes have (ω ) < 0 being stable (see appendix <ref> for specifcs). On the other hand, if we have V(r) -q^2A_t^2<0 at last in some region between r_h and AdS infinity, (ω ) > 0 is allowed. This is the case for sufficient large scalar charges expression as it can be seen in figure <ref>. As of some particular value of q, the potential develops a deep that eventually becomes negative for larger q's. The deep grows indefinitely with increasing charge, implying the existence of a critical scalar field charge q_c for every charged black hole from which point the field is considered unstable (q>q_c). It is also worth mentioning that angular momenta k>0 bring instabilities as well, for sufficient high scalar charge (see e. g. the right panel of <ref>), although k=0 represents the most unstable case or equivalently the smallest q for stable evolutions. Integrating the differential equation (<ref>) with the method described in <cit.> (for a specific description see e. g. <cit.>), we obtain the field evolution that presents a threshold for stability for each black hole charge. In figure <ref>, left panel, we see the critical value q_c ≃ 2.05 of the scalar charge for a black hole with r_h=2𝒬=1 considering field evolutions with different q_c's and k=0. Such behavior is general and was found for every black hole investigated considering different 𝒬's. The presence of a threshold of stability is robust in charged BTZ black holes: depending on the hole Q, the frontier for stable profiles is higher or smaller in q, but instabilities are always present for sufficient high scalar charge. In figure <ref> we can see this behavior illustrated: in the left panel the scalar field profiles are stable if q<q_c ∼ 2.05 and unstable for q>q_c. In that case, the higher the scalar charge, the higher the instability (see the different frequencies for different q's in the next section). In figure <ref>, the right panel displays the threshold of stability relating the scalar potential term Qq and the absolute charge of the geometry. The panel shows that the higher the geometric charge, the smaller the scalar charge that destabilizes the geometry suggesting that as a mechanism for the black hole to lose its charge, in such cases. In general, for small scalar charges the field profiles evolve stably, decaying exponentially through the quasinormal modes. We calculate those frequencies (or the unstable growing mode when it is the case) using the Prony technique in the next section. § QUASINORMAL FREQUENCIES AND UNSTABLE MODES The quasinormal frequencies and unstable modes of the massless charged scalar field are displayed in table <ref>. The results were obtained through characteristic integration and the Prony methods. We also explore the Frobenius expansion in order to check those results for the quasinormal frequencies.That can be achieved starting from Eq. (<ref>) considering just the same relations as that of <cit.>, Eqs. (13) to (18), adding the charge term to the potential, u(z)_c=2 f ωΦ. The implementation is similar to that described in <cit.> with this small modification. The expansion is poorly convergent for high values of charge (scalar, geometric) and small r_h. 
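Since the quasinormal and unstable frequencies reported next are extracted with the Prony technique, a minimal textbook least-squares Prony fit is sketched here for orientation. It is not the authors' implementation; the convention ψ ∼ e^{-iωt} is assumed, so Im(ω) < 0 corresponds to a damped (stable) mode and Im(ω) > 0 to a growing one:

import numpy as np

def prony_frequencies(psi, dt, p=2):
    # Fit psi_n ~ sum_j c_j z_j^n and return the complex frequencies omega_j = i ln(z_j) / dt.
    psi = np.asarray(psi, dtype=complex)
    N = len(psi)
    # linear-prediction step: psi_n = -sum_{k=1..p} a_k psi_{n-k}, solved by least squares
    A = np.column_stack([psi[p - j:N - j] for j in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, -psi[p:], rcond=None)
    z = np.roots(np.concatenate(([1.0], a)))            # roots of z^p + a_1 z^{p-1} + ... + a_p
    return 1j * np.log(z) / dt

# quick self-test on a synthetic damped ringdown with omega = 0.75 - 0.09 i
t = np.arange(0.0, 60.0, 0.1)
signal = np.real(np.exp(-1j * (0.75 - 0.09j) * t))
print(prony_frequencies(signal, dt=0.1, p=2))           # recovers +/- 0.75 - 0.09 i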
The results obtained with this method are, though, less than 0.2% deviant from those of table <ref> for q=0.1. When q increases, the deviation increases as well, up to a maximum of a few percent for the quasinormal modes with q ≤ 1. The smaller the q, the smaller the deviation between both methods. It is, though, not possible to calculate unstable frequencies through the Frobenius series, as the series does not converge in that limit. Our results of table <ref> and figures <ref> are consonant with those obtained in the non-linear electrodynamical regime: the higher the black hole charge, the smaller the scalar charge at which instabilities are found. This can be seen e. g. in <cit.>, where unstable modes are present with smaller scalar charges when the geometric charge is increased (see e. g. Tables V and VI). The instabilities of the scalar field we found are robust, physically motivated and also found in the non-linear theory. § DISCUSSION In this letter we studied a charged scalar field propagating in the non-spinning charged BTZ black hole. We verified that small-charge fields have stable evolutions in the fixed geometry, providing a quasinormal spectrum that depends on the geometry parameters (i. e. black hole charge, mass and AdS radius). With two different numerical methods we calculated those quasinormal spectra, examining the influence of the scalar/geometric charges on them and verifying that the black hole charge acts in different directions for the real and imaginary parts of ω. In this sense, the key aspect to understand the quasinormal spectrum is the presence of a particular scalar charge q_c beyond which the scalar field destabilizes the geometry. Whenever the propagating field has q<q_c, its evolution is dictated by a quasinormal spectrum whose fundamental mode diminishes (both imaginary and real parts) as 𝒬 increases. For high scalar charges q ≫ q_c, though, not only is the field profile unstable, but the instability increases with the augmentation of 𝒬. Similarly, in such a regime, (ω ) increases with 𝒬. The field instabilities are also unveiled through the potential analysis, as demonstrated in figures <ref>: the deeper the potential dip, the more likely the scalar field is to evolve unstably. In that sense, for every geometry configuration, scalar fields with sufficiently high charges are unstable. Possible lines of investigation in the topic include the study of other probe fields in the same geometry and in the rotating black hole with charge. § ACKNOWLEDGMENTS The author acknowledges Carlos Herdeiro for fruitful discussions. § INSTABILITY CONDITIONS FOR THE KLEIN-GORDON CHARGED FIELD In order to obtain the first condition of section <ref>, we start by considering the scalar equation in a form similar to (<ref>), ψ” + (ω - Φ )^2 ψ - V(r) ψ =0 in which Φ = -qA_t is the electric potential, ψ (t) → e^-i ω t (the usual temporal Ansatz for the field was used) and prime denotes derivation relative to r_*. Multiplying (<ref>) by ψ^*, substituting ψ^* ψ” = (ψ^* ψ ')' -|ψ '|^2 and integrating the remaining equation we obtain (ψ^*ψ') |_-∞^0 + ∫_-∞^0 (ω - Φ )^2 |ψ|^2dr_* = ∫_-∞^0 (|ψ '|^2 + V(r) |ψ|^2 )dr_* Now, the rhs of equation (<ref>) is real. If we consider the imaginary part of (<ref>), we have Im[ (ψ^*ψ') ]|_-∞^0 + 2∫_-∞^0 (ω_i ω_r - ω_iΦ) |ψ|^2dr_* = 0 where for simplicity we rewrite Im(ω ) = ω_i and Re(ω ) = ω_r. 
Now, if ω_i >0, (ψ^*ψ') |_-∞^0 = (ψ^*ψ') |^0 - (ψ^*ψ') |_-∞ = i e^2ω_i r_*(ω - Φ_h) |_-∞ =0 Since Φ is a monotonically increasing function between r_h and ∞, that has its minimum value at r_h, ω_r must lie between both limits of Φ so that (<ref>) holds. Then ω_r > Φ_h is the necessary, but not sufficient condition for the existence of unstable modes. This inequality is similar to the designated "superradiant" condition for quasinormal modes as that found in <cit.>. The second condition for instability or the destabilization of the scalar field comes from the effective potential. Let us consider equation (<ref>) in a different form, ψ” - ψ̈- U ψ =0 in which dot is the derivative with relation to t, prime to r_* and U = V(r) -Φ^2 - 2ωΦ. Let us apply ψ = e^-iω r_*Ψ in (<ref>), multiply the resultant equation by Ψ^*, substitute Ψ^* (fΨ')'= (fΨ^* Ψ ')' - f|Ψ'|^2 and integrate between r_h and AdS infinity. Then equation (<ref>) turns to -∫_r_h^∞f|Ψ'|^2dr - 2iω∫_r_h^∞Ψ^*Ψ' dr - 2∫_r_h^∞f^-1ω q A_t |Ψ|^2dr - ∫_r_h^∞f^-1( V(r) + q^2 A_t^2 )|Ψ|^2dr =0 in which we take fΨ^*Ψ'|_r_h^∞=0. We will replace the second and third terms of this equation with a series of operations in what follows. Let us consider the imaginary part of (<ref>), ( - 2iω∫_r_h^∞Ψ^*Ψ' dr ) + 2ω_i q ∫_r_h^∞A_t f^-1|Ψ|^2 dr =0. Now taking the first term of the above equation (≡) as = - ∫_r_h^∞ω_r(Ψ^* Ψ' +Ψ^*' Ψ )dr - iω_i ∫_r_h^∞(Ψ^* Ψ' -Ψ^*' Ψ )dr = ω_r/ω(- ∫_r_h^∞ω(Ψ^* Ψ)'dr +2 iω_i ∫_r_h^∞Ψ^*' Ψ dr ) = ω_r ω_i/|ω|^2(|ω|^2 |Ψ|^2/ω_i|_r_h^∞ - 2iω∫_r_h^∞Ψ ' Ψ^* dr )^* and substituting eqs. (<ref>) and (<ref>) into (<ref>) we get ∫_r_h^∞(f|Ψ'|^2 + f^-1(V(r)-q^2A_t^2)|Ψ|^2 )dr = -|ω|^2|Ψ|^2/ω_i|_r_h^∞. In this relation we can see the necessary (but not sufficient) condition for the field destabilization, ω_i >0, that of equation (<ref>).
http://arxiv.org/abs/2306.01480v1
20230602120733
How strings can explain regular black holes
[ "Piero Nicolini" ]
gr-qc
[ "gr-qc", "hep-th" ]
How strings can explain regular black holes Piero Nicolini Dipartimento di Fisica, Università degli Studi di Trieste, Trieste, Italy Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Germany Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität Frankfurt am Main, Frankfurt am Main, Germany This paper reviews the role of black holes in the context of fundamental physics. After recalling some basic results stemming from Planckian string calculations, I present three examples of how stringy effects can improve the curvature singularity of classical black hole geometries. empty § INTRODUCTION Nowadays black holes are the focus of the attention of researchers working on a variety of topics in Physics and Mathematics. Astrophysicists have recently observed the shadow of black holes that presumably are harbored in the center of galaxies <cit.>. Gravitational waves due to black hole mergers have recently been detected at LISA/Virgo facilities <cit.>. Mathematical physicists and mathematical relativists are interested in the properties of exact black hole solutions <cit.>. This activity intersects the work of those gravitational physicists that aim to circumvent the problem of dark sectors by means of theories alternative to general relativity <cit.>. The importance of black hole research, however, goes beyond the above research fields. It seems very likely that black holes are fated to be the cornerstone of our understanding of fundamental physics. §.§ Three facts about evaporating black holes To fully appreciate the significance of black holes, it is instructive to go back to the 1970's. At that time, theoretical physicists were interested in understanding nuclei and their phenomenology. Strings and dual models were formulated just few years earlier and they were already expected to die young due to the advent of QCD. Black holes and general relativity were topics of limited interest, because they were disconnected from the quantum realm. Astrophysicists, on the other hand, did not take seriously the existence of black holes, despite the growing evidence accumulated after the initial observation during the suborbital flight of the Aerobee rocket in 1964 <cit.>. Even the curvature singularity was considered just a mathematical problem, whose solution would never lead to physical consequences. Hawking, however, radically changed this perspective. He actually set new goals for theoretical physics, by initiating the study of the Universe from a quantum mechanical view point. Along such a line of reasoning, Hawking showed that, in the vicinity of a black hole, quantum field theory is strongly disturbed by gravity. Particles become an ill-defined, coordinate dependent concept <cit.>. To an asymptotic observer black holes appear like black bodies emitting particles at a temperature T∝ 1/M, i.e. inversely proportional to their mass <cit.>. The existence of a thermal radiation offered the physical support for the thermodynamic interpretation of the laws governing black holes mechanics <cit.>. It, however, left behind many open questions, such as the fate of an evaporating black hole[By black hole evaporation one indicate the process of particle emission during the full life cycle. The 1/M dependence implies an increased emission rate as the hole loses mass. Such a nasty behavior is connected to the negative heat capacity of the black hole C≡ dM/dT<0.] and the information loss paradox.[Microstates of a collapsing star are hidden behind the event horizon. 
The information is not lost but virtually not accessible. If the hole thermally radiates, it emits particles in a democratic way, de facto destroying the informational content of the initial star. This is the reason why the Hawking radiation is considered an effect that worsens the problem of the information in the presence of an event horizon. ] I list below some additional issues that are too often downplayed: * If an horizon forms, Minkowski space cannot result from the Schwarzschild metric in the limit M→ 0, since it is forbidden by thermodynamics <cit.>; * Quantum back reaction effects can tame a runaway temperature <cit.>, but they can lead to mass inflation effects <cit.>; * Quantum stress tensors imply violation of energy conditions <cit.>. In general, issues of this kind are mostly attributable to a breakdown of Hawking's semiclassical formalism. The last item of the above list is, however, intriguing. Without energy condition violation, standard matter would inevitably collapse into a curvature singularity <cit.>. As a result, already in the mid 1960's there were proposals, e.g. by Gliner <cit.> and Sakharov <cit.>, to improve black hole spacetimes with energy violating source terms. Such proposals culminated with the work of Bardeen, who obtained the first regular black hole solution <cit.>. The related line elements reads: ds^2=-( 1-2M^2 r^2/ (r^2 +P^2)^3/2 )dt^2 +( 1-2M^2 r^2/ (r^2 +P^2)^3/2 )^-1dr^2+r^2dΩ^2. Here the gravitational constant is written in terms of the Planck length G=^2. At short scale, the singularity is replaced by a regular quantum vacuum region controlled by a magnetic monopole P <cit.>.[Additional regular black hole metrics were proposed in the following years, e.g. <cit.>. For a review see <cit.>.] Against this background, the point <ref>) in the above list represented a novelty. The violation was the direct consequence of a major principle, namely the combination of quantum and gravitational effects at short scale. Conversely, for the Bardeen metric, the energy condition violation is the result of an ad hoc choice e.g. the presence of a magnetic monopole. For this reason, already in the 1980's semiclassical gravity seemed to pave the way to a possible short scale completion of the spacetime <cit.>. § CAN ONE PROBE LENGTH SCALES SMALLER THAN √(Α^')? As of today, Superstring Theory can be considered the major contender of the “quantum gravity war”, namely the current debate about the formulation of a consistent quantum theory of gravity. The success of string theory is probably due to its wide spectrum, that covers a vast number of topics and paradigms, from particle physics to cosmology <cit.>. For what concerns black holes, string theory has been applied in a variety of situations, including thermodynamics <cit.> and derivation of new metrics <cit.>. The theory has also interesting spin offs where black holes have a major role, e.g. large extra dimension paradigms <cit.> and the gauge/gravity duality <cit.>. There exist also proposals alternative to black holes like the fuzz ball <cit.>. String theory is notoriously not free from problems. One of the major limitations is the identification of genuine effective theories, namely the string landscape <cit.>. For the present discussion, it is important to recall just one specif character of string theory: its intrinsic non-locality. Such a property should come as no surprise, because strings were introduced to replace quantum field theory and guarantee ultraviolet finiteness in calculations. 
To understand the nature of such a short scale convergence, string collisions at Planckian energies were extensively studied at the end of the 1980's <cit.>. The net result was simple and, at the same time, surprising. The particle Compton wavelength turned to be modified by an additional term, namely: Δ x ≃1/Δ p + α^'Δ p. Due to the approximations for its derivation, the above uncertainty relation, known as generalized uncertainty principle (GUP), offers just the leading term of stringy corrections to quantum mechanics. Nevertheless, the GUP can capture several important new features. One can start by saying that the GUP depends on the combination of the conventional Compton wavelength λ and the gravitational radius r_g of the particle, being α^'∼ G. In practice, (<ref>) is a genuine quantum gravity result. The GUP inherits the non-local character of string theory, being Δ x ≥√(α^'). For Planckian string tension ∼ 1/^2, this is a equivalent to saying that the Planck length is actually the smallest meaningful length scale in nature. The GUP also shows that quantum gravity has a peculiar characteristic: Quantum gravity effects shows up only in the vicinity of the Planck scale. At energies lower than , there is the conventional particle physics. At energies higher than , there are conventional (classical) black holes with mass M∼Δ p. For this reason, one speaks of “classicalization” in the trans-Planckian regime <cit.>. Particles (strings) and black holes are, therefore, two possible phases of matter. The relation between them is evident by the fact that black holes have a constant “tension”, M/r_g∼^2, like a (Planckian) string <cit.>. In practice, the GUP suggests that matter compression has to halt due to the gravitational collapse into a Planckian black hole. Such a scenario is often termed gravity “ultraviolet self-completeness” and corresponds to the impossibility of probing length scale below in any kind of experiment <cit.>. The diagram of self-completeness can be seen in Fig. <ref>. Despite the great predictive power of the relation (<ref>), many things remain unclear. For instance, the details of the collapse at the Planck scale is unknown. It is not clear whether the Lorentz symmetry is actually broken or deformed, prior, during and after the collapse <cit.>. Also the nature of the confluence of the two curves λ and r_g is debated. One could speculate that there exist a perfect symmetry between λ and r_g and actually particles and black holes coincide <cit.>. Such a proposal, known as “Black Hole Uncertainty Principle Correspondence”, is currently under investigation and requires additional ingredients for being consistent with observation <cit.>. Nevertheless, there could be some room for sub-Planckian black holes, as far as there exists a lower bound for the black hole mass <cit.>. It is also possible to imagine that the confluence is non-analytic <cit.>. Finally, it has been shown that the number of the dimensions, charge and spin can drastically affect the self-completeness paradigm <cit.>. §.§ How to derive a consistent “particle-black hole” metric There exists at least one thing one knows for sure about gravity self-completeness. The Schwarzschild metric simply does not fit in with the diagram in Fig. <ref>. The problem is connected to the possibility of having a black hole for any arbitrarily small mass, i.e., M<. This implies a potential ambiguity since to a given mass, one could associate both a particle and a black hole. 
More importantly, Schwarzschild black holes for sub-Planckian masses have radii ∼ M/^2, smaller than , a fact that is in contrast with the very essence of self-completeness. The formation of black holes in such a mass regime is the natural consequence of mass loss during the Hawking emission. Customarily, one circumvents the problem by saying that there is a breakdown of semiclassical gravity. Black holes would explode even before attaining sub-Planckian masses <cit.>. The problem, however, persists when one considers alternative formation mechanism, like early Universe fluctuations <cit.> and quantum decay <cit.>. The most natural way to solve the puzzle is to postulate the existence of an extremal black hole at the confluence of λ and r_g. Degenerate horizons are zero temperature asymptotic states that can guarantee the switching off of black hole evaporation.[The switching off is also known as SCRAM phase, in analogy with the terminology in use for nuclear power plants <cit.>.] For instance, Denardo and Spallucci considered charged black holes and determined the parameters to obtain stable configurations <cit.>. Microscopic black holes can, however, share their charge and angular momentum very rapidly both via Hawking and Schwinger emissions <cit.>. What one actually needs is a SCRAM phase following the Schwarzschild phase, similarly to what predicted by Balbinot and Barletta within the semiclassical approximation <cit.>. In conclusion, the issue can be solved only if one is able to derive a metric admitting a Planckian extremal horizon for M=. Such a metric exists and it is known as holographic screen metric or simply holographic metric <cit.>. Its line element reads: ds^2=-( 1-2M^2 r/ r^2 +^2 )dt^2 +( 1-2M^2 r/ r^2+^2 )^-1dr^2+r^2dΩ^2. Eq. (<ref>) is a prototype of a quantum gravity corrected black hole spacetime. Indeed, the holographic metric offers a sort of “preview” of the characteristics of string corrected black hole metrics, I will present in the next sections. In summary one notice that: * For M≫, (<ref>) becomes the Schwarzschild metric up to some corrections, that are consistent with the predictions of Dvali and Gomez's quantum N-portrait <cit.>; * For M≃ 2.06, the Hawking temperature reaches a maximum and the black hole undergoes a phase transition to a positive heat capacity cooling down (SCRAM phase); * For M=, one has r_g= and T=0, namely the evaporation stops and leaves a Planckian extremal black hole as a remnant; * For M<, (<ref>) describes a horizonless spacetime due to a particle sitting at the origin. In practice (<ref>) perfectly separates the two phases of matter, i.e. particles and black holes, and protects the region below in Fig. <ref> under any circumstances. In addition, (<ref>) does not suffer from quantum back reaction, being T/M≪ 1 during the entire evaporation process. Also the issue of the mass inflation at point <ref>) in Sec. <ref> is circumvented. For M>, there are actually an event horizon r_g=r_+ and a Cauchy horizon r_-, r_± = ^2 ( M ±√(M^2 -^2) ), but the latter falls behind the Planck length and it is actually not accessible. From (<ref>), one notices that the horizon structure is the same of the Reissner-Nordström black hole, provided one substitutes the charge with the Planck mass Q/G⟶. Eq. (<ref>) is another key aspect of self-completeness. Gravity does not need the introduction of a cut off. The completeness is achieved by exploiting the coupling constant G as a short scale regulator. 
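In Planck units (ℓ_P = M_P = 1, G = ℓ_P²), the features listed above are easy to reproduce numerically. The sketch below is an illustration rather than anything taken from the original references: it evaluates the horizons r_± = M ± √(M² − 1), checks the zero-temperature extremal configuration at M = M_P, and locates the temperature maximum from T = f′(r_+)/4π:

import numpy as np

f_prime = lambda r, M: -2.0 * M * (1.0 - r**2) / (r**2 + 1.0)**2     # derivative of the lapse above

def horizons(M):
    # r_pm = M +/- sqrt(M^2 - 1); real only for M >= 1, i.e. no sub-Planckian black holes
    if M < 1.0:
        return None
    s = np.sqrt(M * M - 1.0)
    return M - s, M + s

def hawking_T(M):
    r_plus = horizons(M)[1]
    return f_prime(r_plus, M) / (4.0 * np.pi)

print(horizons(1.0), hawking_T(1.0))                  # extremal remnant: r_+ = 1, T = 0
Ms = np.linspace(1.0, 8.0, 4001)
Ts = np.array([hawking_T(M) for M in Ms])
i = int(Ts.argmax())
print("maximum temperature T =", Ts[i], "reached at r_+ =", horizons(Ms[i])[1])

The printed temperature profile rises to a maximum at a horizon radius of roughly twice the Planck length and then drops to zero at the extremal configuration, which is the SCRAM phase and remnant described in the list above.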
At this point, there is, however, a caveat: The spacetime (<ref>) does have a curvature singularity. The regularity was not the goal of the derivation of such a metric. The basic idea has been the introduction of fundamental surface elements (i.e. holographic screens), as building blocks of the spacetime. Each of such surface elements is a multiple of the extremal configuration, that becomes the basic information capacity or information bit. Indeed for the holographic metric the celebrated area law reads S(𝒜_+)=π/𝒜_0( 𝒜_+-𝒜_0 ) +πln( 𝒜_+/𝒜_0 ) where 𝒜_0=4π^2 is the area of the extremal event horizon, and 𝒜_+=n𝒜_0. If one accepts that surfaces (rather than volumes) are the fundamental objects, the question of the regularity of the gravitational field inside the minimal surface is no longer meaningful. The spacetime simply ceases to exist inside the minimal holographic screen. This interpretation is reminiscent of spacetime dissolution observed in quantum string condensates within Eguchi's areal quantization scheme <cit.>. [The classical spacetime is a condensate of quantum strings. At distances approaching √(α^'), long range correlations of the condensate are progressively destroyed. For α^'=G, the whole spacetime boils over and no trace of the string/p-brane condensate is left over.] § WHAT T-DUALITY CAN TELL US ABOUT BLACK HOLES Suppose one has a physical system living on a compact space, whose radius is R. Suppose there exists another physical system defined on another compact space, whose radius is proportional to 1/R. If the observables of the first system can be identified with that of the second system, one can say that such systems are equivalent or dual with respect to the transformation[The duality is termed T-duality, or target space duality.] R⟶ 1/R. For example, by setting R∼ 1/Δ p in (<ref>) one finds Δ x≃ R + α^'/R. The above relation actually maps length scales shorter than √(α^') to those larger that √(α^'), being Δ x(R) = Δ x (1/R), for suitable values of α^'. From this viewpoint, one can say that the GUP is a T-duality relation. This fact is per se intriguing because it offers an additional argument for a stringy interpretation of the holographic metric. The good part is that T-duality allows for an even more genuine contact between string theory and a short scale corrected metric. To do this, one needs to go back to a basic result due to Padmanabhan <cit.>. Standard path integrals can be thought as the sum of amplitudes over all possible particle trajectories. In the presence of gravity, the scenario is slightly modified. Indeed, there exist paths that cannot contribute to the path integral. If paths are shorter than the particle gravitational radius, they must be discarded in the computation of the amplitude. A simple way to achieve this is to introduce a damping term, e^-σ(x,y)/λ⟶ e^-σ(x,y)/λ e^-r_g/σ(x,y), for each path contribution in the sum over the paths.[We temporarily assume Euclidean signature for the ease of presentation.] The above relation implies that the path length σ(x,y) admits a minimum. Interestingly, Padmanabhan performed the sum over the above path contributions and derived a modified propagator <cit.> G(x,y; m^2)=∫d^D p/(2π)^De^-ip·(x-y) G(p), with G(p)= -l_0/√(p^2+m^2) K_1 (l_0 √(p^2+m^2)), where K_1(x) is a modified Bessel function of the second kind, and l_0 is called “zero point length” <cit.>. Eq. (<ref>) is intrinsically non-local: The Bessel function has a damping term for momenta larger than 1/l_0. 
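The damping can be made explicit with a short numerical check (illustrative only): for momenta well below 1/l_0 the modified propagator reduces to the standard one, −1/(p²+m²), while above 1/l_0 it is exponentially suppressed:

import numpy as np
from scipy.special import kv           # modified Bessel function K_nu

l0, m = 1.0, 0.1
G_mod = lambda p: -l0 * kv(1, l0 * np.sqrt(p**2 + m**2)) / np.sqrt(p**2 + m**2)
G_std = lambda p: -1.0 / (p**2 + m**2)

for p in (1e-3, 1e-2, 1e-1, 1.0, 5.0, 20.0):
    print(p, G_mod(p) / G_std(p))      # ratio -> 1 for p << 1/l0, exponentially small for p >> 1/l0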
Therefore l_0 is the minimal length that can be resolved over the manifold. Conversely, for small arguments one finds the conventional quantum field theory result. The virtue of Padmanabhan's calculation (<ref>) is two fold: The propagator is a robust result that descends from general considerations; The functional form in terms of the Bessel function exactly coincides with the correction of string theory to standard, “low energy” quantum field theory. To better understand such a crucial point we briefly sketch the line reasoning at the basis of series of papers authored by Spallucci and Padmanabhan in collaboration with Smailagic <cit.> and Fontanini <cit.>. Let us start by considering a closed bosonic string in the presence of just one additional dimension, that is compactified on circle of length l_0=2π R. The string mass spectrum can be written as M^2=1/2α^'(n^2α^'/R^2+w^2 R^2/α^')+harmonic excitations, where n labels the Kaluza-Klein excitations and w is the winding number of the string around the compact dimension.[In the process of path integral quantization, harmonic oscillators are irrelevant. Therefore we consider them frozen without unwanted consequences.] As expected the above relation enjoys T-duality. It is invariant under simultaneous exchange R↔α^' /R and n↔ w and leads to the identification of √(α^') as invariant length scale. Strings are intrinsically non perturbative objects. As a result, any perturbative expansion destroys the very essence of the theory. The only way to extrapolate a nonpertubative character that can be “adapted” to the field theoretic concept of particle is the study of the string center of mass (SCM) dynamics. From the propagation kernel of the SCM in five dimensions, one can integrate out the fifth dimension to obtain an effective four dimensional propagator K(x-y, 0-nl_0; T)=∑_n∫ [𝒟z][𝒟p][𝒟x^5][𝒟p_5]exp(...)→ K_reg(x-y; T) where x-y and 0-nl_0 are respectively the four dimensional interval and the separation along the fifth dimension. Already at this point, one can observe the regularity due to l_0, being K_reg(x-y; T)∼∑_n e^(iμ_0/2T)[(x-y)^2+n^2 l_0^2] where μ_0 is a parameter which will not appear in the final result. Additional integrations on T and w lead to Green's function G_reg(x-y)∼∑_w∫ dT e^(iT/2μ_0)m_0^2 e^(...w^2) K_reg(x-y; T), where m_0 is the mass of the particle in the limit l_0→ 0. If one considers the leading term of the above expression, namely n=w=1, one finds (<ref>) upon the condition l_0=2π√(α^'). In other words, the zero point length in four dimensions has a T-duality origin and coincides with the minimum length in string theory. The above result can be easily generalized to the case of more than one compact dimension. The conclusion is unaffected: (<ref>) is both general and fundamental! §.§ How to implement T-duality effects Starting from (<ref>), we expect important deviations from conventional Green's function equation {Differential Operator} G(x,y) = Dirac Delta, when x≈ y. For the specific case of black holes, we recall that, in the absence of spin, there are both spherical symmetry and static conditions. It is, therefore, instructive to consider the interaction potential between two static sources with mass m and M due to (<ref>), V(r) = -1/m W[J]/T = -GM ∫d^3 k/(2π)^3 .G(k)|_k^0=0 exp(i k⃗·r⃗) = -GM/√(r^2 + l_0^2). The fact that V≈ -GM/l_0 for r→ 0, is the first signal of a possible removal of the curvature singularity. To verify this is the case, one has to construct an effective energy momentum tensor for the r.h.s. 
of the Einstein equations. The procedure is equivalent to the derivation of black hole solutions by means of non-local gravity actions S=1/2κ∫𝔣(R, □, …)√(-g) d^4 x + ∫𝔏(M, F^2, □, …)√(-g) d^4 x with κ=8π G, □=∇_μ∇^μ, F the gauge field, and … standing for higher derivative terms. Eq. (<ref>) is a compact notation for a class of actions that have been studied to obtain ghost-free, ultraviolet finite gravity field equations <cit.>. For the present discussion, the details of such an action are not relevant, since it is only an effective description of the full string dynamics. Accordingly, the problem of possible pathologies of the action (e.g., ghosts, anomalies) is also of secondary concern, if one believes in the consistency of Superstring Theory. In conclusion, one can adopt a truncated version of the full non-local action <cit.> and derive the non-local Einstein equations. For F^2 =0, they read R_μν-1/2g_μν R= κ 𝔗_μν where 𝔗_μν= 𝒪^-1(□) T_μν, while the Einstein tensor and T_μν are the conventional Einstein gravity tensors. The only thing that is important to know is the degree of ultraviolet convergence of the theory, encoded in the operator 𝒪(□). At this point, one can observe that (<ref>) is consistent with the Green's function equation for (<ref>) <cit.>, namely ∇^2 G(z, z^') = - l_0√(-∇^2) K_1(l_0√(-∇^2)) δ^(3)(z - z^'). The operator can be simply read off from the above equation, taking into account that 𝒪(□)=𝒪(∇^2) if the source is static. In practice, the r.h.s. of (<ref>) is equivalent to the T_t^t component of 𝔗_μν, namely 𝔗_t^t≡ -ρ(𝐱)= (4π)^-1 M l_0√(-∇^2) K_1(l_0√(-∇^2)) δ^(3)(x). The effective energy density can be analytically derived and reads ρ(𝐱)=3l_0^2 M/4π(|𝐱|^2+l_0^2)^5/2. For large distances, the above density quickly dies off as ∼ 1/|𝐱|^5. Conversely, at short scales |𝐱|≲ l_0, one finds the “Sea of Tranquility”, i.e., a regular quantum region characterized by creation and annihilation of virtual particles at constant, finite energy. In such a sea, gravity becomes repulsive and prevents the full collapse of matter into a singularity. With a geometric description in terms of differential line elements, the quantum fluctuations of such a sea are not visible. One can only capture the average effect, namely a local de Sitter ball around the origin, whose cosmological constant is ∼ GM/l_0^3. Local energy condition violations certify the correctness of such a scenario. After the above prelude, one can analytically solve (<ref>) and display the full metric <cit.> ds^2=-( 1-2GM r^2/ (r^2 +l_0^2)^3/2 )dt^2 +( 1-2GM r^2/ (r^2 +l_0^2)^3/2 )^-1dr^2+r^2dΩ^2. The magic of the above result is that it coincides with the Bardeen solution (<ref>), provided P⟶ l_0. This is reminiscent of the relation between the holographic metric and the Reissner-Nordström geometry (<ref>): this time, however, one can say that the Dirac string has been traded for a closed bosonic string. The general properties of the horizon structure and thermodynamics are similar to what is seen in the context of the holographic metric – see (<ref>)–(<ref>) in Sec. <ref>. Horizon extremization allows for a SCRAM phase at the end of the evaporation, making the hole a stable system from a thermodynamic viewpoint. The Hawking temperature reads T= 1/4π r_+ (1 -3 l_0^2/(r_+^2 +l_0^2)), while the entropy is S = 𝒜_+/4[ (1 -8π l_0^2/𝒜_+) √(1 +4π l_0^2/𝒜_+)+12π l_0^2/𝒜_+(arsinh√(𝒜_+/4π l_0^2) -arsinh√(2))], with 𝒜_+ = 4π r_+^2. The great advantage of the metric (<ref>) is its stability.
This is a property in marked contrast to the case of the Bardeen metric, which can be, at most, a transient state. Even if one postulates the existence of magnetic monopoles at some point in the history of the Universe <cit.>, their coupling has to be much stronger than the QED coupling <cit.>, α_m≫α_e∼ 137^-1. This would imply for the Bardeen metric a sudden decay into the Schwarzschild black hole. Charged and charged rotating regular T-duality black holes have recently been derived. The novelty is the replacement of the ring singularity with a finite-tension rotating string – for further details see <cit.>. § A SHORT GUIDE TO BLACK HOLES IN NONCOMMUTATIVE GEOMETRY We are going to present a family of black hole solutions that represents a sort of crowning achievement of the program of regular black holes in string theory. Indeed, almost 20 years after their derivation, their good properties are still unmatched. One should start by recalling that noncommutative geometry (NCG) is a field of mathematics whose goal is the study of noncommutative algebras on certain topological spaces. In physics, NCG has well-known applications. For example, at the heart of quantum mechanics there is a noncommutative geometry, i.e., the algebra of quantum operators. The idea that further physically meaningful results can be obtained from NCG, however, remained dormant at least until the 1990s. At that time, Connes proposed the study of fundamental interactions from the spectral triple principle <cit.>.[The spectral triple is made of three items: a real, associative, noncommutative algebra 𝒜, a Hilbert space ℋ, and a self-adjoint operator 𝒟 acting on it.] As a main goal, Connes aimed to construct a “quantum version” of spacetime, by establishing a relation similar to that between quantum mechanics and classical phase space <cit.> (see also <cit.> for a pedagogical introduction). The simplest way to construct a noncommutative geometry is based on the replacement of conventional coordinates with noncommutative operators, [x^i, x^j]=iθ^ij, where θ^ij is a constant, real-valued, antisymmetric D× D matrix. The above commutator implies a new kind of uncertainty, Δ x^i Δ x^j ≥1/2|θ^ij|, that can be used to improve the bad short distance behavior of fields propagating on the noncommutative geometry. To achieve this goal, one can deform field Lagrangians by introducing a suitable non-local product. For instance, a realization of the noncommutative algebra of functions is based on the Moyal product (also known as the star product or Weyl–Groenewold product) f⋆ g ≡ e^(i/2)θ^ij∂/∂ξ^i∂/∂η^j f(x+ξ)g(x+η)|_ξ=η=0, which can be used as a starting point to obtain a noncommutative field theory – for reviews see e.g. <cit.>. Probably the biggest push to the popularity of noncommutative field theory was given by its connection to string theory. Open strings ending on D-branes display a noncommutative behavior in the presence of a non-vanishing (constant) Kalb-Ramond B-field <cit.>, θ^ij∼ (2πα^')^2((g+2πα^' B)^-1 B (g-2πα^' B)^-1)^ij, where g is the metric tensor. Noncommutative gravity follows a similar procedure for the metric field, defined over the underlying noncommutative manifold. The program of noncommutative gravity is, however, still in progress. Apart from some specific examples, a consistent noncommutative version of general relativity is still missing. In addition, the existing attempts to derive noncommutative corrections to classical black hole solutions run into the general difficulty of improving curvature singularities – see <cit.>.
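As a quick consistency check (our own back-of-the-envelope verification, not part of the original derivation), both the de Sitter core and the SCRAM phase quoted above can be read off from the metric function f(r) = 1-2GMr^2/(r^2+l_0^2)^{3/2} and the quoted temperature:
\[
f(r) \simeq 1 - \frac{2GM}{l_0^{3}}\, r^{2} + \mathcal{O}(r^{4}), \qquad r \ll l_0,
\]
i.e., a regular de Sitter core with effective cosmological constant \(\Lambda_{\rm eff} = 6GM/l_0^{3} \sim GM/l_0^{3}\), while setting \(T=0\) gives the extremal configuration
\[
1 - \frac{3 l_0^{2}}{r_+^{2} + l_0^{2}} = 0 \;\Longrightarrow\; r_+ = \sqrt{2}\, l_0, \qquad M_{\rm ext} = \frac{(r_+^{2}+l_0^{2})^{3/2}}{2 G\, r_+^{2}} = \frac{3\sqrt{3}}{4}\, \frac{l_0}{G},
\]
at which the evaporation switches off.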
In 2003, the possibility of obtaining from noncommutative geometry something meaningful for black hole physics was still perceived as quite remote. This was a time that followed the “explosive” predictions about the possibility of a plentiful production of mini black holes in particle detectors <cit.>. Operations at the LHC, however, began only five years later. As a result, there was huge pressure to predict the experimental signatures of such black holes. If the terascale quantum gravity paradigm were correct, it was expected to have repercussions on mini black hole cross sections, evaporation and detection <cit.>. Given this background, there was an unconventional attempt to study noncommutative geometry stripped of all elements apart from its nonlocal character. From (<ref>) one can guess that NCG introduces Gaussian damping terms. To prove this, Smailagic and Spallucci considered the average of noncommutative operators ⟨ x^i ⟩ on states of minimal uncertainty, namely coherent states similar to those introduced by Glauber in quantum optics <cit.>. Such averages were interpreted as the closest thing to the conventional concept of a coordinate. Initial results for path integrals on the noncommutative plane led to the conclusion that Dirac delta distributions are smeared out and become Gaussian functions, whose width is controlled by the noncommutative parameter θ <cit.>.[The matrix θ^ij in (<ref>) can be written as θ^ij=θε^ij. The parameter θ has the dimension of a length squared.] The result was later formalized in terms of a nonlocal field theory formulation <cit.>. The Green's function equation (<ref>) was obtained by applying a non-local operator to the source term, namely[Here the signature is Euclidean.] δ^(D)(x-y)⟶ f_θ(x,y) = e^θ□δ^(D)(x-y). To derive a spacetime that accounts for noncommutative effects, one has to recall that the metric field can be seen as a “thermometer” that measures the average fluctuations of the manifold. From (<ref>), one can derive the effective energy density, ρ(𝐱)=M/(4πθ)^3/2 e^-|𝐱|^2/4θ, and follow the procedure presented in Sec. <ref>. There are, however, two caveats: * The resulting spacetime is an effective description that captures just a single character of NCG, i.e., non-locality.[For this reason, one speaks of a “noncommutative geometry inspired” solution. Other authors have termed it a “minimalistic approach” <cit.>.] * The matrix θ^ij is assumed to behave like a field to preserve Lorentz symmetry.[Lorentz violation associated with (<ref>) is a debated issue in the literature, e.g., see <cit.>. ] At this point, one can display the central result <cit.> ds^2=-[ 1-2GM/r γ(3/2; r^2/4θ)/Γ(3/2) ] dt^2 +[ 1-2GM/r γ(3/2; r^2/4θ)/Γ(3/2) ]^-1dr^2+r^2dΩ^2. Here γ(3/2, x)≡∫_0^x du/u u^3/2 e^-u is the lower incomplete Gamma function. It guarantees regularity of the manifold and quick convergence to the Schwarzschild metric for r≫√(θ). While the horizon structure and the thermodynamics are similar to those of the other quantum gravity improved metrics (<ref>)(<ref>), the above result has some specific features. The Gaussian function (<ref>) is a non-polynomial smearing, in agreement with what was found by Tseytlin in <cit.>. On the other hand, polynomial functions (like GUP, T-duality) can be seen as the result of a truncation of the expansion in the θ parameter <cit.>.
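For orientation (a short check of our own, not drawn from the cited references), the Gaussian profile above is simply the three-dimensional heat kernel generated by the smearing operator acting on a static point source: for a time-independent configuration \(e^{\theta\Box}\) acts as \(e^{\theta\nabla^{2}}\), and
\[
e^{\theta\nabla^{2}}\, \delta^{(3)}(\mathbf{x}) = \int \frac{d^{3}k}{(2\pi)^{3}}\, e^{-\theta k^{2}} e^{i\mathbf{k}\cdot\mathbf{x}} = \frac{1}{(4\pi\theta)^{3/2}}\, e^{-|\mathbf{x}|^{2}/4\theta},
\]
so that \(\rho(\mathbf{x}) = M\, e^{\theta\nabla^{2}}\delta^{(3)}(\mathbf{x})\) integrates to the total mass \(M\) and reduces to the point-like source \(M\delta^{(3)}(\mathbf{x})\) in the commutative limit \(\theta \to 0\). Feeding this density into the procedure of Sec. <ref> is what produces the line element displayed above.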
The above metric has been obtained also in the context of non-local gravity actions <cit.> and has been extended to the case of additional spatial dimensions <cit.>, as well as to charged <cit.> and rotating <cit.> solutions. From the emission spectra of the higher dimensional extension of (<ref>), one learns that mini black holes tend to radiate soft particles mainly on the brane. This is in marked contrast with results coming from the Schwarzschild-Tangherlini metric <cit.>. § CONCLUSIONS The very essence of the message I want to convey is the relation between particles and black holes in Fig. <ref>. It has already been noticed that strings and black holes share common properties <cit.>. In this work, however, the argument is reinforced and employed to improve classical black hole solutions. From this perspective, the regularity of black hole metrics is the natural consequence of the non-locality of particles when described in terms of strings. Another key point concerns the particle-black hole at the intersection of the curves for λ and r_g. The nature of this object is probably one of the most important topics in current research in quantum gravity. Indeed, the particle-black hole is essential to guarantee a self-complete character of gravity. Its mass and radius are related to the fundamental units of quantum gravity and string theory, along the common denominator of non-locality. This is also evident from the correspondence between the cut-offs: √(α^') (string, GUP), l_0 (T-duality, GUP), self-completeness, √(θ) (NCG). In this work, we have also mentioned some of the existing difficulties, e.g., the details of the collapse at the Planck scale and the absence of an actual “quantum manifold”. This means that the program of quantum gravity is far from being complete. It is also not clear if the predictions emerging from string theory will have experimental corroboration in the future. The ideas presented here, however, tend to support a less pessimistic scenario. Black holes could offer a testbed for fundamental physics that is an alternative to conventional experiments in high energy particle physics. §.§.§ Acknowledgments The work of P.N. has partially been supported by GNFM, Italy's National Group for Mathematical Physics. P.N. is grateful to Cosimo Bambi for the invitation to submit the present contribution to the volume “Regular Black Holes: Towards a New Paradigm of Gravitational Collapse”, Springer, Singapore. P.N. is grateful to Athanasios Tzikas for the support in drawing the picture. [EHT(2019)]EHT19PR https://www.eso.org/public/news/eso1907/ title Astronomers Capture First Image of a Black Hole, (year 2019)NoStop [Abbott et al.(2016)Abbott et al.]LIV16 author author B. P. Abbott et al. (collaboration LIGO Scientific, Virgo), 10.1103/PhysRevLett.116.061102 journal journal Phys. Rev. Lett. volume 116, pages 061102 (year 2016), http://arxiv.org/abs/1602.03837arXiv:1602.03837 [gr-qc]NoStop [Rendall(2000)]Ren00 author author A. D. Rendall, 10.12942/lrr-2000-1 journal journal Living Rev. Rel. volume 3, pages 1 (year 2000), http://arxiv.org/abs/gr-qc/0001008arXiv:gr-qc/0001008NoStop [Sotiriou and Faraoni(2010)]SoF10 author author T. P. Sotiriou and author V. Faraoni, 10.1103/RevModPhys.82.451 journal journal Rev. Mod. Phys.
volume 82, pages 451 (year 2010)NoStop [Capozziello and De Laurentis(2011)]CaD11 author author S. Capozziello and author M. De Laurentis, 10.1016/j.physrep.2011.09.003 journal journal Phys. Rept. volume 509, pages 167 (year 2011), http://arxiv.org/abs/1108.6266arXiv:1108.6266 [gr-qc]NoStop [Clifton et al.(2012)Clifton, Ferreira, Padilla, and Skordis]CFP++12 author author T. Clifton, author P. G. Ferreira, author A. Padilla, and author C. Skordis, 10.1016/j.physrep.2012.01.001 journal journal Phys. Rept. volume 513, pages 1 (year 2012), http://arxiv.org/abs/1106.2476arXiv:1106.2476 [astro-ph.CO]NoStop [Bowyer et al.(1965)Bowyer, Byram, Chubb, and Friedman]BBCF65 author author S. Bowyer, author E. T. Byram, author T. A. Chubb, and author H. Friedman, 10.1126/science.147.3656.394 journal journal Science volume 147, pages 394 (year 1965)NoStop [Fulling(1973)]Ful73 author author S. A. Fulling, 10.1103/PhysRevD.7.2850 journal journal Phys. Rev. D volume 7, pages 2850 (year 1973)NoStop [Davies(1975)]Dav75 author author P. C. W. Davies, 10.1088/0305-4470/8/4/022 journal journal J. Phys. A Math. Gen. volume 8, pages 609 (year 1975)NoStop [Unruh(1976)]Unr76 author author W. G. Unruh, 10.1103/PhysRevD.14.870 journal journal Phys. Rev. D volume 14, pages 870 (year 1976)NoStop [Hawking(1975)]Haw75 author author S. W. Hawking, 10.1007/BF02345020 journal journal Comm.Math. Phys. volume 43, pages 199 (year 1975)NoStop [Bekenstein(1973)]Bek73 author author J. D. Bekenstein, 10.1103/PhysRevD.7.2333 journal journal Phys. Rev. volume D7, pages 2333 (year 1973)NoStop [Bardeen et al.(1973)Bardeen, Carter, and Hawking]BCH73 author author J. M. Bardeen, author B. Carter, and author S. W. Hawking, 10.1007/BF01645742 journal journal Commun. Math. Phys. volume 31, pages 161 (year 1973)NoStop [Wald(1984)]Wal84 author author R. M. Wald, 10.7208/chicago/9780226870373.001.0001 title General Relativity (publisher Chicago Univ. Pr., address Chicago, USA, year 1984)NoStop [Balbinot and Barletta(1988)]BaB88 author author R. Balbinot and author A. Barletta, 10.1088/0264-9381/5/1/004 journal journal Class. Quant. Grav. volume 5, pages L11 (year 1988)NoStop [Balbinot and Poisson(1993)]BaP93 author author R. Balbinot and author E. Poisson, 10.1103/PhysRevLett.70.13 journal journal Phys. Rev. Lett. volume 70, pages 13 (year 1993)NoStop [Birrell and Davies(1984)]BiD84 author author N. D. Birrell and author P. C. W. Davies, 10.1017/CBO9780511622632 title Quantum Fields in Curved Space, Cambridge Monographs on Mathematical Physics (publisher Cambridge Univ. Press, address Cambridge, UK, year 1984)NoStop [Penrose(1965)]Pen65 author author R. Penrose, 10.1103/PhysRevLett.14.57 journal journal Phys. Rev. Lett. volume 14, pages 57 (year 1965)NoStop [Gliner(1966)]Gli66 author author E. B. Gliner, @noop journal journal Sov. J. Exp. Th. Phys. volume 22, pages 378 (year 1966)NoStop [Sakharov(1966)]Sak66 author author A. D. Sakharov, @noop journal journal Sov. Phys. JETP volume 22, pages 241 (year 1966)NoStop [Bardeen(1968)]Bar68 author author J. M. Bardeen, in @noop booktitle Proceedings of International Conference GR5 (Tbilisi, USSR) (year 1968) p. pages 174NoStop [Ayon-Beato and Garcia(2000)]AyG00 author author E. Ayon-Beato and author A. Garcia, 10.1016/S0370-2693(00)01125-4 journal journal Phys. Lett. volume B493, pages 149 (year 2000), http://arxiv.org/abs/gr-qc/0009077arXiv:gr-qc/0009077 [gr-qc]NoStop [Dymnikova(1992)]Dym92 author author I. Dymnikova, 10.1007/BF00760226 journal journal Gen. Rel. Grav. 
volume 24, pages 235 (year 1992)NoStop [Ayon-Beato and Garcia(1999a)]AyG99a author author E. Ayon-Beato and author A. Garcia, 10.1016/S0370-2693(99)01038-2 journal journal Phys. Lett. volume B464, pages 25 (year 1999a), http://arxiv.org/abs/hep-th/9911174arXiv:hep-th/9911174 [hep-th]NoStop [Ayon-Beato and Garcia(1999b)]AyG99b author author E. Ayon-Beato and author A. Garcia, 10.1023/A:1026640911319 journal journal Gen. Rel. Grav. volume 31, pages 629 (year 1999b), http://arxiv.org/abs/gr-qc/9911084arXiv:gr-qc/9911084 [gr-qc]NoStop [Ayon-Beato and Garcia(1998)]AyG99c author author E. Ayon-Beato and author A. Garcia, 10.1103/PhysRevLett.80.5056 journal journal Phys. Rev. Lett. volume 80, pages 5056 (year 1998), http://arxiv.org/abs/gr-qc/9911046arXiv:gr-qc/9911046 [gr-qc]NoStop [Bronnikov(2001)]Bro01 author author K. A. Bronnikov, 10.1103/PhysRevD.63.044005 journal journal Phys. Rev. volume D63, pages 044005 (year 2001), http://arxiv.org/abs/gr-qc/0006014arXiv:gr-qc/0006014 [gr-qc]NoStop [Mbonye and Kazanas(2005)]Mbo05 author author M. R. Mbonye and author D. Kazanas, 10.1103/PhysRevD.72.024016 journal journal Phys. Rev. D volume 72, pages 024016 (year 2005), http://arxiv.org/abs/gr-qc/0506111arXiv:gr-qc/0506111NoStop [Hayward(2006)]Hay06 author author S. A. Hayward, 10.1103/PhysRevLett.96.031103 journal journal Phys. Rev. Lett. volume 96, pages 031103 (year 2006), http://arxiv.org/abs/gr-qc/0506126arXiv:gr-qc/0506126 [gr-qc]NoStop [Dymnikova(2023)]Dym23 author author I. Dymnikova, @noop title Regular rotating black holes and solitons with the de Sitter/phantom interiors, (year 2023), note contribution to the volume “Regular Black Holes: Towards a New Paradigm of Gravitational Collapse”, Springer, SingaporeNoStop [Bronnikov(2023)]Bro23 author author K. A. Bronnikov, @noop title Regular black holes sourced by nonlinear electrodynamics, (year 2023), note contribution to the volume “Regular Black Holes: Towards a New Paradigm of Gravitational Collapse”, Springer, SingaporeNoStop [Ansoldi(2008)]Ans08 author author S. Ansoldi, in https://inspirehep.net/record/778724/files/arXiv:0802.0330.pdf booktitle Conference on Black Holes and Naked Singularities Milan, Italy, May 10-12, 2007 (year 2008) http://arxiv.org/abs/0802.0330arXiv:0802.0330 [gr-qc]NoStop [Nicolai(2014)]Nicolai13 author author H. Nicolai, title Quantum Gravity: the view from particle physics, in 10.1007/978-3-319-06349-2_18 booktitle General Relativity, Cosmology and Astrophysics: Perspectives 100 years after Einstein's stay in Prague, series Fundam. Theor. Phys., Vol. volume 177, editor edited by editor J. Bičák and editor T. Ledvinka (publisher Springer International Publishing, address Switzerland, year 2014) pp. pages 369–387, http://arxiv.org/abs/1301.5481arXiv:1301.5481 [gr-qc]NoStop [Maldacena(1999)]Mal99 author author J. M. Maldacena, 10.1023/A:1026654312961, 10.4310/ATMP.1998.v2.n2.a1 journal journal Int. J. Theor. Phys. volume 38, pages 1113 (year 1999), http://arxiv.org/abs/hep-th/9711200arXiv:hep-th/9711200 [hep-th]NoStop [Stelle(1998)]Ste98 author author K. S. Stelle, in @noop booktitle ICTP Summer School in High-energy Physics and Cosmology (year 1998) http://arxiv.org/abs/hep-th/9803116arXiv:hep-th/9803116NoStop [Antoniadis et al.(1998)Antoniadis, Arkani-Hamed, Dimopoulos, and Dvali]AAD+98 author author I. Antoniadis, author N. Arkani-Hamed, author S. Dimopoulos, and author G. Dvali, 10.1016/S0370-2693(98)00860-0 journal journal Phys. Lett. 
B volume 436, pages 257 (year 1998), http://arxiv.org/abs/hep-ph/9804398arXiv:hep-ph/9804398NoStop [Arkani-Hamed et al.(1998)Arkani-Hamed, Dimopoulos, and Dvali]ADD98 author author N. Arkani-Hamed, author S. Dimopoulos, and author G. Dvali, 10.1016/S0370-2693(98)00466-3 journal journal Phys. Lett. B volume 429, pages 263 (year 1998), http://arxiv.org/abs/hep-ph/9803315arXiv:hep-ph/9803315NoStop [Arkani-Hamed et al.(1999)Arkani-Hamed, Dimopoulos, and Dvali]ADD99 author author N. Arkani-Hamed, author S. Dimopoulos, and author G. Dvali, 10.1103/PhysRevD.59.086004 journal journal Phys. Rev. D volume 59, pages 086004 (year 1999), http://arxiv.org/abs/hep-ph/9807344arXiv:hep-ph/9807344NoStop [Randall and Sundrum(1999a)]RaS99a author author L. Randall and author R. Sundrum, 10.1103/PhysRevLett.83.3370 journal journal Phys. Rev. Lett. volume 83, pages 3370 (year 1999a), http://arxiv.org/abs/hep-ph/9905221arXiv:hep-ph/9905221NoStop [Randall and Sundrum(1999b)]RaS99b author author L. Randall and author R. Sundrum, 10.1103/PhysRevLett.83.4690 journal journal Phys. Rev. Lett. volume 83, pages 4690 (year 1999b), http://arxiv.org/abs/hep-th/9906064arXiv:hep-th/9906064NoStop [Gogberashvili(2000)]Gog98a author author M. Gogberashvili, 10.1209/epl/i2000-00162-1 journal journal Europhys. Lett. volume 49, pages 396 (year 2000), http://arxiv.org/abs/hep-ph/9812365arXiv:hep-ph/9812365NoStop [Gogberashvili(2002)]Gog98b author author M. Gogberashvili, 10.1142/S0218271802002992 journal journal Int. J. Mod. Phys. D volume 11, pages 1635 (year 2002), http://arxiv.org/abs/hep-ph/9812296arXiv:hep-ph/9812296NoStop [Gogberashvili(1999)]Gog99 author author M. Gogberashvili, 10.1142/S021773239900208X journal journal Mod. Phys. Lett. A volume 14, pages 2025 (year 1999), http://arxiv.org/abs/hep-ph/9904383arXiv:hep-ph/9904383NoStop [Banks and Fischler(1999)]BaF99 author author T. Banks and author W. Fischler, @noop title A Model for high-energy scattering in quantum gravity, (year 1999), note unpublished paper, http://arxiv.org/abs/hep-th/9906038arXiv:hep-th/9906038NoStop [Mathur(2005)]Mat05 author author S. D. Mathur, 10.1002/prop.200410203 journal journal Fortsch. Phys. volume 53, pages 793 (year 2005), http://arxiv.org/abs/hep-th/0502050arXiv:hep-th/0502050NoStop [Vafa(2005)]Vaf05 author author C. Vafa, @noop title The String landscape and the swampland, (year 2005), note Based on talks given at the Einstein Symposium in Alexandria, at the 2005 Simons Workshop in Mathematics and Physics, and the talk to have been presented at Strings 2005., http://arxiv.org/abs/hep-th/0509212arXiv:hep-th/0509212NoStop [Amati et al.(1987)Amati, Ciafaloni, and Veneziano]ACV87 author author D. Amati, author M. Ciafaloni, and author G. Veneziano, 10.1016/0370-2693(87)90346-7 journal journal Phys. Lett. B volume 197, pages 81 (year 1987)NoStop [Amati et al.(1988)Amati, Ciafaloni, and Veneziano]ACV88 author author D. Amati, author M. Ciafaloni, and author G. Veneziano, 10.1142/S0217751X88000710 journal journal Int. J. Mod. Phys. A volume 3, pages 1615 (year 1988)NoStop [Amati et al.(1989)Amati, Ciafaloni, and Veneziano]ACV89 author author D. Amati, author M. Ciafaloni, and author G. Veneziano, 10.1016/0370-2693(89)91366-X journal journal Phys. Lett. B volume 216, pages 41 (year 1989)NoStop [Dvali et al.(2011a)Dvali, Folkerts, and Germani]DFG11 author author G. Dvali, author S. Folkerts, and author C. Germani, 10.1103/PhysRevD.84.024039 journal journal Phys. Rev. 
D volume 84, pages 024039 (year 2011a), http://arxiv.org/abs/1006.0984arXiv:1006.0984 [hep-th]NoStop [Dvali et al.(2011b)Dvali, Giudice, Gomez, and Kehagias]DGGK11 author author G. Dvali, author G. F. Giudice, author C. Gomez, and author A. Kehagias, 10.1007/JHEP08(2011)108 journal journal JHEP volume 08, pages 108 (year 2011b), http://arxiv.org/abs/1010.1415arXiv:1010.1415 [hep-ph]NoStop [Aurilia and Spallucci(2013a)]AuS13 author author A. Aurilia and author E. Spallucci, 10.1155/2013/531696 journal journal Adv. High Energy Phys. volume 2013, pages 531696 (year 2013a), http://arxiv.org/abs/1309.7741arXiv:1309.7741 [hep-th]NoStop [Garay(1995)]Gar95 author author L. J. Garay, 10.1142/S0217751X95000085 journal journal Int. J. Mod. Phys. A volume 10, pages 145 (year 1995), http://arxiv.org/abs/gr-qc/9403008arXiv:gr-qc/9403008NoStop [Aurilia and Spallucci(2013b)]AuS02 author author A. Aurilia and author E. Spallucci, @noop title Planck's uncertainty principle and the saturation of Lorentz boosts by Planckian black holes, (year 2013b), note unpublished essay submitted to the Gravity Research Foundation for the 2002-03 competition., http://arxiv.org/abs/1309.7186arXiv:1309.7186NoStop [Dvali and Gomez(2010)]DvG10 author author G. Dvali and author C. Gomez, @noop title Self-Completeness of Einstein Gravity, (year 2010), note unpublished paper, http://arxiv.org/abs/1005.3497arXiv:1005.3497 [hep-th]NoStop [Adler(2010)]Adl10 author author R. J. Adler, 10.1119/1.3439650 journal journal Am. J. Phys. volume 78, pages 925 (year 2010), http://arxiv.org/abs/1001.1205arXiv:1001.1205 [gr-qc]NoStop [Carr(2016)]Car13 author author B. J. Carr, in 10.1007/978-3-319-20046-0_19 booktitle 1st Karl Schwarzschild Meeting on Gravitational Physics, series Springer Proc. Phys., Vol. volume 170, editor edited by editor P. Nicolini, editor M. Kaminski, editor J. Mureika, and editor M. Bleicher (publisher Springer International Publishing, address Switzerland, year 2016) pp. pages 159–167, http://arxiv.org/abs/1402.1427arXiv:1402.1427 [gr-qc]NoStop [Padmanabhan(2020)]Pad20 author author T. Padmanabhan, 10.1016/j.physletb.2020.135774 journal journal Phys. Lett. B volume 809, pages 135774 (year 2020)NoStop [Carr et al.(2023)Carr, Mureika, and Nicolini]CMN23 author author B. Carr, author J. Mureika, and author P. Nicolini, @noop title Elementary Particles as Black Holes: Linking Experimental Tests in the Microscopic and Macroscopic Regimes , (year 2023), note in preparationNoStop [Carr et al.(2015)Carr, Mureika, and Nicolini]CMN15 author author B. J. Carr, author J. Mureika, and author P. Nicolini, 10.1007/JHEP07(2015)052 journal journal JHEP volume 07, pages 052 (year 2015), http://arxiv.org/abs/1504.07637arXiv:1504.07637 [gr-qc]NoStop [Carr et al.(2020)Carr, Mentzer, Mureika, and Nicolini]CMMN20 author author B. Carr, author H. Mentzer, author J. Mureika, and author P. Nicolini, 10.1140/epjc/s10052-020-08706-0 journal journal Eur. Phys. J. C volume 80, pages 1166 (year 2020), http://arxiv.org/abs/2006.04892arXiv:2006.04892 [gr-qc]NoStop [Mureika and Nicolini(2013)]MuN13 author author J. Mureika and author P. Nicolini, 10.1140/epjp/i2013-13078-0 journal journal Eur. Phys. J. Plus volume 128, pages 78 (year 2013), http://arxiv.org/abs/1206.4696arXiv:1206.4696 [hep-th]NoStop [Knipfer et al.(2019)Knipfer, Köppel, Mureika, and Nicolini]KKM+19 author author M. Knipfer, author S. Köppel, author J. Mureika, and author P. 
Nicolini, 10.1088/1475-7516/2019/08/008 journal journal JCAP volume 08, pages 008 (year 2019), http://arxiv.org/abs/1905.03233arXiv:1905.03233 [gr-qc]NoStop [Nicolini(2018)]Nic18 author author P. Nicolini, 10.1016/j.physletb.2018.01.013 journal journal Phys. Lett. volume B778, pages 88 (year 2018), http://arxiv.org/abs/1712.05062arXiv:1712.05062 [gr-qc]NoStop [Hawking(1974)]Haw74 author author S. W. Hawking, 10.1038/248030a0 journal journal Nature volume 248, pages 30 (year 1974)NoStop [Hawking(1971)]Haw71 author author S. Hawking, @noop journal journal Mon. Not. Roy. Astron. Soc. volume 152, pages 75 (year 1971)NoStop [Carr and Hawking(1974)]CaH74 author author B. J. Carr and author S. W. Hawking, @noop journal journal Mon. Not. Roy. Astron. Soc. volume 168, pages 399 (year 1974)NoStop [Bousso and Hawking(1995)]BoH95 author author R. Bousso and author S. W. Hawking, 10.1103/PhysRevD.52.5659 journal journal Phys. Rev. volume D52, pages 5659 (year 1995), http://arxiv.org/abs/gr-qc/9506047arXiv:gr-qc/9506047 [gr-qc]NoStop [Nicolini(2009)]Nic09 author author P. Nicolini, 10.1142/S0217751X09043353 journal journal Int. J. Mod. Phys. volume A24, pages 1229 (year 2009), http://arxiv.org/abs/0807.1939arXiv:0807.1939 [hep-th]NoStop [Denardo and Spallucci(1978)]DeS78 author author G. Denardo and author E. Spallucci, 10.1007/BF02726800 journal journal Nuovo Cim. B volume 44, pages 381 (year 1978)NoStop [Gibbons(1975)]Gib75 author author G. W. Gibbons, 10.1007/BF01609829 journal journal Commun. Math. Phys. volume 44, pages 245 (year 1975)NoStop [Page(2006)]Pag06 author author D. N. Page, 10.1086/508858 journal journal Astrophys. J. volume 653, pages 1400 (year 2006), http://arxiv.org/abs/astro-ph/0610340arXiv:astro-ph/0610340NoStop [Nicolini and Spallucci(2014)]NiS14 author author P. Nicolini and author E. Spallucci, 10.1155/2014/805684 journal journal Adv. High Energy Phys. volume 2014, pages 805684 (year 2014), http://arxiv.org/abs/1210.0015arXiv:1210.0015 [hep-th]NoStop [Dvali and Gomez(2012)]DvG12 author author G. Dvali and author C. Gomez, @noop title Black Hole Macro-Quantumness, (year 2012), note unpublished paper, http://arxiv.org/abs/1212.0765arXiv:1212.0765 [hep-th]NoStop [Dvali and Gomez(2013a)]DvG13 author author G. Dvali and author C. Gomez, 10.1002/prop.201300001 journal journal Fortsch. Phys. volume 61, pages 742 (year 2013a), http://arxiv.org/abs/1112.3359arXiv:1112.3359 [hep-th]NoStop [Dvali and Gomez(2013b)]DvG13+ author author G. Dvali and author C. Gomez, 10.1016/j.physletb.2013.01.020 journal journal Phys. Lett. volume B719, pages 419 (year 2013b), http://arxiv.org/abs/1203.6575arXiv:1203.6575 [hep-th]NoStop [Ansoldi et al.(1999)Ansoldi, Aurilia, and Spallucci]AAS99a author author S. Ansoldi, author A. Aurilia, and author E. Spallucci, 10.1016/S0960-0779(98)00115-5 journal journal Chaos Solitons Fractals volume 10, pages 197 (year 1999), http://arxiv.org/abs/hep-th/9803229arXiv:hep-th/9803229NoStop [Nicolini(2022)]Nic22 author author P. Nicolini, 10.1007/s10714-022-02995-4 journal journal Gen. Rel. Grav. volume 54, pages 106 (year 2022), http://arxiv.org/abs/2208.05390arXiv:2208.05390 [hep-th]NoStop [Padmanabhan(1997)]Pad97 author author T. Padmanabhan, 10.1103/PhysRevLett.78.1854 journal journal Phys. Rev. Lett. volume 78, pages 1854 (year 1997), http://arxiv.org/abs/hep-th/9608182arXiv:hep-th/9608182NoStop [Padmanabhan(1998)]Pad98 author author T. Padmanabhan, 10.1103/PhysRevD.57.6206 journal journal Phys. Rev. 
D volume 57, pages 6206 (year 1998)NoStop [Smailagic et al.(2003)Smailagic, Spallucci, and Padmanabhan]SSP03 author author A. Smailagic, author E. Spallucci, and author T. Padmanabhan, @noop title String theory T duality and the zero point length of space-time, (year 2003), note unpublished paper, http://arxiv.org/abs/hep-th/0308122arXiv:hep-th/0308122NoStop [Spallucci and Fontanini(2005)]SpF05 author author E. Spallucci and author M. Fontanini, title Zero-point length, extra-dimensions and string T-duality, in @noop booktitle New Developments in String Theory Research, editor edited by editor S. A. Grece (publisher Nova Publishers, year 2005)NoStop [Fontanini et al.(2006)Fontanini, Spallucci, and Padmanabhan]FSP06 author author M. Fontanini, author E. Spallucci, and author T. Padmanabhan, 10.1016/j.physletb.2005.12.039 journal journal Phys. Lett. B volume 633, pages 627 (year 2006), http://arxiv.org/abs/hep-th/0509090arXiv:hep-th/0509090NoStop [Krasnikov(1987)]Kra87 author author N. V. Krasnikov, 10.1007/BF01017588 journal journal Theor. Math. Phys. volume 73, pages 1184 (year 1987)NoStop [Tomboulis(1997)]Tom97 author author E. T. Tomboulis, @noop title Superrenormalizable gauge and gravitational theories, (year 1997), note unpublished paper, http://arxiv.org/abs/hep-th/9702146arXiv:hep-th/9702146NoStop [Modesto(2012)]Mod12 author author L. Modesto, 10.1103/PhysRevD.86.044005 journal journal Phys. Rev. D volume 86, pages 044005 (year 2012), http://arxiv.org/abs/1107.2403arXiv:1107.2403 [hep-th]NoStop [Biswas et al.(2012)Biswas, Gerwick, Koivisto, and Mazumdar]BGKM12 author author T. Biswas, author E. Gerwick, author T. Koivisto, and author A. Mazumdar, 10.1103/PhysRevLett.108.031101 journal journal Phys. Rev. Lett. volume 108, pages 031101 (year 2012), http://arxiv.org/abs/1110.5249arXiv:1110.5249 [gr-qc]NoStop [Barvinsky(2003)]Bar03 author author A. O. Barvinsky, 10.1016/j.physletb.2003.08.055 journal journal Phys. Lett. B volume 572, pages 109 (year 2003), http://arxiv.org/abs/hep-th/0304229arXiv:hep-th/0304229NoStop [Hamber and Williams(2005)]HaW05 author author H. W. Hamber and author R. M. Williams, 10.1103/PhysRevD.72.044026 journal journal Phys. Rev. D volume 72, pages 044026 (year 2005), http://arxiv.org/abs/hep-th/0507017arXiv:hep-th/0507017NoStop [Moffat(2011)]Mof10 author author J. W. Moffat, 10.1140/epjp/i2011-11043-7 journal journal Eur. Phys. J. Plus volume 126, pages 43 (year 2011), http://arxiv.org/abs/1008.2482arXiv:1008.2482 [gr-qc]NoStop [Modesto et al.(2011)Modesto, Moffat, and Nicolini]MMN11 author author L. Modesto, author J. W. Moffat, and author P. Nicolini, 10.1016/j.physletb.2010.11.046 journal journal Phys. Lett. volume B695, pages 397 (year 2011), http://arxiv.org/abs/1010.0680arXiv:1010.0680 [gr-qc]NoStop [Giacchini and de Paula Netto(2023)]GiN23 author author B. L. Giacchini and author T. de Paula Netto, @noop title Regular black holes from higher-derivative effective delta sources, (year 2023), note contribution to the volume “Regular Black Holes: Towards a New Paradigm of Gravitational Collapse”, Springer, SingaporeNoStop [Gaete and Nicolini(2022)]GaN22 author author P. Gaete and author P. Nicolini, 10.1016/j.physletb.2022.137100 journal journal Phys. Lett. B volume 829, pages 137100 (year 2022), http://arxiv.org/abs/2202.09311arXiv:2202.09311 [hep-th]NoStop [Nicolini et al.(2019)Nicolini, Spallucci, and Wondrak]NSW19 author author P. Nicolini, author E. Spallucci, and author M. F. Wondrak, 10.1016/j.physletb.2019.134888 journal journal Phys. Lett. 
B volume 797, pages 134888 (year 2019), http://arxiv.org/abs/1902.11242arXiv:1902.11242 [gr-qc]NoStop [Preskill(1979)]Pre79 author author J. Preskill, 10.1103/PhysRevLett.43.1365 journal journal Phys. Rev. Lett. volume 43, pages 1365 (year 1979)NoStop [Preskill(1984)]Pre84 author author J. Preskill, 10.1146/annurev.ns.34.120184.002333 journal journal Ann. Rev. Nucl. Part. Sci. volume 34, pages 461 (year 1984)NoStop [Gaete et al.(2022)Gaete, Jusufi, and Nicolini]GJN22 author author P. Gaete, author K. Jusufi, and author P. Nicolini, 10.1016/j.physletb.2022.137546 journal journal Phys. Lett. B volume 835, pages 137546 (year 2022), http://arxiv.org/abs/2205.15441arXiv:2205.15441 [hep-th]NoStop [Connes(1995)]Con95 author author A. Connes, 10.1063/1.531241 journal journal J. Math. Phys. volume 36, pages 6194 (year 1995)NoStop [Connes(1996)]Con96 author author A. Connes, 10.1007/BF02506388 journal journal Commun. Math. Phys. volume 182, pages 155 (year 1996), http://arxiv.org/abs/hep-th/9603053arXiv:hep-th/9603053NoStop [Schucker(2005)]Sch05 author author T. Schucker, 10.1007/978-3-540-31532-2_6 journal journal Lect. Notes Phys. volume 659, pages 285 (year 2005), http://arxiv.org/abs/hep-th/0111236arXiv:hep-th/0111236NoStop [Szabo(2003)]Sza01 author author R. J. Szabo, 10.1016/S0370-1573(03)00059-0 journal journal Phys. Rept. volume 378, pages 207 (year 2003), http://arxiv.org/abs/hep-th/0109162arXiv:hep-th/0109162NoStop [Douglas and Nekrasov(2001)]DoN01 author author M. R. Douglas and author N. A. Nekrasov, 10.1103/RevModPhys.73.977 journal journal Rev. Mod. Phys. volume 73, pages 977 (year 2001), http://arxiv.org/abs/hep-th/0106048arXiv:hep-th/0106048NoStop [Seiberg and Witten(1999)]SeW99 author author N. Seiberg and author E. Witten, 10.1088/1126-6708/1999/09/032 journal journal JHEP volume 09, pages 032 (year 1999), http://arxiv.org/abs/hep-th/9908142arXiv:hep-th/9908142NoStop [Dimopoulos and Landsberg(2001)]DiL01 author author S. Dimopoulos and author G. L. Landsberg, 10.1103/PhysRevLett.87.161602 journal journal Phys. Rev. Lett. volume 87, pages 161602 (year 2001), http://arxiv.org/abs/hep-ph/0106295arXiv:hep-ph/0106295NoStop [Giddings and Thomas(2002)]GiT02 author author S. B. Giddings and author S. D. Thomas, 10.1103/PhysRevD.65.056010 journal journal Phys. Rev. D volume 65, pages 056010 (year 2002), http://arxiv.org/abs/hep-ph/0106219arXiv:hep-ph/0106219NoStop [Mureika et al.(2012)Mureika, Nicolini, and Spallucci]MNS12 author author J. Mureika, author P. Nicolini, and author E. Spallucci, 10.1103/PhysRevD.85.106007 journal journal Phys. Rev. D volume 85, pages 106007 (year 2012), http://arxiv.org/abs/1111.5830arXiv:1111.5830 [hep-ph]NoStop [Nicolini et al.(2015)Nicolini, Mureika, Spallucci, Winstanley, and Bleicher]NMSW15 author author P. Nicolini, author J. Mureika, author E. Spallucci, author E. Winstanley, and author M. Bleicher, in 10.1142/9789814623995_0478 booktitle 13th Marcel Grossmann Meeting on Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories (year 2015) pp. pages 2495–2497, http://arxiv.org/abs/1302.2640arXiv:1302.2640 [hep-th]NoStop [Glauber(1963)]Gla63 author author R. J. Glauber, 10.1103/PhysRev.131.2766 journal journal Phys. Rev. volume 131, pages 2766 (year 1963)NoStop [Smailagic and Spallucci(2003a)]SmSp03 author author A. Smailagic and author E. Spallucci, 10.1088/0305-4470/36/39/103 journal journal J. Phys. 
volume A36, pages L517 (year 2003a), http://arxiv.org/abs/hep-th/0308193arXiv:hep-th/0308193 [hep-th]NoStop [Smailagic and Spallucci(2003b)]SmSp03b author author A. Smailagic and author E. Spallucci, 10.1088/0305-4470/36/33/101 journal journal J. Phys. A volume 36, pages L467 (year 2003b), http://arxiv.org/abs/hep-th/0307217arXiv:hep-th/0307217NoStop [Spallucci et al.(2006)Spallucci, Smailagic, and Nicolini]SSN06 author author E. Spallucci, author A. Smailagic, and author P. Nicolini, 10.1103/PhysRevD.73.084004 journal journal Phys. Rev. D volume 73, pages 084004 (year 2006), http://arxiv.org/abs/hep-th/0604094arXiv:hep-th/0604094NoStop [Vassilevich(2010)]Vas09 author author D. V. Vassilevich, in 10.1142/9789814277839_0017 booktitle Fundamental Interactions: A Memorial Volume for Wolfgang Kummer, editor edited by editor D. Grumiller, editor A. Rebhan, and editor D. Vassilevich (publisher World Scientific, address Singapore, year 2010) pp. pages 293–302, http://arxiv.org/abs/0902.07670902.0767NoStop [Carroll et al.(2001)Carroll, Harvey, Kostelecky, Lane, and Okamoto]CHJK01 author author S. M. Carroll, author J. A. Harvey, author V. A. Kostelecky, author C. D. Lane, and author T. Okamoto, 10.1103/PhysRevLett.87.141601 journal journal Phys. Rev. Lett. volume 87, pages 141601 (year 2001), http://arxiv.org/abs/hep-th/0105082arXiv:hep-th/0105082NoStop [Carlson et al.(2002)Carlson, Carone, and Zobin]CCZ02 author author C. E. Carlson, author C. D. Carone, and author N. Zobin, 10.1103/PhysRevD.66.075001 journal journal Phys. Rev. D volume 66, pages 075001 (year 2002), http://arxiv.org/abs/hep-th/0206035arXiv:hep-th/0206035NoStop [Morita(2003)]Mor03 author author K. Morita, 10.1143/PTP.108.1099 journal journal Prog. Theor. Phys. volume 108, pages 1099 (year 2003), http://arxiv.org/abs/hep-th/0209234arXiv:hep-th/0209234NoStop [Nicolini et al.(2006)Nicolini, Smailagic, and Spallucci]NSS06 author author P. Nicolini, author A. Smailagic, and author E. Spallucci, 10.1016/j.physletb.2005.11.004 journal journal Phys. Lett. volume B632, pages 547 (year 2006), http://arxiv.org/abs/gr-qc/0510112arXiv:gr-qc/0510112 [gr-qc]NoStop [Tseytlin(1995)]Tse95 author author A. A. Tseytlin, 10.1016/0370-2693(95)01228-7 journal journal Phys. Lett. B volume 363, pages 223 (year 1995), http://arxiv.org/abs/hep-th/9509050arXiv:hep-th/9509050NoStop [Kober and Nicolini(2010)]KoN10 author author M. Kober and author P. Nicolini, 10.1088/0264-9381/27/24/245024 journal journal Class. Quant. Grav. volume 27, pages 245024 (year 2010), http://arxiv.org/abs/1005.3293arXiv:1005.3293 [hep-th]NoStop [Rizzo(2006)]Riz06 author author T. G. Rizzo, 10.1088/1126-6708/2006/09/021 journal journal JHEP volume 09, pages 021 (year 2006), http://arxiv.org/abs/hep-ph/0606051arXiv:hep-ph/0606051 [hep-ph]NoStop [Spallucci et al.(2009)Spallucci, Smailagic, and Nicolini]SSN09 author author E. Spallucci, author A. Smailagic, and author P. Nicolini, 10.1016/j.physletb.2008.11.030 journal journal Phys. Lett. volume B670, pages 449 (year 2009), http://arxiv.org/abs/0801.3519arXiv:0801.3519 [hep-th]NoStop [Ansoldi et al.(2007)Ansoldi, Nicolini, Smailagic, and Spallucci]ANS++07 author author S. Ansoldi, author P. Nicolini, author A. Smailagic, and author E. Spallucci, 10.1016/j.physletb.2006.12.020 journal journal Phys. Lett. volume B645, pages 261 (year 2007), http://arxiv.org/abs/gr-qc/0612035arXiv:gr-qc/0612035 [gr-qc]NoStop [Smailagic and Spallucci(2010)]SmS10 author author A. Smailagic and author E. 
Spallucci, 10.1016/j.physletb.2010.03.075 journal journal Phys. Lett. volume B688, pages 82 (year 2010), http://arxiv.org/abs/1003.3918arXiv:1003.3918 [hep-th]NoStop [Modesto and Nicolini(2010)]MoN+10 author author L. Modesto and author P. Nicolini, 10.1103/PhysRevD.82.104035 journal journal Phys. Rev. volume D82, pages 104035 (year 2010), http://arxiv.org/abs/1005.5605arXiv:1005.5605 [gr-qc]NoStop [Nicolini and Winstanley(2011)]NiW11 author author P. Nicolini and author E. Winstanley, 10.1007/JHEP11(2011)075 journal journal JHEP volume 11, pages 075 (year 2011), http://arxiv.org/abs/1108.4419arXiv:1108.4419 [hep-ph]NoStop ['t Hooft(1990)]tHo90 author author G. 't Hooft, 10.1016/0550-3213(90)90174-C journal journal Nucl. Phys. B volume 335, pages 138 (year 1990)NoStop
http://arxiv.org/abs/2306.09028v1
20230615104016
A simplest modular $S_3$ model for leptons
[ "Davide Meloni", "Matteo Parriciatu" ]
hep-ph
[ "hep-ph", "hep-th" ]
http://arxiv.org/abs/2306.12168v1
20230621104313
Decisions & Disruptions 2: Decide Harder
[ "Benjamin Shreeve", "Joseph Gardiner", "Joseph Hallett", "David Humphries", "Awais Rashid" ]
cs.CR
[ "cs.CR", "68M25" ]
Decisions & Disruptions 2: Decide Harder
A custom cyber security incident response exercise
Benjamin Shreeve (University of Bristol), Joseph Gardiner (University of Bristol), Joseph Hallett (University of Bristol), David Humphries (City of London Police), Awais Rashid (University of Bristol)
Cyber incident response is critical to business continuity—we describe a new exercise that challenges professionals to play the role of CISO for a major financial organisation. Teams must decide how organisational team and budget resources should be deployed across EA upgrades and cyber incidents. Every choice made has an impact—some prevent attacks whilst others may trigger new ones or allow current attacks to continue. We explain how the underlying platform supports these interactions through a reactionary event mechanism that introduces events based on the current attack surface of the organisation. We explore how our platform manages to introduce randomness on top of triggered events to ensure that the exercise is not deterministic and better matches incidents in the real world. We conclude by describing next steps for the exercise and how we plan to use it in the future to better understand risk decision making. § INTRODUCTION Major cyber security incidents regularly disrupt businesses, and in extreme circumstances have even bankrupted them. We have created a major new incident response exercise to help businesses. We have worked with specialists from law enforcement and major financial organisations to create an exercise that challenges teams to handle a major, responsive cyber security incident. The aim of the exercise is to expose senior managers and incident response teams to the time, resource and political pressures they will encounter whilst handling a major crisis, while at the same time gathering granular decision-making data to inform cyber response handling research. This work builds upon a freely available previous game released under a CC-BY-NC license: D-D [<decisions-disruptions.org>]. D-D is a highly successful tabletop exercise utilised by police forces across the UK and businesses across the world. It was designed to explore how people make risk decisions around cyber-physical infrastructures. Whilst D-D has provided valuable practical and research insights <cit.>, we argue that it is limited by a deterministic mechanism that is fixed and tied to one sector. We therefore propose a new game, D-D2, which provides an extensible engine for risk decision-making exercises that incorporates randomness and more complicated threat relationships, and which can be targeted towards any industry rather than just critical national infrastructure. This paper introduces our proposed new game D-D2 and the new features it incorporates. § RELATED WORK Cyber security exercises are a popular way of raising awareness of the subject matter. A number of exercises have been developed specifically as part of university courses (e.g., <cit.>). However, such exercises tend to have a relatively narrow scope related to the content of specific university courses and they are rarely validated with or used outside of academia. Other existing exercises have been created to help raise awareness of cyber security issues in industry (e.g., <cit.>). All of these exercises are tabletop exercises and with the exception of Frey et al.
<cit.> all use card-based mechanisms. Whilst the card-based mechanisms are valuable for raising awareness, they often limit how well exercises can reflect real-world scenarios. For example, such mechanisms rely on a mixed deck of cards (or several decks) which are then drawn from at random to provide game events. As such, it is hard for such mechanisms to capture the way that events in the real world may be related, with one event causing another to occur. This is not to say they are not of value, but they often emphasise learning of specific aspects rather than emulation of scenarios. For example, Hart et al. <cit.> introduce the Riskio serious game—their exercise is aimed at non-technical participants and challenges them to consider potential threat vectors and then identify possible countermeasures. This provides a no doubt valuable learning experience, but it is not the same as exposing participants to an emulation of decision-making under pressure, which is the aim of our exercise. Hart et al. <cit.> have taken lessons learned from the development of Riskio <cit.> and used them, along with a careful analysis of a wide range of cyber security games (including D-D), to create the MOTENS design model for serious games. This model suggests that the most effective cyber security games include: Multiple modes of learning—exposing players to a wide range of cyber security aspects; Ownership and self-learning—providing a range of options to help meet learning objectives; Theory—that supports the design; Environment—creating an appropriate environment where people feel they can learn; Negotiation—moving toward a coaching and problem-based learning style; Self-learning—enabling participants to build upon their base knowledge. Frey et al. <cit.> provide a different approach—their exercise Decisions & Disruptions provides teams with a Lego representation of a hydro-electric company with a plant (operations) site and a separate office site. Teams play through 4 rounds, investing a budget each round to implement security controls on these sites. At the end of each round they discover what cyber attacks have befallen the business as a result of their choices. The game mechanism makes it clear to players that there is a direct correlation between the investment choices they have made and the events they have suffered. This is a significant improvement in terms of realism, enabling more complex attack scenarios to be developed. However, there are limitations to the mechanism created by Frey et al. <cit.>: the game mechanism has hard-coded paths through it (e.g., event A will occur if security control B has not been purchased by round 3). This means that teams cannot replay the exercise as they will always experience the same set of consequences for their actions. We seek to build on the success of D-D by developing an exercise where the players affect not only the landscape in which they play but also the consequences of their choices: an exercise where events that have occurred and choices that have been made increase the likelihood of (or even directly cause) specific events to occur, and where events in turn can change the landscape. We work closely with CISOs and CSIRTs to develop a unique, replayable exercise, inspired by D-D, that challenges teams to make cyber security decisions under the time, resource and political pressures of a series of unfolding cyber incidents.
§ THE EXERCISE The exercise challenges teams to help the CISO of a fictitious financial organisation handle a very bad cyber security week—each day of the week is represented by a 20-minute time-limited round. Each day, a series of cyber-related events will occur, which can be handled in various ways. Players have to decide which tasks the CISO's team will tackle each day to stay on top of what's happening to their company. Teams have the ability to affect the overall attack surface of the business, and therein the type and effectiveness of possible attack vectors used against the business, by updating assets through actions such as patching and staff training. How attacks (or symptoms) are handled affects whether attacks are prevented or continue, and whether new related attacks are triggered. The exercise is designed to be played by CSIRTs, CISOs and senior executives. §.§ A Reactionary Event Mechanism In order to represent the complexities of real-world decision-making, a reactionary event mechanism was developed (see figure <ref>). Teams are tasked with protecting the EA (enterprise architecture) of the business (see figure <ref>). They are able to affect the company's attack surface by investing staff hours each day into a wide range of possible upgrades (and in some cases by sacrificing profit). The attack surface is evaluated at the start of each round and used to identify which events can occur (out of a library of 120 possible events). Of those that can occur, 5 are randomly selected and added to the event list for the next round. Events can be specified to only occur given a particular set of criteria—such as a particular EA upgrade having been purchased or a previous event having occurred. Each choice that a team makes has an associated cost in terms of staff hours and company profitability. Events have been carefully designed in tandem with cyber security law enforcement officers to provide a realistic representation of the threats facing financial organisations. The range of events reflects the major MITRE ATT&CK [<https://attack.mitre.org/matrices/enterprise/>] areas highlighted as problematic through interviews with Chief Information Security Officers (CISOs) in major financial organisations. For many choices there are also consequences. These are either explicit feedback to teams as to the impact of their choices—including additional impact to hours, profitability and share price—or an in-game consequence, which can include triggering other events in the future. These performance indicators were flagged up through interviews with CISOs and board members of major financial organisations as vital indications of performance during a cyber incident <cit.>. These triggered events are then added to the event list for the next round (possibly bypassing any criteria evaluation). Events can also trigger other events to occur if they are ignored; for example, if a team decides an event is not important in one round it can still have an impact on their next round by queuing up an event to penalise their neglect. Some events are designated `on-draw' events—they affect the round as soon as they are drawn, deliberately introducing variance to the game. For example, an event may tell a team that a member of the CSIRT is ill that day; they will therefore start the round 8 hours down, with only 72 hours available to them to utilise. These `on-draw' events can force changes to hours, profitability and share price, and can even make EA upgrade choices for teams before a round starts.
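To make the round-building logic concrete, the following is a minimal sketch in Java (the language used for the game master tool described in the next section). All class, method and upgrade names are illustrative assumptions for exposition, not the tool's actual API; only the numbers (a library of 120 events, 5 drawn per round) come from the description above.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.Set;
import java.util.function.Predicate;

// Illustrative sketch only: names and structure are assumptions, not the real tool's API.
class EventCard {
    final String name;
    // Criterion evaluated against the current attack surface, e.g.
    // "occurs only if a given EA upgrade has NOT been purchased".
    final Predicate<Set<String>> criteria;

    EventCard(String name, Predicate<Set<String>> criteria) {
        this.name = name;
        this.criteria = criteria;
    }
}

class RoundBuilder {
    private static final int EVENTS_PER_ROUND = 5;
    private final Random rng = new Random();

    // Assemble the next round's event list: filter the event library against the
    // current attack surface (reduced here to the set of purchased EA upgrades),
    // draw 5 viable events at random, then append any events triggered directly
    // by earlier choices (which may bypass the criteria check).
    List<EventCard> buildRound(List<EventCard> library,
                               Set<String> purchasedUpgrades,
                               List<EventCard> triggeredByChoices) {
        List<EventCard> viable = new ArrayList<>();
        for (EventCard e : library) {
            if (e.criteria.test(purchasedUpgrades)) {
                viable.add(e);
            }
        }
        Collections.shuffle(viable, rng);
        List<EventCard> round =
            new ArrayList<>(viable.subList(0, Math.min(EVENTS_PER_ROUND, viable.size())));
        round.addAll(triggeredByChoices);
        return round;
    }
}

In this sketch, an event such as a phishing outbreak could carry the criterion upgrades -> !upgrades.contains("staff-training") (a hypothetical upgrade identifier), so that purchasing the corresponding EA upgrade removes the event from the pool of viable events in later rounds, exactly as described above.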
For example, as part of the game an event may occur where a staff member reports finding a USB stick on the ground. The CISO can decide whether to ignore it, to forensically analyse it (at a cost of time) or to destroy the device. Each decision will have impacts and could in turn lead to more events occurring. To visualise these kinds of decisions, we created an automated tool that generates attack trees from the engine's database of events and explores what happens. This gives a quick pictorial guide to the consequences of any decision in the game and the threat landscape of any particular configuration. §.§ Resources Teams have to negotiate various resource constraints. Firstly, they have to contend with the challenge that a day has a finite length—in this case a configurable 20-minute limit. If they fail to utilise their resources effectively in that time then the game automatically moves forward to the next day. The primary unit of daily resources is the number of `hours' available to the team each day—that is, how many of the staff in the CSIRT team that they are managing are available that day (the exercise assumes that there are 10 staff members, each with a maximum of 8 hours per day, 80 hours in total per day). These hours are then allocated to event/EA actions. The business's performance is represented by two broad financial metrics that are both affected by choices made and their consequences. Firstly, the team have to manage the overall long-term `projected profit' for the business—this starts at 500,000. However, it can be reduced through loss of business or fines that occur as a result of choices made. Teams can also choose to reinvest some of this projected profit into the business through certain EA upgrades or event choices which have associated financial costs. Secondly, teams have to consider the short-term performance of the business in the form of its `share price', which starts at an initial value of 100 and can be affected both positively and negatively as a result of choices made. If the players' actions result in the `projected profit' or `share price' reaching zero then the game ends. §.§ Exercise Management The exercise comprises two parts: a game master tool that tracks and evaluates the state of play and an associated physical card deck (see figure <ref>). The game master utilises a digital tool, written in Java, that reads a database containing details on the possible events (and their relationship to one another) as well as the assets and upgrades that are possible for the EA. Each event/choice and asset/upgrade is also printed to a deck of cards, providing session flexibility. §.§ Typical Play Through At the start of each day (round) the exercise interface updates to reflect the status of the business (see example in appendix <ref>). The right hand panel consists of a list of possible actions that can be undertaken to help improve the cyber security of the business's assets. The left hand panel shows a list of the most pertinent cyber events that have hit the business in the last 24 hours and that need resolving (a minimum of 5 occur each day), plus any remaining from previous rounds that have yet to be actioned in some way. The EA upgrades (see example in appendix <ref>) can be purchased at any point during a round and affect the attack surface of the game in the next round. Each event presents teams with several choices about how they might resolve it (see example in appendix <ref>).
All events by default include an `ignore this round' option, which defers making a choice for that event to the next round. All other choices are final—once a choice has been made (other than ignore), it cannot be undone. Teams work together to identify which events and EA upgrades they should dedicate resources towards in the round. As they make each choice the game provides them with instant feedback, which may result in their hours, profitability or share price being immediately hit, and which may in turn limit the choices they had planned. When the round concludes the game utilises the reactionary event mechanism logic (see section <ref> and figure <ref>) to evaluate which events will occur in the next round, either as a result of changes the team have made to the business's attack surface or because the team have directly triggered them through choices made during the round. The exercise continues this way through multiple rounds, with teams starting each round with at least 5 events to address and a starting number of hours for the day affected by `on-draw' events. The exercise concludes either when the round limit is reached (7 rounds) or when one of the failure criteria is triggered. The game has 3 failure criteria: the profitability of the business can reach zero, the share price can reach zero, or the team can have 10 or more concurrent events open. We consider a team that has failed to resolve (or has triggered) 10 or more events to have reached a point of event saturation that could never be resolved in the real world. The extent to which the consequences of event choices affect the profitability and share price of the organisation varies based on the magnitude of the event. Some consequences may have a positive impact whilst others may be severe, or even sufficient to bankrupt the organisation. § DESIGN CHOICES We have created a mechanism that can identify whether an attack is viable based not only upon the current state of the organisation's attack surface, but also on whether specific previous events have occurred and particular choices have been made. This means that the events that are presented are far more representative of real-world scenarios. We have taken this further by treating certain defensive approaches as `resistances'. Defensive mechanisms, like firewalls and antivirus, can never be 100% effective. Instead, our game mechanism captures the current success rate of these mechanisms (for example, the firewall is only 60% effective) and then uses these values, incorporating chance, to establish whether an attack that could be prevented by a firewall will occur. We also provide teams with the ability to improve these resistances by investing in specific EA upgrades—and punish negligence if they do not do so in a timely manner. In doing so we create an exercise where there is no longer a 1:1 relationship between events and defences; instead there are combinations of defences that can limit the likelihood of specific attack vectors being exploited—just like in the real world. The tool is itself also extensible. The events for a given game are taken from a database and can be rewritten for different sectors or markets. The penalties for different events and how events interlink are configurable, allowing a wide range of variations of the game to be created with relative ease. 
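To make the `resistance' idea concrete, a minimal sketch of a chance-based defence check is given below. The 60% firewall effectiveness follows the example above; the function names, the antivirus value and the event structure are assumptions for illustration, not the tool's actual implementation.

```python
import random

# Chance-based resistance check: each defence that could stop an attack vector
# is rolled against its current effectiveness, which EA upgrades can raise.

def attack_succeeds(attack, resistances):
    for defence in attack["countered_by"]:
        effectiveness = resistances.get(defence, 0.0)  # 0.0 = defence not in place
        if random.random() < effectiveness:
            return False                               # the defence caught the attack
    return True                                        # nothing stopped it

resistances = {"firewall": 0.60, "antivirus": 0.45}    # illustrative success rates
malware = {"name": "Commodity malware", "countered_by": ["firewall", "antivirus"]}
print(attack_succeeds(malware, resistances))
```

Because the check is probabilistic, two teams with identical architectures can still experience different weeks, mirroring the variance deliberately built into the exercise.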
Our design choices fulfil Hart et al.'s <cit.> MOTENS pedagogical design framework for serious cyber games: Multiple Modes of Learning: The mechanism enables participants to experience not only a range of events, but also to explore the relationship between EA and system security and attacks. Ownership and Self-Learning: The underlying platform is flexible, allowing sessions to be tailored to specific learning objectives. Theory: The overarching game principle is informed by experiential learning theory <cit.>. Environment: The exercise environment is designed so teams have more forgiving opening rounds to help them familiarise themselves with the exercise mechanics and expectations. Negotiation: The promotion of shared decision-making amongst participants and the immediacy of feedback through the reactionary event mechanism move learning from static presentation to an immersive exploratory experience. Finally, Self-Learning: the facilitation of these exercises enables participants to ask for clarifications when needed, enabling each participant to start from their own knowledge baseline. § NEXT STEPS AND FUTURE WORK The exercise is ready for testing with real-world CSIRT and executives. This stage will be used to identify UI bugs, and refine the content of the event and asset database. Once this is complete the exercise will be made freely available under a Creative Commons License for organisations to use. We will continue to work with our industry partners using the exercise to gather data around how organisations prioritise and respond to different incidents and triggers. Future work will focus on the creation of new asset and event databases for different sectors, making it possible to explore how different sectors and different technology stacks handle incident scenarios. Decisions and Disruptions has always helped organisations better understand how they make risk decisions: but now they have to decide harder. § ACKNOWLEDGMENTS We would like to thank Cyber Griffin, the City of London Police and the City of London Corporation for their funding and support. plain § EXAMPLE PHYSICAL CARDS § EXERCISE UI
http://arxiv.org/abs/2306.04017v1
20230606211725
Time-reversal switching responses in antiferromagnets
[ "Satoru Hayami", "Hiroaki Kusunose" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mes-hall" ]
^1Graduate School of Science, Hokkaido University, Sapporo 060-0810, Japan ^2Department of Physics, Meiji University, Kawasaki 214-8571, Japan ^3Quantum Research Center for Chirality, Institute for Molecular Science, Okazaki 444-8585, Japan We propose emergent time-reversal switching responses in antiferromagnets, which is triggered by an accompanying magnetic toroidal monopole, i.e., time-reversal odd scalar distinct from electric and magnetic monopoles. We show that simple collinear antiferromagnets exhibit unconventional responses to external electric and/or magnetic fields once magnetic symmetry accommodates the magnetic toroidal monopole. We specifically demonstrate that the emergence of the magnetic toroidal monopole in antiferromagnets enables us to control rotational distortion by an external magnetic field, switch vortex-type antiferromagnetic structure by an external electric field, and convert right/left-handedness in chirality by a composite electromagnetic field. We also present the symmetry conditions to induce the magnetic toroidal monopole and exhibit candidate materials including noncollinear antiferromagnets in order to stimulate experimental observations. Time-reversal switching responses in antiferromagnets Satoru Hayami^1 and Hiroaki Kusunose^2,3 July 31, 2023 ======================================================== Introduction.— Monopole is the most fundamental object in electromagnetism. An electric (magnetic) monopole Q_0 (M_0) corresponds to an elementary electric (magnetic) charge. Although the magnetic monopole as an elementary particle has never been observed so far, similar objects with the same symmetry have been found in condensed matter physics in the context of spin ice <cit.>, multiferroics <cit.>, and topological insulators <cit.>. Since the electric (magnetic) monopole is characterized by a time-reversal (𝒯) even scalar (𝒯-odd pseudoscalar) from the viewpoint of the space-time inversion symmetry, their counterparts with opposite parities can be also considered: One is an electric toroidal monopole (ETM) G_0 corresponding to the 𝒯-even pseudoscalar and the other is a magnetic toroidal monopole (MTM) T_0 corresponding to the 𝒯-odd scalar <cit.>. Recently, the former ETM has been recognized as a microscopic degree of freedom to describe the chirality <cit.>, which becomes the origin of the cross-correlation phenomena between polar and axial quantities, such as current-induced magnetization (Edelstein effect) <cit.> and electric-field-induced rotational distortion <cit.>. Meanwhile, the latter MTM has been still an enigmatic monopole, whose realization and physical nature have been unclear. In the present study, we theoretically propose the emergent MTM in antiferromagnets and elucidate electromagnetic responses driven by its ordering. We show that the MTM gives rise to a variety of cross-correlation responses between polar (axial) quantities by switching their 𝒯 parity, such as magnetic-field-induced rotational distortion, electric-field-induced spin vortex, and electromagnetic-field-induced chirality. Moreover, we show all the magnetic point groups to accommodate the MTM and exhibit candidate materials in both collinear and noncollinear antiferromagnets. We also demonstrate such physical phenomena under the MTM ordering by considering a minimal collinear antiferromagnetic model. Our results provide a guideline to search for unconventional antiferromagnets with the MTM. 
Magnetic toroidal monopole.— Magnetic toroidal (MT) multipole is characterized by a 𝒯-odd polar tensor, which shows a different spatial parity from magnetic multipole <cit.>. Among them, the dipole component, i.e., magnetic toroidal dipole (MTD) T, which is expressed as a vector product of the magnetic dipole M (or spin S) and the position vector, i.e., T∝r×M (S) [Fig. <ref>(a)], has been extensively studied, since it becomes the origin of the linear magnetoelectric effect <cit.> and nonreciprocal transport <cit.>. By using T, the MTM T_0 is expressed as T_0 = r·T. The schematic picture of T_0 is shown in Fig. <ref>(b). Although T_0 identically vanishes in the single atomic wave function owing to r⊥T <cit.>, it survives in a magnetic cluster like antiferromagnets, as discussed below. It is noted that T_0 is totally independent of the other three monopoles (Q_0, M_0, and G_0), since T_0 exhibits different spatial and 𝒯 parities from them. Cross-correlation phenomena.— T_0 corresponding to a 𝒯-odd polar scalar plays a role to convert between two polar-vector quantities with opposite 𝒯 parity. Considering that r is symmetry-equivalent to the electric dipole Q, one can find a correspondence between T_0, Q, and T from Eq. (<ref>) as T_0 ↔Q·T, where Q and T corresponds to 𝒯-even and 𝒯-odd polar vectors, respectively. Similarly, noting the relation of T∝ (Q×M), Eq. (<ref>) is rewritten as T_0 ↔G·M, where G=r×Q represents an electric toroidal dipole (ETD) corresponding to a 𝒯-even axial vector <cit.>. Thus, T_0 can also convert between two axial-vector quantities with opposite 𝒯 parity. The conversion properties among dipoles (Q, M, T, G) via T_0 are summarized in Fig. <ref>(c). The above symmetry argument indicates emergent time-reversal switching responses under the MTM ordering, which can be seen in an expansion of the free energy in terms of the electric field E and magnetic field H as F = F_0 - α_1 G·H - α_2 T·E - α_3 Q· (∇×H) - α_4 M· (∇×E) + ⋯, where α_1–α_4 are coefficients, which can be finite when the thermal average of T_0 remains. It is noted that H, E, ∇×H, and ∇×E become the conjugate fields of G, T, Q, and M, respectively, in the MTM ordering. Especially, ∇×H and ∇×E correspond to the rotational distortion in terms of the spin and charge degrees of freedom, respectively, and have the same symmetry as the electric current and time derivative of H. Thus, unusual cross-correlation responses occur under external fields; a homogeneous magnetic (electric) field gives rise to the ETD (MTD) corresponding to the vortex of Q (M) (Fig. <ref>), while an inhomogeneous magnetic (electric) field or electric current (time derivative of magnetic field) with finite rotation leads to the electric polarization (magnetization). Accordingly, one can experimentally control the rotational distortion by applying the external magnetic field and switch the vortex-type antiferromagnetic domain by the external electric field, as demonstrated below. Symmetry conditions.— Let us discuss the symmetry condition for the existence of the MTM. Since the MTM is equivalent to a 𝒯-odd scalar without spatial anisotropy, the necessary symmetry breaking is only the 𝒯 symmetry with keeping the original point group symmetry <cit.>. Among 122 magnetic point groups, 32 crystallographic point groups without 𝒯 operation satisfy this condition, as summarized in Table <ref> <cit.>. 
Moreover, we classify the above 32 point groups into 6 types according to the activation of the z-component magnetic dipole M_z, z-component of the MT dipole T_z, and magnetic monopole M_0, as shown in Table <ref>. When considering the point groups where M_z belongs to the totally symmetric irreducible representation, i.e., C_n h, S_4, C_3 i, C_ i, C_ s, C_n, and C_1 (n=2,3,4,6), one can control the MTM domain by using the magnetic field H_z. In the case of C_n v, C_ s, C_n, and C_1 with nonzero T_z, applying the electric field enables us to select the MTM domain. For O, T, D_n, C_n, and C_1 with nonzero M_0, a further cross-correlation response between polar and axial quantities, e.g., Q ↔ G and Q ↔ M, is expected like enantiomorphic point groups. Lastly, the point groups, O_ h, T_ d, T_ h, D_n h, and D_m d (m=2,3), accompany neither M_z, T_z, nor M_0, whose system exhibits a pure MTM ordering and its related physical responses discussed above. The MTM ordering can be realized by antiferromagnetic phase transitions satisfying the above symmetry condition. We exhibit candidate antiferromagnetic materials accompanying the MTM in Table <ref>, which are referred from MAGNDATA <cit.>, magnetic structures database. One can find that various materials possess the MTM irrespective of the lattice and antiferromagnetic structures, e.g., collinear magnetic structure under the tetragonal point group KMnF_3 <cit.> and noncollinear magnetic structure under the cubic point group Mn_3IrGe <cit.>. In these materials, physical phenomena characteristic of the MTM, such as the magnetic-field-induced rotational distortion and electric-field-induced spin vortex, can be expected. We show several antiferromagnetic structures to accommodate the MTM under different point groups in Supplemental Material <cit.>. Model calculations.— To demonstrate the MTM ordering in antiferromagnets and its cross-correlation coupling in Eq. (<ref>), we analyze a minimal four-orbital s-p model; the physical space spanned by four orbitals and spin includes all the dipoles (Q, G, M, T), which needs to describe physical responses in Eq. (<ref>) <cit.>. We consider a bilayer lattice structure consisting of a cuboid with eight sublattices A–H under the space group Pmmm (D^1_ 2h), as shown in Fig. <ref>(a); we set a=a'=b=b'=0.5 and c=1 (c is the bond length between sublattices A and E) for simplicity. The Hamiltonian is given by ℋ=∑_kγασγ'α'σ'c^†_kγασ ( δ_σσ'H^t + δ_γγ' H^ SOC + δ_αα'H^ M) c_kγ'α'σ', where c^(†)_kγασ represents the creation (annihilation) operator of electrons at wave vector k, sublattice γ= A–H, orbital α=s, p_x, p_y, and p_z, and spin σ. In Eq. (<ref>) H^t includes the nearest-neighbor hopping for the intra- and inter-unit cuboid. We adopt the Slater-Koster parameter for the intra-cuboid hopping parameters: for the x-bond direction, t^x for the hopping between s orbitals (α,α'=s), t^x_p for that between (p_x, p_y) orbitals (α,α'=p_x, p_y), t^x_z for that between p_z orbitals (α,α'=p_z), and t^x_sp for that between different s-(p_x, p_y) orbitals (α=s and α'=p_x, p_y and vice versa). We regard t^x=-1 as the energy unit of the model and set t^x_p=0.7, t^x_z=0.2, and t^x_sp=0.3. Similarly, we set the intra-cuboid hopping parameters along the y and z directions by multiplying 0.9 and 0.5 by that along the x direction. In addition, we set the inter-cuboid hopping parameters along the x and y directions by multiplying 0.8 by intra-cuboid ones. 
It is noted that the choice of the hopping parameters does not affect the following results qualitatively. H^ SOC in Eq. (<ref>) means the atomic spin–orbit coupling for three p orbitals with the amplitude λ=0.5. H^ M appearing in the third term in Eq. (<ref>) denotes the mean-field term to describe the antiferromagnetic ordering. We consider the collinear antiferromagnetic ordering in Fig. <ref>(b), where H^ M is explicitly given by H^ M= -h δ_γγ' p(γ) σ_z. Here, p(γ)=+1 (-1) for sublattices A, B, E, and F (C, D, G, and H), and σ_z represents the z-component Pauli matrix in spin space. We set the amplitude of antiferromagnetic molecular field as h=2 and consider the low-electron filling per site n_ e=(1/8N)⟨∑_kγασc^†_kγασc^_kγασ⟩=0.2, where N=1600^2 is the total sites and n_ e=8 represents the full filling. The eight-sublattice collinear magnetic structure in Fig. <ref>(b) satisfies the symmetry condition to accommodate the MTM; inversion, three two-fold rotation, and three mirror symmetries under the space group Pmmm remain and only the time-reversal symmetry is broken. Indeed, by closely looking into the collinear spin configuration denoted by the blue arrows in Fig. <ref>(b) on each plaquette of the cuboid, the MTD, which is defined by the vector product of spins and the position vector measured from the center of each plaquette, becomes nonzero for the sides: the outward x-component MTD emerges on the plaquettes ADHE and CBFG and the inward y-component MTD emerges on the plaquettes ACGE and DBFH, as shown by the red arrows in Fig. <ref>(b). Since the xz and yz planes are inequivalent in the orthorhombic structure, the amplitudes of the x- and y-component MTDs are different from each other. The distribution of the MTD in Fig. <ref>(b) is decomposed into the linear combination of the MTM T_0 and MT quadrupole T_v=x T_x - y T_y as shown in Fig. <ref>(c), which means that there is a net component of the MTM in the unit cuboid. In this way, the collinear antiferromagnetic structure in Fig. <ref>(b) accompanies the MTM. Similar collinear magnetic structures have been identified in materials, such as Fe_2PO_5 <cit.>, XCrO_3 (X= Sc, In, Tl, La) <cit.>, and YFeO_3 (Y= Ce, Nd, Dy) <cit.>, which indicates that these materials are the potential candidates hosting the MTM. Using the model in Eq. (<ref>), we demonstrate the cross-correlation phenomena due to the effective coupling in Eq. (<ref>). First, we discuss the magnetic-field-induced rotational distortion by introducing the Zeeman Hamiltonian ℋ^ Z coupled to spin as ℋ^ Z= -H_x∑_kγασσ'c^†_kγασσ^x_σσ' c^_kγασ'. Since the microscopic degree of freedom corresponding to the rotational distortion is the ETD, we calculate its expectation values in the atomic and cluster forms, ⟨ G^ (a)_x ⟩ and ⟨ G^ (c)_x ⟩, against the applied magnetic field H_x <cit.>. Here, G^ (a)_x is the atomic-scale definition using (l×σ)_x (l is the orbital angular momentum) and G^ (c)_x is the cluster definition consisting of the vortex structure of the local electric dipoles as shown by the orange arrows in Fig. <ref>(a); the detailed expressions are given in Supplemental Material <cit.>. As shown in the left panel of Fig. <ref>(a), both quantities become nonzero for H_x ≠ 0; their sign is reversed by reversing the magnetic-field direction. It is noted that this response coming from the interband process is non-dissipative within the linear response, which occurs in both metals and insulators. 
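The expectation values discussed here follow from a standard zero-temperature mean-field calculation, which can be sketched numerically as below. Here `bloch_hamiltonian` and `observable` are placeholders standing in for the 64×64 (8 sublattices × 4 orbitals × 2 spins) matrices defined by the model above; the k-grid size, the function names and the two-dimensional Brillouin-zone sampling are illustrative assumptions rather than the parameters actually used in the paper.

```python
import numpy as np

# Schematic evaluation of a ground-state expectation value such as <G_x^(a)>:
# diagonalize the Bloch Hamiltonian on a k-grid, occupy the lowest-energy
# states up to the target electron filling, and sum the diagonal matrix
# elements of the observable over the occupied states.

def expectation(bloch_hamiltonian, observable, nu, Hx=0.0, Ex=0.0, nk=40):
    """nu = n_e / 8 is the filling fraction (n_e = 8 corresponds to full filling)."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    energies, matels = [], []
    for kx in ks:
        for ky in ks:
            e, v = np.linalg.eigh(bloch_hamiltonian(np.array([kx, ky]), Hx, Ex))
            # <n|O|n> for every band at this k point
            o = np.real(np.einsum("in,ij,jn->n", v.conj(), observable, v))
            energies.append(e)
            matels.append(o)
    energies, matels = np.concatenate(energies), np.concatenate(matels)
    n_occ = int(round(nu * energies.size))        # zero-temperature occupation
    occupied = np.argsort(energies)[:n_occ]       # lowest-energy states
    return matels[occupied].sum() / (nk * nk)     # per unit cuboid
```

Sweeping Hx (or Ex) and plotting the result would reproduce the qualitative antisymmetric field dependence of the induced multipoles discussed in the text.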
We also discuss the magnetic-field-induced rotational distortion for noncollinear spin textures in Supplemental Material <cit.>. Next, let us consider the electric-field-induced spin vortex (the time-reversal counterpart of the previous example), which means that the MTD is induced along the external electric-field direction. We introduce the local s-p_x hybridized Hamiltonian as ℋ^ E= -E_x∑_kγσ(c^†_kγ s σ c^_kγ p_x σ+ H.c.) corresponding to the coupling between the electric dipole moment and the applied electric field. Figure <ref>(b) shows the E_x dependence of the atomic-scale MTD ⟨ T^ (a)_x ⟩ and cluster MTD ⟨ T^ (c)_x ⟩; T^ (a)_x is represented by the local imaginary s-p_x hybridization and T^ (c)_x is represented by the spin vortex, as shown in the right panel of Fig. <ref>(b) <cit.>. Similarly to the result in Fig. <ref>(a), both ⟨ T^ (a)_x ⟩ and ⟨ T^ (c)_x ⟩ becomes nonzero for E_x ≠ 0, and their sign is reversed when the sign of E_x is changed. Thus, the vortex spin configuration can be switched by applying the electric field under the MTM ordering. Similarly to the magnetic-field-induced rotational distortion, this response also arises from the non-dissipative interband process within the linear response. Furthermore, we find that the system acquires the chirality, i.e., finite average of G_0, when both H_x and E_x are applied simultaneously. We show the behaviors of atomic-scale and cluster ETMs, ⟨ G^ (a)_0 ⟩ and ⟨ G^ (c)_0 ⟩, in Fig. <ref>(c), which are the microscopic measure of chirality; the former is described by the atomic spin-dependent imaginary s-p hybridization and the latter is described by the source of the ETD flux in the cuboid <cit.>. As shown in Fig. <ref>(c), the result indicates nonzero ⟨ G^ (a)_0 ⟩ and ⟨ G^ (c)_0 ⟩ are induced by H_x=E_x ≠ 0, and their sign is reversed when the direction of either H_x or E_x is reversed. This result is consistent with the symmetry of the system in the presence of H_x and E_x; there are no inversion and mirror symmetries. It is noted that the ETM does not appear in the paramagnetic Pmmm system without T_0 under a nonconjugate field of G_0, H_x and E_x, since the time-reversal parity of H_x E_x is opposed to that of the ETM. In other words, the induction of ⟨ G_0 ⟩ by the composite field H_x E_x is one of the characteristic features of the MTM ordering. Conclusion.— We proposed the time-reversal odd scalar order parameter, i.e., the MTM, in antiferromagnets. We found that the MTM becomes a source of various time-reversal switching responses against external fields, such as magnetic-field-induced rotational distortion, electric-field-induced spin vortex, and electromagnetic-field-induced chirality, which are qualitatively different from other known multipole orderings like magnetic monopole and MTD. Furthermore, we showed the symmetry condition of the MTM as well as the candidate materials. Finally, we demonstrated the minimal model to host the MTM in collinear antiferromagnets. This research was supported by JSPS KAKENHI Grants Numbers JP21H01031, JP21H01037, JP22H04468, JP22H00101, JP22H01183, JP23K03288, JP23H04869, JP23H00091 and by JST PRESTO (JPMJPR20L8). Parts of the numerical calculations were performed in the supercomputing systems in ISSP, the University of Tokyo. apsrev
http://arxiv.org/abs/2306.05365v1
20230608170824
Improving structural damage tolerance and fracture energy via bamboo-inspired void patterns
[ "Xiaoheng Zhu", "Jiakun Liu", "Yucong Hua", "Ottman A. Tertuliano", "Jordan R. Raney" ]
physics.app-ph
[ "physics.app-ph", "cond-mat.mtrl-sci" ]
Xiaoheng Zhu^1,†, Jiakun Liu^1,†, Yucong Hua^1, Ottman A. Tertuliano^1,*, Jordan R. Raney^1,* ^*Corresponding authors. ^†These authors contributed equally to this work. ^1Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, 220 S 33rd St, Philadelphia, Pennsylvania 19104, United States Bamboo has a functionally-graded microstructure that endows it with a combination of desirable properties, such as high failure strain, high toughness, and a low density. As a result, bamboo has been widely used in load-bearing structures. In this work, we study the use of bamboo-inspired void patterns to geometrically improve the failure properties of structures made from brittle polymers. We perform finite element analysis and experiments on 3D-printed structures to quantify the effect of the shape and spatial distribution of voids on the fracture behavior. The introduction of periodic, uniformly distributed voids in notched bend specimens leads to a 15-fold increase in the work of fracture relative to solid specimens. Adding a gradient to the pattern of voids leads to a cumulative 55-fold improvement in the work of fracture. Mechanistically, the individual voids result in crack blunting, which suppresses crack initiation, while neighboring voids redistribute stresses throughout the sample to enable large deformation before failure. In addition, we conduct qualitative, low-energy impact experiments on PMMA plates with laser-cut void patterns, illustrating the broader potential for this strategy to improve damage tolerance and energy absorption in a wide range of materials systems. Keywords: Toughening Mechanisms; Architected Structures; Bioinspiration; Bamboo § INTRODUCTION Catastrophic brittle failure is encountered in many engineering contexts, ranging from transportation and infrastructure to biomedical devices, bringing with it significant societal costs <cit.>. Catastrophic failure of brittle materials can be attributed to their significant flaw sensitivity. While brittle materials can be reasonably resistant to crack initiation, they do not appreciably resist crack propagation <cit.>. Engineers have few options for avoiding catastrophic failure in these materials, such as relying on large safety factors and on well-established design principles like increasing fillet radii to avoid large stress concentration factors. Additional strategies have also been developed to improve flaw or crack tolerance, including toughening via stress-induced transformations <cit.>, microcracks in process zones <cit.>, and grain-localized macro-crack bridging <cit.>. Recent advances in additive manufacturing and multi-material fabrication have enabled additional approaches for designing composites with enhanced toughness and impact resistance <cit.>. Examples include beetle-inspired helicoidal composites <cit.>, laminated matrix materials <cit.>, and tough nacre-inspired layered oxide glasses <cit.>. However, these strategies tend to involve materials-specific optimization, since the toughness enhancements strongly depend on the intrinsic interfacial properties of the specific materials used in the composites. More recently, researchers in architected materials have begun to explore how internal geometric features can be used to improve fracture resistance in lightweight materials <cit.>. 
For example, architecting a hexagonal array of holes in aluminum alloy plates can lead to a 50% reduction in weight while maintaining the same fracture toughness as the solid material <cit.>. The layer-by-layer design freedom of 3D printing has enabled improved material failure properties by controlling micro-structures within a single material <cit.>. These approaches use internal geometry to improve fracture resistance and may be more easily generalizable than methods which rely on unique multimaterial interfaces. Nature provides a number of interesting examples of lightweight materials, such as bone and bamboo, that exhibit both high fracture toughness and strength <cit.>. In all cases, this combination of properties is the result of complex microstructures, including both multimaterial interfaces and geometric features such as voids and internal interfaces. For example, the special composition <cit.> and gradient-based void patterns seen in bamboo significantly affect its fracture behavior and energy dissipation <cit.>. The interfacial areas along the fibers and the boundaries around parenchyma cells are the preferred route for crack propagation in both radial and longitudinal directions. This plays a significant role in determining energy dissipation near the crack tip <cit.>. Beyond bamboo's complex set of composite toughening mechanism, its unique spatial gradient in void geometry can provide insight for developing void-patterning strategies that improve fracture resistance in brittle materials. In this work, inspired by the void patterns in the axi-symmetric longitudinal sections of natural bamboo (Fig. <ref>A), we investigate how gradients in the spatial arrangement and size of voids can lead to higher damage tolerance and energy absorption in brittle materials. We find that the work of fracture in brittle polymers can be significantly improved via crack blunting and stress redistribution due to spatially graded voids. Since these results depend merely on the internal geometry of the material system, rather than on intrinsic interfacial properties, this strategy is potentially applicable to a wide range of materials. § EXPERIMENTS To quantify the effect of different bamboo-inspired void patterns, we 3D print single-edge notched bend (SENB) specimens with elongated voids using a linear elastic photopolymer named R11. This polymer has a flexural modulus of 2450 MPa and flexural strength of 75 MPa. This polymer fails in a brittle manner at a fracture strain 0.06. The dimensions of the SENB specimens are 67.5 mm × 16 mm × 6 mm. The edge crack is printed together with the specimen during the printing process. The void patterns are printed in front of the crack tip to predictably induce crack growth (Fig. <ref>B). The bamboo-inspired void pattern comprises three layers of voids with individual void lengths L, interlayer void offset d, and interlayer distance L_s. The voids are rectangles with two semi-circular fillets of radii 0.5 mm at the two ends (see Fig. S1). We conducted three-point bend experiments using a commercial quasistatic materials test system (Instron-65SC) using a custom fixture producing three-point loading with a span of 60 mm between supports. The displacement rate was 0.1 mm/s. § RESULTS The bamboo-inspired voids affect the load-displacement response of the SENB specimens in multiple ways. Fig. <ref>C shows representative load-displacement data for solid and bamboo-inspired specimens. 
As shown, the addition of the bamboo-inspired void patterns can significantly improve the displacement at failure relative to the solid specimen. There is a characteristic first (small) peak in the load-displacement response of specimens with bamboo-inspired voids, associated with the propagation of the crack from the pre-existing notch tip into the nearest void in the first layer, which blunts the crack tip and arrests its propagation. In contrast, the solid specimen exhibits unstable crack growth and rapid catastrophic failure, as indicated by the single peak in the load-displacement curve. Below, we systematically investigate the effects of the void size and the spatial arrangement on these failure characteristics. §.§ Effect of void length on fracture energy To quantify the effect of void geometry on damage tolerance, we first consider specimens with only one layer of voids (Fig. <ref>). As in previous work <cit.>, the fracture energy can be calculated as the area under the load-displacement curve: U = ∫_0^Δ F dΔ Crack blunting occurs when the initial crack propagates into the voids. As the crack tip enters the void, the effective radius of curvature of the crack tip greatly increases. In this context, crack blunting is quantified by the crack tip opening displacement (CTOD or δ). For a linear elastic polymer, the relationship between δ and the energy release rate, taken as the J-integral in the limit of small-scale yielding, is given as: J =-∂ U/∂ A = mσ_yδ where m is a dimensionless empirical constant that depends on the material strain hardening behavior. The yield strength σ_y used in this work is 75 MPa. The void induces blunting, and we take the void length as the CTOD, L=δ. Figure <ref>B shows the effect of void length L (ranging from 1 mm to 3 mm) on the fracture energy, as measured experimentally. In accordance with the energy release rate of Eq. (<ref>), the fracture energy measured in the experiments is nominally linear with L, i.e., δ, suggesting crack blunting by the elongated voids as the main mechanism for increasing fracture energy. All samples with voids exhibited larger fracture energy than the solid specimens (i.e., specimens with no voids). Specifically, the bamboo-inspired elongated voids distribute stress over a larger area, which can reduce the stress concentration at the two fillet points. To quantify this effect further, we used finite element analysis (FEA) to compute the maximum principal stress around the crack tip, a predictor of crack extension direction immediately before initiation <cit.>. To mimic crack blunting in numerical models, we pre-cut the crack from the initial crack tip to the first layer of voids (see Fig. <ref>C). Then, we applied three-point loading in the model and extracted the maximum principal stresses around the void. Figure <ref>C shows the maximum principal stress contour (around the void) with different void lengths L. The stress contour is symmetric about the center line of the middle void because of the symmetry of loading and boundary conditions. As expected, the local maximum of the maximum principal stress occurs at the fillet point of the void. The maximum value of the stress contour is inversely related to the length of voids; samples with larger voids have a lower maximum principal stress at the crack tip. This stress analysis is consistent with the fracture energy measurements, reinforcing the described crack blunting. Optical images of a representative crack surface are shown in Fig. <ref>D. 
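For reference, the fracture-energy integral U = ∫ F dΔ and the blunting estimate J = mσ_yδ are simple to evaluate numerically from the measured data. The sketch below assumes a two-column load-displacement file and uses the σ_y = 75 MPa quoted above; the file name and the choice m = 1 are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Fracture energy via trapezoidal integration of the load-displacement curve,
# and the J estimate with the crack-tip opening displacement delta taken as
# the void length L, as in the text.

def work_of_fracture(displacement_mm, force_N):
    return np.trapz(force_N, displacement_mm) / 1e3        # N*mm -> J

def j_from_blunting(L_mm, sigma_y_MPa=75.0, m=1.0):        # m = 1 is a placeholder
    return m * (sigma_y_MPa * 1e6) * (L_mm * 1e-3)         # J/m^2

# hypothetical data file with displacement (mm) and force (N) columns
d, F = np.loadtxt("senb_L2mm.csv", delimiter=",", unpack=True)
print(work_of_fracture(d, F), j_from_blunting(L_mm=2.0))
```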
The fracture surface contains the mirror (α), mist (β), and hackle (γ) zones, with increasing roughness. The roughness of the fracture surface can be treated as an indicator of the velocity of the crack propagation <cit.>. The mirror zone is an indicator of slow crack propagation velocity, whereas the hackle zone corresponds to rapid crack propagation. The mirror zone (denoted as α) observed in the image is associated with crack blunting, which significantly reduces the speed of crack propagation. Small amounts of experimental misalignment of the loading head on the sample may cause slight asymmetry of the observed fracture zones with respect to crack propagation direction. The fracture energy measurements still demonstrate systematic differences with respect to varying values of L, despite the observed asymmetry (Fig <ref>D, inset). §.§ Effect of second layer of voids on crack propagation In addition to crack blunting, the interaction of voids between layers can also affect the fracture toughness of micro-architected materials <cit.>. It is known that interaction between microcracks affects the stress intensity factor and stress distribution in front of the crack tip, depending on their positions and orientations <cit.>. In this section we consider the effect of void interactions in SENB specimens with two layers of voids. As shown in Fig. <ref>A, the lengths of the voids in the two layers are L_1 and L_2, respectively, with L_i defining the void length in the i^th layer of voids. Moreover the distance between the adjacent layers of voids is defined as L_s; the voids between layers may also be offset with respect to one another by a distance d. We first consider the effect of interlayer distance L_s. First, numerically we pre-cut the crack (in FEA) from the initial crack tip to the first layer of voids, and extracted the contour plot of the maximum principal stress around the first void in front of the crack tip. Figure  <ref>B shows the effect of layer distance L_s on the maximum principal stress contour. The maximum principal stress at the fillet point slightly increases with increasing layer distance L_s. To better understand the interactions between voids, we plot the distribution of the maximum principal stress from the fillet edge of the first void to the void in the second layer using FEA. The maximum principal stress decreased along the path and slightly increased again near the second layer of voids. The maximum principal stress begins to converge for L_s>3 mm (Fig. <ref>C). Our simulation results suggest that the layer distance should be larger than 3 mm for this fillet radius to prevent the voids from interacting and to minimize stress concentrations. Figure <ref>D shows the fracture energy for experimental SENB measurements as a function of void lengths L_1 and L_2 and of interlayer void spacing L_s. Consistent with the numerical results, the fracture energy increases with layer distance L_s and converges at L_s=3 mm. Considering that we do not increase the overall size of the SENB specimens, decreasing the layer distance reduces the thickness of the material between the two layers, resulting in lower fracture energy. However, given the fixed total size of the SENB specimen, when the layer distance is too large, the second layer of voids gets close to the top surface of the specimen, i.e., near the displacement head of the mechanical test system. This alters the boundary conditions and may affect the structural integrity and fracture energy. 
Another parameter we used to control the void interaction effect is the layer offset, d. The FEA model predicts the maximum principal stress at the fillet point of the first layer void as a function of the second layer's horizontal offset, as presented in Fig. <ref>E. The result indicates a periodic relation between the maximum principal stress and the layer offset d. A second layer of voids with any offset reduces the maximum principal stress when the crack is arrested by the first layer, relative to samples with no offset. This result demonstrates the effectiveness of controlling the layer offset d in reducing the maximum principal stress at the crack tip. §.§ Parametric study of multi-layer systems Based on the characterization of crack blunting and void interactions (Fig.<ref>,<ref>), multiple layers of voids can be further used to redistribute stress, delocalize strain, and achieve higher fracture energy. Natural materials, like bamboo, are generally not uniform; they use heterogeneity and gradients to improve mechanical and physical properties under the loading scenarios critical to their survival. We performed a parametric study to explore the effect of graded, bamboo-inspired void patterns on the fracture energy of SENB specimens (Fig.<ref>A). To mitigate the negative void interaction effect brought by layers of voids, we first found an appropriate interlayer void spacing L_s for the multi-layer system without gradients. We then kept L_s constant for the remaining study of bamboo-inspired graded void patterns. Figure <ref>B shows the fracture energy of a multi-layer system with uniform void sizes as a function of layer distance. We chose void size L = 4 mm to maximize crack blunting. Similar to the trend for two-layer systems, the peak fracture energy is achieved with interlayer distance L_s=3 mm. We then fixed the interlayer distance to L_s=3 mm for the remainder of the study. After finding an appropriate interlayer distance, we parameterized graded, bamboo-inspired void patterns. Given the void length in the first layer, the parameters for the second and third layers were determined by defining a gradient along the interlayer spacing L_s in Eq.(<ref>): ∂ L/∂ y = (L_i-1 - L_i)/L_s and ∂ d/∂ y = (d_i-1 - d_i)/L_s. Figure <ref>C shows the effect of layer offset d on the fracture energy of multi-layer systems. Adding layers of voids without offsets (∂ d/∂ y = 0) to SENB specimens with a single layer of voids produces very little improvement in the fracture energy, and may actually decrease it. In contrast, multiple layers with offset defined by ∂ d/∂ y = L/6 mm can improve the fracture energy for various void lengths. Laterally offsetting voids from layer to layer reduces the maximum principal stress at the crack tip after the crack propagates into the first layer of voids (as shown in Fig. <ref>E). This offset affects the second peak in the load-displacement curve, and, therefore, the fracture energy. After noting the improvement brought by void patterns with offset, we performed the parametric study on graded, bamboo-inspired void patterns using both experimental and numerical methods (see Fig. <ref>A). We chose the void length in the first layer to be L_1=8 mm, resulting in five periodically spaced voids in each layer of voids. Given the fixed dimensions of the SENB specimens, this choice balances the desire for a larger number of voids per layer with the need for a void size that is sufficiently large so as to avoid manufacturing errors. 
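As a concrete illustration of this parameterization, the layer-by-layer void lengths and offsets implied by the gradient definitions above can be generated as in the short sketch below. Starting the offset at d = 0 in the first layer and the specific gradient values used in the example are assumptions for illustration only.

```python
# Graded void-pattern parameterization: given the first-layer void length L_1
# and the chosen gradients, each deeper layer follows from
#   L_i = L_(i-1) - (dL/dy) * L_s   and   d_i = d_(i-1) - (dd/dy) * L_s.

def graded_layers(L1_mm, dL_dy, dd_dy, Ls_mm=3.0, n_layers=3):
    layers = [{"L": L1_mm, "d": 0.0}]          # first-layer offset taken as zero
    for _ in range(n_layers - 1):
        prev = layers[-1]
        layers.append({"L": prev["L"] - dL_dy * Ls_mm,
                       "d": prev["d"] - dd_dy * Ls_mm})
    return layers

# e.g. L_1 = 8 mm, L_s = 3 mm, unit gradients give void lengths of 8, 5 and 2 mm
print(graded_layers(8.0, dL_dy=1.0, dd_dy=1.0))
```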
Then, we set the difference in L and d from one layer to the next to have an upper bound equal to the interlayer spacing, L_s, i.e., the gradients in Eq.(<ref>) are defined by (∂ L/∂ y) and (∂ d/∂ y) ∈ [0,1]. The simulations suggest that for SENB samples with multiple layers of voids, adding layer offset can decrease the maximum principal stress at the fillet point, except for the uniform case (∂ L/∂ y=∂ d/∂ y = 0 ). Our experiments show that larger gradients in void size and offset result in larger fracture energies (see Fig.<ref>A). We note that the maximum principal stresses computed from the simulations can only provide insight about the trends in fracture energy observed experimentally. The symmetry in the simulation suggests equal probability of crack propagation at the void edges. However, any amount of misalignment in the position of the applied load or imperfections in the printing process would cause crack propagation at one void edge rather than both. The bamboo-inspired void patterns in SENB specimens cause a significant improvement in fracture energy relative to the SENB specimens without any voids (Fig. <ref>C). The introduction of periodic void arrays in notched specimens can lead to a 15-fold increase in the fracture energy relative to solid SENB specimens. Adding a gradient to the void patterns can additionally increase the fracture energy, resulting in a 55-fold increase over solid samples. Figure <ref>C shows a comparison of loading curves of the solid SENB specimen and the SENB specimen which achieved the highest fracture energy in the parametric study. The latter is composed of three layers of voids with a large gradient in void length and large layer offsets, ∂ L/∂ y and ∂ d/∂ y, respectively. The long first layer of voids can effectively magnify the crack blunting effect. The second and the third layers of voids can reduce the maximum principal stress at the crack tip and improve the fracture energy. In three-point bending experiments, the specimen demonstrated remarkable fracture strain (maximum displacement), which contributes to the improvement in fracture energy. § DISCUSSION AND CONCLUDING REMARKS While this work has focused on the effect of bamboo-inspired void patterns on the energy associated with failure, given the well-known trade-off between toughness and strength it is worth briefly discussing the effect of these void patterns on strength. In marked contrast with notched specimens, the force-displacement behavior of unnotched samples exhibits only one peak (compare Fig. <ref>C with Fig. <ref>A). Unnotched specimens with bamboo-inspired, gradient voids show 20% less strength than solid specimens, despite the large improvement in fracture energy measured for the notched specimens. This is due to the fact that the strength of the brittle material itself controls the fracture of the unnotched samples (and any voids will concentrate stress). In contrast, the crack tolerance of the structure/material controls the work of fracture of the notched samples. The bamboo-inspired void patterns improve the structural compliance. As shown in Fig. <ref>B, which plots fracture energy vs. relative density, bamboo-inspired void patterns cause order-of-magnitude improvement in fracture energy despite a roughly 10% decrease in relative density. 
To better understand why these architectures achieve such a large increase in work of fracture (at the cost of a small decrease in strength), we take the case of samples with two layers of voids as an example: the stress distribution ahead of the crack tip is not monotonic (see Fig. <ref>C). Starting from the crack tip (the fillet point), the stress decreases first and increases again for the region near the second layer of voids. In other words, multiple layers of voids act as multiple stress concentration points that redistribute the stress distribution of the sample. Similarly, Fig. <ref>Ciii shows the same effect in samples with three layers of voids: the second and third layers of voids cause the deformation to distribute throughout the sample during loading, i.e., load sharing <cit.>. The voids increase structural compliance, which increases the strain energy prior to failure. As a conceptual demonstration of the potential utility of these architectures, we produced bamboo-inspired void patterns in PMMA plates via a laser cutter. These were subjected to low-energy impact experiments. Figure <ref>C shows optical images before and after impact for both solid plates (left) and the plates with bamboo-inspired void patterns (right). The solid plates catastrophically fractured into multiple pieces. In contrast, the plates with bamboo-inspired void patterns trapped the cracks as they extended radially from the impact site, thereby preventing the plate from breaking into separate pieces. As mentioned previously, natural bamboo consists of densified vascular bundles and parenchyma cells, resulting in constitutive behaviors far more complex than what we have explored here. In this study, we limited our focus to geometric effects in brittle materials. The ductility of materials would certainly affect the failure characteristics of structures with architected void patterns. Combining materials-intrinsic ductility with the geometrically-enabled structural compliance may be a promising direction for future research. In summary, the introduction of periodic, uniformly distributed voids in notched bend specimens leads to a 15-fold increase in the fracture energy relative to notched specimens with no voids. Adding a gradient to the pattern of voids can lead to a cumulative 55-fold increase in fracture energy relative to the solid specimen. Furthermore, this order-of-magnitude improvement in the fracture energy results in only a 20% reduction of strength. This work demonstrates the potential of using the internal architecture to improve the failure performance of brittle materials. These principles can be applied to other brittle material systems which are susceptible to catastrophic failure, and could lead to important practical improvements in applications such as infrastructure, semiconductors, and protective gear. § MATERIALS AND METHODS §.§ Fabrication of samples The structures were printed with a commercial digital light projection 3D printer (EnvisionT EC Vida HD). The printer has a build volume of 96 × 54 × 100 mm with XY resolution of 25 μm and a z step size of 25 μ m. Due to the size limit of the 3D printer, the dimensions of SENB specimens tested in this study were 67.5 mm × 16 mm × 6 mm (length × width × thickness). The aspect ratio of the loading span to the length of the sample is 1.25. The edge crack was printed together with the specimen during the printing process. The printer takes grayscale images (1920x1080) as inputs and projects these into the resin. 
The material used in this work is a brittle photopolymer named R11. The mechanical properties of R11 were measured via uni-axial tensile tests at a strain rate of 0.002/s using an Instron-65SC. §.§ Fracture tests Fracture testing was performed by conducting three-point bend tests under displacement control with a loading rate of 1 mm/s using an Instron-65SC at room temperature. The span between the two supporting points was 60 mm. At least three specimens were tested for each type of experiment. §.§ Finite element analysis Finite element analysis for static loading of linear elastic materials was carried out using Abaqus. The specimen geometries were generated in Python and SOLIDWORKS. The geometry was modeled using four-noded, plane strain elements. Example meshes are provided in the supplementary material for reference.
http://arxiv.org/abs/2306.05954v1
20230609151121
Energy-dependent periodicities of LS I +61$^\circ$ 303 in the GeV band
[ "M. Chernyakova", "D. Malyshev", "A. Neronov", "D. Savchenko" ]
astro-ph.HE
[ "astro-ph.HE" ]
Chernyakova et al]M. Chernyakova^1,2, D. Malyshev^3, A. Neronov^4,5, D. Savchenko^4,6,7 ^1 School of Physical Sciences and Centre for Astrophysics & Relativity, Dublin City University, D09 W6Y4 Glasnevin, Ireland ^2 Dublin Institute for Advanced Studies, 31 Fitzwilliam Place, D02 XF86 Dublin 2, Ireland ^3 Institut für Astronomie und Astrophysik, Universität Tübingen, Sand 1, D 72076 Tübingen, Germany ^4Université de Paris, CNRS, Astroparticule et Cosmologie, F-75006 Paris, France ^5Laboratory of Astrophysics, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland ^6Bogolyubov Institute for Theoretical Physics of the NAS of Ukraine, 03143 Kyiv, Ukraine ^7Kyiv Academic University, 03142 Kyiv, Ukraine Energy-dependent periodicities of LS I +61^∘ 303 in the GeV band [ ================================================================ is a rare representative of the gamma-ray binaries with a compact object known to be a pulsar. We report on the periodicity and spectral analysis of this source performed with more than 14 years of ♭data. The periodicity of is strongly energy dependent. Two periods P_1 = 26.932± 0.004 (stat)± 0.008 (syst) and P_2 = 26.485 ± 0.004 (stat)± 0.007 (syst) are detected only at E>1 GeV and at E<0.3 GeV correspondingly. Within 1σ (stat+syst) the periods are consistent with orbital (P_2) and beat orbital/superorbital (P_1) periods. We present the orbital light curves of the system in several energy bands and the results of the spectral analysis. We discuss the possible origin of the change in the variability pattern between 0.1 and 1 GeV energy. § INTRODUCTION Gamma-ray-loud binary systems (GRLB) are X-ray binaries which emit very-high energy (VHE) γ-rays. While about a thousand of X-ray binaries are known, only about a dozen of systems have been detected as persistent or regularly variable GeV-TeV emitters <cit.>. The power engine (accretion or rotation powered), the physical conditions allowing the acceleration of charged particles to the very high energies (and consequent very high energy photon emission) and even the nature of the compact object (CO) are not well established for almost all GRLBs. E.g. the absence of the pulsed radio emission from some systems can point to the black hole nature of the compact object. On the other hand the detection of the pulsed radio emission can be complicated by a strong absorption of such an emission in the dense layers of the stellar decretion disk. Among all γ-ray binaries the type of the compact object was firmly established to be a pulsar only for three systems. Until recently, the CO was identified (through detection of the pulsed radio emission) as a pulsar only in and  <cit.>. In 2022 FAST radio observations allowed the detection of the pulsed radio emission from  <cit.>, increasing the number of pulsar-hosting systems to three. consists of a Be star and a pulsar on the eccentric orbit. Two decade-long radio observations of demonstrated that the emission is modulated on timescales of P_o∼ 26.5 d and P_so∼ 1667 d <cit.>, referred hereafter as orbital and superorbital periods. Similar periods have been detected in optical <cit.>, X-ray <cit.> and gamma-ray bands <cit.>. In radio to X-ray bands, the orbital light curve is characterized by a single peak with a wavelength-dependent position and drifting on the superorbital time scale. With the change of superorbital phase the peak demonstrates rapid transition to another orbital phase <cit.>. 
At the same time, the behaviour of the system in the GeV band seems to differ significantly, with the structure of the orbital light curve changing from a regular one with a single peak at certain superorbital phases to erratic at others <cit.>. In this paper, we reconsider public GeV-band observations of aiming for detailed studies of the variability of this system on orbital and superorbital time scales. We find that the periodicity properties of the system are strongly energy-dependent in the energy range accessible to the ♭telescope. The paper is organised as follows: in Sec. <ref> we discuss the ♭data and the methods used for its analysis; in Sec. <ref> we present the obtained results and discuss the possible origin of the observed energy-dependent periodicity. In Sec. <ref> we briefly summarize the obtained results and their possible interpretation. Where applicable below, we adopt the following parameters – the orbital and superorbital periods P_orb = 26.496 ± 0.0028 d and P_sorb = 1667 d <cit.>. The values for the eccentricity e = 0.537 ± 0.034 and the phase of the periastron ϕ = 0.275 ± 0.010 are adopted from <cit.>, see however <cit.>. Historically, the phase ϕ = 0 corresponds to Julian Date (JD) 2,443,366.775 <cit.>. § ♭DATA AND DATA ANALYSIS The results described below are based on the analysis of more than 14 years of the ♭data (Aug. 4th, 2008 – Oct. 26th, 2022) with the latest available software. The analysis was carried out using the latest Pass 8 reprocessed data (P8R3, <cit.>) for the CLEAN event class taken at the region centered at coordinates. Further details specific to the performed analysis are summarized in the corresponding subsections. §.§ ♭data: aperture photometry analysis §.§.§ Periodicity searches Aiming at periodicity studies in the data, we built the light curves of with the standard aperture photometry analysis[See e.g. https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/aperture_photometry.html♭aperture photometry analysis manual] in several energy intervals. In each of the considered energy intervals (0.1–0.3 GeV; 0.3–1 GeV; 1–10 GeV; 0.3–300 GeV) we selected the photons with the corresponding energies, detected within a 1^∘-radius around . We binned the selected photons into light curves with 30 min long time bins and calculated the exposure for each time bin with the help of and routines. §.§.§ Lomb-Scargle analysis We performed the Lomb-Scargle analysis of the obtained light curves using the implementation provided within python module. The time bins with zero ♭exposure were explicitly removed during the analysis. The periodograms were built for 2000 trial periods linearly distributed between 26–28 days. A zoom of the periodogram in the energy range (0.3–300 GeV) to the 26.25–27.25 d period range is shown in Fig. <ref> (left). Two periods P_1 = 26.937 d and P_2 = 26.4845 d, corresponding to local maxima in the periodogram, are clearly visible in the Figure. In order to estimate the uncertainty of these periods we performed the Lomb-Scargle analysis of 10^3 randomly-generated datasets. Each random dataset was generated according to a Poisson distribution around the real dataset. In each of the generated datasets, we determined the positions of the local maxima close to the P_1 and P_2 positions. The distribution of maxima allowed us to estimate the (1σ) uncertainty of these periods as P_1 = 26.932 ± 0.0042 d and P_2 = 26.4845 ± 0.0046 d. We note that the P_2 period is consistent with P_orb at the ∼ 2σ level. 
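The periodogram and bootstrap steps described above can be reproduced with a few lines of Python. The sketch below uses astropy's LombScargle implementation, which is an assumption since the text does not name the module it relies on, and the window used to localise each peak is an arbitrary illustrative choice.

```python
import numpy as np
from astropy.timeseries import LombScargle

# t: time-bin centres (days); counts, exposure: aperture-photometry light curve
# with zero-exposure bins already removed. Rates are counts / exposure.

periods = np.linspace(26.0, 28.0, 2000)          # trial periods, as in the text
frequency = 1.0 / periods

def peak_with_bootstrap(t, counts, exposure, p_guess, n_boot=1000, window=0.2):
    """Locate the periodogram maximum near p_guess and its 1-sigma spread."""
    sel = np.abs(periods - p_guess) < window
    peaks = []
    for _ in range(n_boot):
        fake_rate = np.random.poisson(counts) / exposure   # Poisson-resampled dataset
        power = LombScargle(t, fake_rate).power(frequency)
        peaks.append(periods[sel][np.argmax(power[sel])])
    return np.mean(peaks), np.std(peaks)

# For reference, P_beat = P_orb * P_sorb / (P_sorb - P_orb)
#              = 26.496 * 1667 / (1667 - 26.496) ≈ 26.92 d
```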
The period P_1 is at ∼ 2σ level consistent with the orbital-superorbital beat-period (P_beat=P_orbP_sorb/(P_sorb-P_orb)≃ 26.924 d). The shaded regions around P_1 and P_2 positions in the left panel of Fig. <ref> correspond to 2σ statistical uncertainty regions derived from the random datasets as described above. The Lomb-Scargle periodograms built in narrower energy intervals (0.1–0.3 GeV; 0.3–1 GeV; 1-10 GeV) are shown in Fig. <ref>. These periodograms demonstrate clear energy dependence of the Lomb-Scargle power in the peaks corresponding to P_1 and P_2 periods. While at the lowest (0.1–0.3 GeV) energies the P_2 period dominates the periodogram, at highest energies (1–10 GeV) the periodogram is dominated by P_1. At intermediate energies (0.3–1 GeV) both periods are clearly seen. §.§.§ Self-Similar Log-Likelihood analysis To cross-check the results of Lomb-Scargle analysis discussed in the previous subsection we additionally performed the self-similar log-likelihood analysis. A similar analysis was shown to be effective for the blind search of the periodicity in the ♭data of gamma-ray binary 1FGL J1018.6-5856 <cit.>. For the analysis, we first defined a range of the test periods (1000 periods linearly distributed between 26 d and 28 d). We convolved the ♭light curve with each of the test periods defining corresponding “test orbital light curves” assuming 20 linearly distributed phase-bins per orbit. Based on each test orbital light curve we defined the predicted (“model”) number of counts in each time bin of the original light curve. In the next step, we calculated the log-likelihood to observe the detected number of photons in each time bin N_i given the model number of photons m_i: log LL = ∑_i log P_i(≥ N_i| m_i) Here P_i(≥ N_i| m_i) stands for the Poisson probability to observe ≥ N_i photons if the model predicts m_i photons in the time bin i. Figure <ref> (right panel) shows the Δ LL = -2·(log LL - min(log LL)) profile for the considered test periods. The quantity Δ LL follows the χ^2 distribution with 1 d.o.f. <cit.> and can be used thus for the estimation of the periods present in the data and uncertainties on these periods. The self-similar log-likelihood analysis resulted in the detection of two periods similar to the Lomb-Scargle approach. The corresponding periods are P_1=26.940 ± 0.006 d and P_2=26.478 ± 0.003 d. We use this independent from Lomb-Scargle analysis to estimate the level of systematic uncertainty of the performed analysis. Namely, we require the best-fit P_1 and P_2 values obtained from Lomb-Scargle and self-similar log-likelihood analysis to be consistent within 1σ systematic uncertainty. This results in the following estimations of the periods: P_1 = 26.932± 0.004 (stat)± 0.008 (syst) d and P_2 = 26.485 ± 0.004 (stat)± 0.007 (syst) d. §.§ ♭data: likelihood analysis To study the details of the variability of flux and spectral characteristics of the source on the orbital and beat period time scales, we additionally performed the standard binned likelihood analysis of ♭data. Contrary to the aperture photometry analysis such analysis relies on the fitting of the available data to the spatial/spectral model of the analysed region and allows to account for possible flux variations of the nearby sources. For the binned likelihood analysis <cit.> we consider the photons (CLEAN class, P8R3 IRFs, zmax=90^∘) within a circular region of 12^∘-radius centered at position. 
The spatial/spectral model of the region included the standard galactic and isotropic diffuse emission components as well as all known gamma-ray sources within 17^∘ of the ROI centre from the 4FGL catalogue <cit.>. Namely, the positions and spectral models for each catalogue source were selected according to the ones provided in the catalogue. During the fitting of the model to the data in each of the time bins only the normalisations of spectral models were left free for all sources, while all other spectral parameters were fixed to their catalogue values. We additionally fixed all spectral parameters (including normalisations) of all sources in a ring 12^∘-17^∘ from position. §.§.§ Orbital light curves In order to build the folded orbital light curves we split the data into energy bins (0.1–0.3 GeV and 1–10 GeV) and time bins (according to orbital phase with respect to P_orb or P_beat) and the analysis was performed in each of such bins. Zero phases in both cases were selected to be T_0 = MJD 43366.275 as in previous studies of orbital and superorbital periodicity of . The orbital light curves produced in this way are shown in Fig. <ref>. The left panel shows the folded light curves for the energy range 0.1–0.3 GeV, with the period P_orb (blue points) and P_beat (green points). The right panel shows the results for the 1-10 GeV energy range. Vertical dashed lines correspond to the periastron (ϕ=0.275) and the apastron phases. In addition to the folded orbital light curves we performed studies of the long-term variations of the orbital light curve (with respect to P_orb) with the P_sorb. For this, we have performed the binned likelihood analysis in the specified energy intervals and time bins as short as 2.6496 d. (i.e. 0.1 orbital phase duration) aiming to determine flux in each of the specified time bins. Fig. <ref> shows the fractional variability of the flux (i.e. flux normalized to 1 at each orbit by the maximal flux observed at that orbit) in 0.1-0.3 GeV and 1-10 GeV energy intervals as a function of orbital and superorbital phases. In case of no significant detection of the source in a time bin, we explicitly set the flux in this time bin to 0. The white gaps correspond to the period in March-April 2018. During this period ♭was in the safe hold mode due to issues with the solar array and did not take scientific data [See e.g. https://www.nasa.gov/feature/goddard/2018/fermi-status-update♭status report]. Dotted horizontal cyan lines correspond to the phase of the periastron (ϕ=0.275). Dashed diagonal cyan lines illustrate the position of flux maximum seen above 1 GeV energies. This maximum is drifting with respect to P_orb and can be connected to the beat period P_beat. Overall, we identify several distinct orbital/superorbital phase periods. These are: * periastron maximum (ϕ=0.275± 0.1); observed in 0.1-0.3 GeV range; * “2nd peak” or beat-period maximum (frac(Φ)=(frac(ϕ)-0.35)± 0.1) along dashed diagonal lines, seen above 1 GeV; * “minima”: periods of low GeV emission at E<0.3 GeV (frac(Φ)>0.4 AND frac(ϕ)>0.75). Here frac stands for the fractional part of the orbital(ϕ) or superobital(Φ) phase. For each of these time intervals, we performed the binned likelihood analysis for a set of energy bins. The best-fit fluxes for the corresponding energy/time bins as a function of energy are shown in Fig. <ref>. The upper limits on the flux are shown for the energy bins where is not detected with at least 2σ significance. 
The upper limits correspond to 95% false-chance probability and are calculated with the help of python module, provided within . § RESULTS AND DISCUSSION Our analysis of more than 14 yr of ♭data on has revealed the presence of two close-by periods P_1 = 26.932± 0.004 (stat)± 0.008 (syst) d and P_2 = 26.485 ± 0.004 (stat)± 0.007 (syst) d, see Fig. <ref>. Within 1σ (stat+syst) uncertainties these periods coincide with the orbital period (P_2≈ P_orb=26.496 d) and the orbital-superorbital beat-period (P_1≈ P_beat=P_orbP_sorb/(P_sorb-P_orb)≃ 26.924 d). The periodicity of the signal is strongly energy dependent. The low-energy (0.1-0.3 GeV) light curve is strongly modulated with an orbital period and peaks at close to periastron phases. At higher, 1-10 GeV energies, the modulation with the orbital period is strongly suppressed and only orbital/superorbital beat period is detected, see Fig. <ref>. The light curves folded with P_orb and P_beat periods are shown in Fig. <ref>. The lower energy band orbit-folded light curve has a clear peak at the periastron (ϕ=0.275) and a secondary peak at the orbital phase ϕ∼ 0.6. The 1-10 GeV orbit-folded light curve still possibly features the peak at the periastron, even though the amplitude of the orbital variability is strongly decreased. Instead, a pronounced peak is found in the beat-period folded light curve. The energy-dependent variability pattern is also clearly seen in Fig. <ref> showing the level of the orbital variability as a function of orbital and superorbital phases. In this figure, the orbital phase of the periastron peak is shown with the dotted horizontal line and the beat period seen at higher energies corresponds to the diagonal dashed cyan lines. It corresponds to a gradual drift of the phase of the maximum of the E>1 GeV light curve throughout the super-orbital cycle. The spectra extracted at phases around the periastron peak (ϕ=0.275± 0.1), beat-period maximum (“2nd peak”, frac(Φ)=(frac(ϕ)-0.35)± 0.1), minimal low-energy flux (frac(Φ)>0.4 AND frac(ϕ)>0.75) and the all-data averaged spectra are shown in Fig. <ref>. One can see that the source is most variable in the energy range below 1 GeV. The peak flux energy changes from about 0.1 GeV at the periastron to ∼ 0.4 GeV for the beat-period maximum and minimal low-energy flux periods. Surprisingly, the flux in the energy range above the peak is more stable than below the peak. The presence of the peak in 0.1-0.3 GeV orbital light curve at close to periastron orbital phases can be explained in a straightforward manner. The 0.1 GeV emission could be produced via synchrotron or the IC mechanisms that naturally result in the enhanced level of γ-ray emission close to periastron due to the increased magnetic and/or soft photon fields densities at these orbital phases. The drift of the maximum of the E>1 GeV light curve may possibly be explained as being due to the precession of the system components. There are multiple rotating components in the system. The fastest rotator is the pulsar that spins with the period P_p≈ 270 ms. The Be star spins much slower, with a period P_* that should be close to the period of Keplerian orbits at the surface of the star, P_*≃ 2π R_*^3/2/(GM)^1/2≃ 0.7 (R_*/10R_⊙)^3/2(M/30M_⊙)^-1/2 d The star is surrounded by the deccretion disk that spins with nearly Keplerian velocity. This Keplerian velocity decreases with the distance from the star and is lowest at the truncation radius of the disk, which is close to the size of the binary orbit. 
The period of rotation at the disk truncation radius is P_disk≃ 2π R_disk^3/2/(GM)^1/2≃ 36.45(R_disk/10^13cm)^3/2(M/30M_⊙)^-1/2 d Finally, the binary orbit has the period P_orb≃ 26.5 d. It can be close to the period of the disk if the truncation radius of the disk is close to the binary separation between the pulsar and the star. The periods P_1 and P_2 may be potentially associated to P_*, P_disk, P_orb or a certain combination of these periods. <cit.> have discussed the possibility that the periods P_1 and P_2 correspond to the orbital period and precession period, presumably, of a jet emitted by the black hole. Evidence for the presence of a pulsar in the system <cit.> disfavors the hypothesis of a precessing black hole jet. Nevertheless, precession can well be relevant also for the system where the compact object is a pulsar. One possibility is precession due to a misalignment of the orbital plane and the equatorial plane of the Be star, which is the middle plane of the deccretion disk of the Be star. In this case, the gravitational pull of the pulsar produces a torque on the disk, which forces it to precess around the axis perpendicular to the orbital plane. The effect is similar to the precession of the rotation axis of the Earth due to the gravitational pull of the Sun. If the disk spins with the angular velocity ω⃗, with the spin axis (the rotation axis of the Be star) inclined at an angle θ with respect to the orbital plane of the binary, the precession angular frequency is <cit.> Ω_pr=3ϵΩ_orb^2/2ωcosθ where ϵ=(I_||-I_)/I_|| is the ellipticity of the disk that depends on its momenta of inertia parallel and perpendicular to the rotation axis. The decretion disk is rotating with a frequency close to the frequency of rotation along Keplerian orbits at the disk truncation radius, which is close to the binary separation distance. Thus, ω is close to Ω_orb and Ω_pr can be close to both ω and Ω_orb for certain θ if cosθ≃ 2/(3ϵ). In this case, the disk precession is almost synchronized with the pulsar rotation and a small mismatch between the disk precession and binary orbit periods leads to a gradual change of orientation of the disk with respect to the orbital plane on a long superorbital time scale. This slow change of mutual orientation of the decretion disk and pulsar may lead to the slow superorbital variability with the period P_so≃ P_orb^2/(P_pr-P_orb). An alternative explanation may be the periodic growth and decay of the Be star disk, as suggested by <cit.>, based on the X-ray variability pattern. In this model, the shift of the maximum of the orbital X-ray light curve has been attributed to the confinement of the pulsar wind nebula by the Be star disk. The gradual growth of the disk leads to longer confinement of synchrotron-emitting electrons in the nebula and more pronounced synchrotron emission maximum at a later orbital phase, just before the nebula is deconfined when the pulsar exits from the disk. In this model, the system operates in two different modes. During a certain fraction of the orbit, the pulsar wind nebula is confined inside the disk and all emission coming from the system is from a rather compact region around the pulsar position. The second mode is when the pulsar wind nebula is deconfined and relativistic electrons can escape from the system, presumably along a bow-shaped contact surface between the pulsar and stellar winds, as in the model of Ref. <cit.>. 
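As a quick numerical cross-check of the characteristic time scales quoted in this discussion, the short sketch below evaluates the beat period, the two Keplerian periods, and the precession period implied by inverting P_so ≃ P_orb^2/(P_pr - P_orb). The constants are rounded and the script is purely illustrative.

```python
import numpy as np

G = 6.674e-11                       # m^3 kg^-1 s^-2
Msun, Rsun = 1.989e30, 6.957e8      # kg, m
day = 86400.0

P_orb, P_sorb = 26.496, 1667.0      # d
P_beat = P_orb * P_sorb / (P_sorb - P_orb)
print("beat period P_beat = %.3f d" % P_beat)            # ~26.92 d

def kepler_period(R_m, M_kg):
    """Keplerian period 2*pi*R^(3/2)/(G*M)^(1/2), returned in days."""
    return 2 * np.pi * R_m**1.5 / np.sqrt(G * M_kg) / day

print("P_* (R=10 Rsun, M=30 Msun)    = %.2f d" % kepler_period(10 * Rsun, 30 * Msun))  # ~0.7 d
print("P_disk (R=1e13 cm, M=30 Msun) = %.2f d" % kepler_period(1e11, 30 * Msun))       # ~36 d

# precession period needed to reproduce the superorbital cycle,
# from P_so ~ P_orb^2 / (P_pr - P_orb)
P_pr = P_orb + P_orb**2 / P_sorb
print("P_pr implied by P_sorb = %.3f d" % P_pr)          # ~26.92 d, close to P_1
```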
The change of the variability pattern over the relatively narrow energy range between 0.1 and 1 GeV is surprising. The emission at these two energies is most probably produced by the same mechanism, as indicated by the absence of any pronounced break in the spectral energy distribution. It is thus not clear what effect might "erase" the orbital periodicity as the energy increases. The emission comes from an extended source that may have different spatial components, say the head and the tail of the compact pulsar wind nebula. Particle acceleration conditions in these components may differ slightly, so that their fractional contributions to the overall flux would be energy dependent. It is therefore possible that the change in the periodicity is explained by the different relative contributions of the different emission regions at different energies. For example, the head of the compact pulsar wind nebula may reach its maximum emission at the periastron and provide the dominant contribution to the 100 MeV flux. The flux from the tail of the nebula, in turn, may depend on the characteristics of the Be star disk; in this case, the orbital phase of the maximum flux from the tail may change as a function of the disk size and orientation. If the tail component provides a sizeable contribution to the flux above 1 GeV, it would explain the appearance of variability with the beat period in this energy range.

§ CONCLUSIONS

In this paper, we report the energy dependence of the γ-ray variability of . We have shown that two different periods, P_1 = 26.932± 0.004 (stat)± 0.008 (syst) d and P_2 = 26.485 ± 0.004 (stat)± 0.007 (syst) d, are detected in two energy ranges, E>1 GeV and E<0.3 GeV. Within 1σ (stat+syst) uncertainties, the periods are consistent with the orbital period (P_2) and the orbital/superorbital beat period (P_1). We have discussed the possible origin of the observed change in the periodicity over a factor-of-ten change in energy. The presence of the maximum of the 0.1-0.3 GeV orbit-folded light curve at the periastron can be explained if this emission is produced via the synchrotron or IC mechanisms. The appearance in the 1-10 GeV light curve of a maximum at a phase that shifts cyclically on the superorbital timescale may point to the precession of one of the system components, for example the equatorial disk of the Be star. Emission from a part of the compact pulsar wind nebula that provides a sizeable contribution to the GeV-band flux may be affected by this precession. Alternatively, such variability can be connected to a gradual build-up and decay of the Be star's disk on the superorbital time scale.

§.§.§ Acknowledgements

DM is supported by DFG through the grant MA 7807/2-1 and DLR through the grant 50OR2104. The authors acknowledge support by the state of Baden-Württemberg through bwHPC. The research conducted in this publication was jointly funded by the Irish Research Council under the IRC Ulysses Scheme 2021 and ministères français de l'Europe et des affaires étrangères (MEAE) et de l'enseignement supérieur et de la recherche (MESR).

§.§.§ Data Availability

The data underlying this article will be shared on reasonable request to the corresponding authors.
http://arxiv.org/abs/2306.04114v1
20230607025509
Manga Rescreening with Interpretable Screentone Representation
[ "Minshan Xie", "Chengze Li", "Tien-Tsin Wong" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Manga Rescreening with Interpretable Screentone Representation Minshan Xie1 Chengze Li2 Tien-Tsin Wong1Corresponding author. 1 The Chinese University of Hong Kong 2 Caritas Institute of Higher Education {msxie, ttwong}@cse.cuhk.edu.hk [email protected] =============================================================================================================================================================================================================================== empty The process of adapting or repurposing manga pages is a time-consuming task that requires manga artists to manually work on every single screentoned region and apply new patterns to create novel screentones across multiple panels. To address this issue, we propose an automatic manga rescreening pipeline that aims to minimize the human effort involved in manga adaptation. Our pipeline automatically recognizes screentone regions and generates novel screentones with newly specified characteristics (e.g., intensity or type). Existing manga generation methods have limitations in understanding and synthesizing complex tone- or intensity-varying regions. To overcome these limitations, we propose a novel interpretable representation of screentones that disentangles their intensity and type features, enabling better recognition and synthesis of screentones. This interpretable screentone representation reduces ambiguity in recognizing intensity-varying regions and provides fine-grained controls during screentone synthesis by decoupling and anchoring the type or the intensity feature. Our proposed method is demonstrated to be effective and convenient through various experiments, showcasing the superiority of the newly proposed pipeline with the interpretable screentone representations. § INTRODUCTION Japanese manga is a worldwide entertainment art form. It is unique in its region-filling with specially designed bitonal patterns, called screentones, to enrich its visual contents. During manga production, artists will carefully select these screentones considering both intensity and pattern type to express various shading effects <cit.>. The screentone intensity, similarly as the luminance channel in color images, plays the key role to render and represent shading among different regions, while the screentone type is an alternative to the chrominance channel in color images to differentiate semantics over regions. However, once the manga is produced, this screening process is hard to be reverted for editing or adaptation to other types of medium, such as e-readers, Webtoons, etc. Even if the extraction and editing of ink line drawings are easy, the adaptation of screentones is exceptionally challenging, as simple pixel-level transformation of screentoned area tend to break the intended visual delivery by the artists, such as the fading effects or region contrast. As a result, it usually requires a complete rescreening of the whole manga frame. In such a process, the artists have to manually identify and amend the regions for correction (Fig. <ref>(b)). The overall process is tedious and labor-intensive, as one has to handle diverse cases of screentone variation in both screentone intensity and type. To save time and cost, computer techniques are considered to be employed to ease the rescreening process, but the bitonal and discrete screentones hinder both manga segmentation and screentone synthesis. 
To segment the region filled with the same screentone, recent approaches <cit.> focus on the discovery of efficient texture descriptors, like Gabor filter banks <cit.>. However, these methods cannot tackle intensity-varying screentones (background in Fig. <ref>(a)) since they did not disentangle the type feature from the intensity information. For screentone synthesis and manga generation, several attempts are proposed to produce screened manga <cit.>. But, they either generated more or less uniform patterns or failed to generate intensity-varying effects. Recently, Xie et al. <cit.> proposed a representation for screentone, which supports both region discrimination and screentone synthesis. However, since intensity and type features are unrecognizable in their model, it limits the user to identifying screentone regions with varying intensity and synthesizing screentones with given intensity or type. As can be observed in Fig. <ref>, manga artists commonly use the same type of screentone, sometimes with intensity variation, to fill the same semantic region, which requires segmenting regions based on screentone type. In this paper, we propose a framework to implement intuitive and convenient manga rescreening. The framework can recognize semantic continuous regions based on the screentone type, and manages to enable individual tuning of screentone type or intensity while maintaining the other one. The framework starts by learning an interpretable screentone representation with disentangled intensity and type features. The intensity feature is expected to visually conform to the tone/shading intensity of manga, and the type feature should have the same representation over the same screentone type. Such a representation allows discrimination of the same screentone type with varying intensity and also precise control on screentone generation given a specific tone intensity and type. To disentangle the intensity and type features, we propose to encode the latent domain with intensity as one independent axis. Yet, we observed that the diversity of screentone types is correlated with the intensity, where darker or lighter tone intensity shall gradually suppress the tone diversity. Thus, we propose to model the domain as a hypersphere space, and screentones with the same intensity are encoded as a normal distribution with standard deviation conditioned on the intensity. The disentangled representation greatly benefits region semantic understanding and segmentation of manga. While all existing texture descriptors fail in distinguishing a semantic consistent screentone region with varying intensities, the proposed representation manages to catch this consistency to produce better segmentations (Fig. <ref>(b)). With manga regions segmented, our framework can further synthesize screentones with any specified effects by editing the intensity or type features. Our method can generate various screentones preserving the original intensity variation (Fig. <ref>(c)) or the original screentone types (Fig. <ref>(d)). To validate the effectiveness of our method, we apply our method in various real-world cases and receive convincing results. We conclude our contributions as follows. * We propose a practical manga rescreening method with an interpretable screentone representation to enable manga segmentation and user-expected screentone generation. * We disentangle the intensity information from the screentones by modeling a hypersphere space with intensity as the major axis. 
§ RELATED WORK Manga Segmentation Existing manga segmentation approaches focus on the discovery of efficient texture descriptors as screentones are laid over regions rather than individual pixels. Traditional texture analysis usually analyzes the texture by first applying filtering techniques and then representing the textures with statistical models <cit.>. Gabor Wavelet features <cit.> has been demonstrated as an effective texture discrimination technique for screentones <cit.> and utilized for measuring pattern-continuity in various manga tasks <cit.>. However, all the above texture features are windowed-based and usually fail at boundary and thin structures. Convolutional Neural Network (CNN) has also been shown to be suitable for texture analysis, in which trainable filter banks make an excellent tool in the analysis of repetitive texture patterns <cit.>. But these methods are tailored for extracting texture features in natural images and usually fail for screened manga. Recently, Xie et al. <cit.> attempted to identify regions in manga by mapping screentones into an interpolative representation with a Screentone Variational AutoEncoder (ScreenVAE). The model features a vast screentone encoding space, which generates similar representations for similar screentones. However, it cannot handle screentone regions of varying intensity. In comparison, we disentangle intensity feature from type feature and can identify the same screen type without being confused by the intensity variation. Manga Generation Traditional manga generation shades grayscale/color images to produce screened manga through halftoning <cit.> or hatching <cit.>. But these methods reproduce the intensity with more or less uniform patterns and cannot satisfy the requirement for manga which uses a rich set of screentone types to enrich the viewing experience. Qu et al. <cit.> screened color images by selecting various screentones considering tone similarity, texture similarity, and chromaticity distinguishability. However, with the limited screentone set, it may not generate screentones with user-expected intensity and types, even generate smooth transitions with the same screentone type. ScreenVAE <cit.> produced screened manga with an interpolative screentone representation. However, since intensity and type features are unrecognizable in their model, it limits the user to synthesizing new types of screentone that retain special effects. In comparison, we propose an interpretable representation disentangling intensity and type of screentones, which enables controllable generation and manipulation on both intensity and type. Representation Learning and Disentanglement Variational AutoEncoders (VAE) <cit.> and Generative Adversarial Networks (GAN) <cit.> are two of the most popular frameworks for representation learning. VAE learns a bidirectional mapping between complex data distribution and a much simpler prior distribution while GAN learns a unidirectional mapping that only allows sampling of data distribution. Disentanglement is a useful property in representation learning which increases the interpretability of the latent space, connecting certain parts of the latent representation to semantic factors, which would enable a more controllable and interactive generation process. InfoGAN <cit.> disentangles latent representation by encouraging the mutual information between the observation and a small subset of the latent variables. 
β-VAE <cit.> and some follow-up studies <cit.> introduced various extra constraints and properties on the prior distribution. However, the above disentanglement is implicit. Though the model separates the latent space into subparts, we cannot define their meanings beforehand. On the contrary, some approaches aim at controllable image generation with explicit disentanglement <cit.>. Disentangled Sequential Autoencoder <cit.> learns the latent representation of high dimensional sequential data, such as video or audio, by splitting it into static and dynamic parts with a partially time-invariant encoder. StyleGAN <cit.> defined the meanings of different parts of the latent representation by the model structure, making the generation controllable and more precise. In this paper, we attempt to disentangle the type and intensity information in the latent representation, so that the screentone generation can be controllable with type and intensity. § OVERVIEW As highlighted in Sec. <ref>, the key to intuitive user-friendly rescreening is to train an interpretable screentone representation that explicitly encodes the complex screentone intensity and type features into a low-frequency latent space. The orthogonal encoding of screentone type and intensity empowers the model to recognize screentone type similarity or equivalence without being confused by the variation of screentone intensity, and vice versa. More importantly, the representation is invertible, so that one can easily modify the low-frequency representation to generate new screentones with desired screen type or intensity. To enable this controllable and user-friendly encoding, we build the whole framework upon a disentangled VAE model. Different from ScreenVAE <cit.>, we explicitly enforce one of the feature descriptors to encode the intensity of manga, and the screentones with the same intensity are mapped to space following a normal distribution. We will describe our interpretable representation in Sec. <ref> in more detail. With the screentone representation learned, we propose our manga rescreening framework in two stages (detailed in Sec. <ref>). In the first stage, identifies semantic continuous regions based on the screen type similarity and segments them with a Gaussian mixture model (GMM) analysis (Fig. <ref>(a)). With the region segmented, for each screentone region, the users can directly edit the latent representation to alter either the screentone intensity (Fig. <ref>(b) upper branch) or the screentone type (Fig. <ref>(b) lower branch). § INTERPRETABLE SCREENTONE REPRESENTATION Fig. <ref> illustrates our framework for learning interpretable screentone representation, which consists of two jointly trained networks, an encoding neural network E and a decoding neural network D. The encoder E converts any screened manga X∈ℝ^1× H × W to a latent representation S∈ℝ^K× H × W, where K is the dimensionality of each latent embedding vector. On the contrary, the decoder D converts any latent representation to its original screened appearance X̃. The latent representation is defined as 4 dimensions (K=4) including one dimension for the intensity of screentone S_ itn∈ℝ^1× H × W and three for the type of screentone S_ scr∈ℝ^3× H × W. Besides, variational inference <cit.> is employed to ensure the latent representation to be interpolative. 
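A minimal sketch of how such a latent head could look is given below. The backbone encoder, the decoder, and the training losses are omitted; the 1+3 channel split and the VAE-style reparameterization follow the description above, while the sigmoid on the intensity channel, the 1×1 convolutions, the layer sizes, and all names (e.g. ScreentoneHead) are our own assumptions rather than the authors' code. The final scaling anticipates the intensity-aware hypersphere mapping introduced in the next subsection.

```python
import math
import torch
import torch.nn as nn

class ScreentoneHead(nn.Module):
    """Illustrative head mapping encoder features to the (S_itn, S_scr) latent described above."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.to_itn = nn.Conv2d(feat_ch, 1, kernel_size=1)       # 1 intensity channel
        self.to_mu = nn.Conv2d(feat_ch, 3, kernel_size=1)        # 3 type channels (mean)
        self.to_logvar = nn.Conv2d(feat_ch, 3, kernel_size=1)    # 3 type channels (log-variance)

    def forward(self, feat):
        s_itn = torch.sigmoid(self.to_itn(feat))                 # intensity assumed in [0, 1]
        mu, logvar = self.to_mu(feat), self.to_logvar(feat)
        s_scr = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        # intensity-conditioned scaling of the type feature (IAHM, next subsection)
        s_scr = torch.sin(math.pi * s_itn) * s_scr
        return s_itn, s_scr, mu, logvar

# usage sketch: feat = encoder(x); s_itn, s_scr, mu, logvar = ScreentoneHead()(feat)
```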
The encoder E adopts a 3-level downscaling-upscaling structure with 6 residual blocks <cit.> and the decoder D adopts a 7-level U-net structure <cit.> with strided deconvolution operations to generate screentones of different scales. To improve the generalization and fully disentangle the intensity, we impose an extra path by introducing a random intensity map. A random latent representation is generated by combining the encoded latent type feature S_ scr and random intensity map S_ ritn. Given the random latent representation S_r, the decoder D is expected to generate a realistic image X̃_̃r̃, and the reconstructed latent representation S̃_̃r̃ generated by the encoder E should also be as similar as the given latent representation S_r. §.§ Intensity-Aware Hypersphere Modeling We observed that the diversity of screentones with different intensities has the following properties. Firstly, the domain conditioned on intensity should be a symmetric space, as each pattern can transform into a new pattern with opposite intensity by inverting the black and white pixels. Second, the screentone diversity is correlated with its intensity. The diversity gradually decreases when annihilated into a pure black or pure white pattern from 50% intensity, as shown in Fig. <ref>(b). In particular, when the intensity is 100% or 0%, there is no variation of screentones. Considering the above properties, we propose to model the domain as a hypersphere with intensity as one independent axis, as illustrated in Fig. <ref>(a). Intensity-Aware Hypersphere Mapping (IAHM) is proposed to achieve the modeling. Considering the diversity of the screen types is conditioned by the intensity information, we hereby model the types of screentones with the same intensity as a normal distribution 𝒩(0,r^2·I) instead of projecting all screentones to a standard normal distribution in high dimensions. Meanwhile, we force the embedding domain to be hyperspherical with constraints of r=f(S_ itn)∼sin(π× S_ itn). As we can have 1/r· S_ scr' ∼𝒩(0, I) with S_ scr' ∼𝒩(0,r^2·I), our Intensity-Aware Hypersphere Mapping (IAHM) is then defined as S_ scr' = r⊙ S_ scr, where r=sin(π× S_ itn) where ⊙ indicates point-wise multiplication. In particular, it is deterministic with r=0 at black and white intensity. The IAHM encoding substantially improves the usage efficiency of latent space, which helps to avoid the model bias due to imbalanced varieties of different intensities. As shown in Fig. <ref>, the model without IAHM will not be able to generate screentones with darker or lighter tone intensity. Specifically, for some screentones with darker or lighter intensity, although they may be quite similar at the pixel level, they may have a large distance in the latent space without the proposed IAHM, which will make the model fail to learn these screentones. §.§ Loss Function Our model is trained with the loss function defined in Equ.<ref>, consisting of reconstruction loss ℒ_ rec, adversarial loss ℒ_ adv, intensity loss ℒ_ itn, KL divergence loss ℒ_ kl, feature consistency loss ℒ_ fcons, and feature reconstruction loss ℒ_ frec. The objective is formulated as [ ℒ = λ_ recℒ_ rec + λ_ advℒ_ adv + λ_ itnℒ_ itn; + λ_ klℒ_ kl + λ_ fconsℒ_ fcons + λ_ frecℒ_ frec ] where the hyper-parameters λ_ rec = 10, λ_ adv = 1, λ_ itn = 5, λ_ kl = 1, λ_ fcons = 20, and λ_ frec = 1 are set empirically. Reconstruction loss. The reconstruction loss ℒ_ rec ensures that the reconstructed manga X̃ is as similar to the input X as possible, formulated in pixel level. 
The reconstruction loss is defined as ℒ_ rec=X̃-X_2^2 where ·_2 denotes the L_2 norm. Adversarial loss. The adversarial loss ℒ_ adv encourages the decoder to generate manga with clear and visually pleasant screentones. We treat our model as a generator and define an extra discriminator D_ m with 4 strided downscaling blocks to judge whether the input image is generated or not <cit.>. Given the input image X and the reconstructed image X̃ and X̃_̃r̃, we formulate the adversarial loss as ℒ_ adv=log(D_ m(X)) + log(1-D_ m(X̃))+ log(1-D_ m(X̃_̃r̃)). Intensity loss. The intensity loss ℒ_ itn ensures the generated intensity map S_ itn visually conforms to the intensity I_ itn of the manga image. We formulate it as ℒ_ itn = S_ itn-I_ itn_2^2. KL divergence loss. The KL divergence loss ℒ_ kl ensures that the statistics of the type feature S_ scr are normally distributed. Given an input manga X and the encoded representation μ and σ, we compute the KL divergence loss as the summed Kullback-Leibler divergence <cit.> of μ and σ over the standard normal distribution 𝒩(0,𝐈). [ ℒ_ kl =KL(𝒩(μ,σ),𝒩(0,𝐈)); =1/2∑(σ^2+μ^2-log(σ^2)-1) ] Here, KL(·,·) denotes the KL divergence between two probability distributions. Feature consistency loss. The feature consistency loss ℒ_ fcons ensures that the type feature S_ scr can summarize the local texture characteristics in the input X <cit.>. Given the type feature S_ scr and its label map I_ lb which labels the screentone of each pixel, we encourage a uniform region representation within the region filled with the same screentone <cit.>. The feature consistency loss is formulated as ℒ_ fcons=w_l · (S_ scr- Avg(S_ scr,I_ lb))_2^2 where w_l is the binary mask to filter out structure lines (0 for structural lines). Avg(S_ scr, I_ lb) is a map in which each pixel is replaced by the average value of the corresponding region indexed by I_ lb in the representation S_ scr. Feature reconstruction loss. We find that intensity information may still entangle with type feature when only the above losses are imposed. To disentangle intensity feature from type feature, we encourage the type feature can still be reconstructed through the extra path with random intensity map. The feature reconstruction loss ℒ_ frec is measured as the pixel-wise difference between the random representation S_r and the reconstructed representation S̃_̃r̃. ℒ_ frec = S̃_̃r̃-S_r_2^2 § EXPERIMENTS §.§ Implementation Details Data Preparation. We use two types of data to train our model, including synthetic manga data and real manga data. For the synthetic manga data, we manually collected 100 types of screentones and generated their tone inverse by swapping the black and white pixels. We then collected 500 line arts and generated synthetic manga following Li et al. <cit.> which randomly choose and place screentones in each closed region. We synthesized 5,000 manga images of resolution 2,048× 1,536, together with the intensity and the screentone type labels of each pixel. Intensity maps and label maps are used to calculate intensity loss ℒ_ itn and feature consistency loss ℒ_ fcons, respectively. For the real manga data, we manually collected 5,000 screened manga of resolution 2,048× 1,536 to train our model. For each screened manga, we extract the structural lines using the line extraction model <cit.> and the intensity maps using total-variation-based smoothing <cit.>. 
Note that we do not label their ground truth screen type, so the feature consistency loss ℒ_ fcons is not calculated for this portion of data. Training. We trained the model with PyTorch <cit.>. The network weights are initialized with <cit.>. During training, we empirically set parameters as λ_ rec = 10, λ_ adv = 1, λ_ itn = 5, λ_ kl = 1, λ_ fcons = 20 and λ_ frec = 1. Adam solver <cit.> is used with a batch size of 8 and an initial learning rate of 0.0001. All image pairs are cropped to 256× 256 before feeding to the networks. Considering the bias problem of real data, the whole model is first trained with synthetic data to obtain a stable latent space. Then, both synthetic and real data are imposed for training to improve generalization. §.§ The Rescreening Pipeline With the learned representation of screentones, users can easily edit the screentones in manga by segmenting the screentone region and generating new screentones. Manga Segmentation. To perform automatic manga segmentation, we adopt the Gaussian mixture model (GMM) analysis <cit.> on the proposed representation. The optimal number of clusters is selected by the silhouette coefficient <cit.>, a clustering validation metric. As the screentone types in a single page of manga are usually limited for visual comfort, we find the optimal number of clusters ranges from 1 to 10. We illustrate the consistency of the encoded screen type feature for regular patterns (dot pattern in Fig. <ref>) and noisy patterns (first row in Fig. <ref>). All features are visualized after principal component analysis (PCA) <cit.>. Controllable Manga Generation. After segmentation, each segmented region can be rescreened by modifying its latent and decoding the latent back to screentones. Users can either modify the screentone type while preserving the original intensity variation (Fig. <ref>(e)), or the intensity feature to change the intensity (Fig. <ref>(f)). Interestingly, the editing of the intensity is very flexible. The artist can either replace an intensity-varying region with an intensity-constant one, or preserve the visual effects by increasing/decreasing the overall tone intensity proportionally, as shown in the hair region in Fig.<ref>(f)). §.§ Qualitative Evaluation As there is no method tailored for manga rescreening, we compare with approaches on manga segmentation and controllable manga generation respectively. Comparison on Manga Segmentation. In Fig. <ref>, we visually compare our method with the classic Gabor wavelet texture analysis <cit.> and the learning-based ScreenVAE model <cit.>. All results are visualized by considering the three major components as color values. We also apply the same segmentation to each feature by measuring texture similarity. As observed, all features have the capability of summarizing the texture characteristics in a local region and can distinguish different types of screentones. However, Gabor wavelet feature exhibits severe artifacts near region boundaries with blurry double edges due to its window-based analysis. ScreenVAE map can have tight boundaries towards structural lines, but it failed to segment regions with varying intensity (second row in Fig. <ref>). In contrast, our representation explicitly encodes the intensity information and can disentangle the types. We can generate a consistent type feature for varying intensity regions. Comparison on Controllable Manga Generation. 
Manga artists commonly use smoothly changing screentones to express shading or atmosphere, such as the hair of the third example in Fig. <ref> and the background in Fig. <ref>, which requires the intensity feature to be interpolated in tone. With our design, our representation can provide controllable generation and manipulation on both the intensity and types of screentones. Fig. <ref> (a) shows image synthesis by replacing the screentone type features of <cit.> and our method. Note that although <cit.> can also generate various screentones, it is not providing stable controls of screentones, as the screentone intensity and type are non-linearly coupled in their model. As observed, the screentone types are disentangled with intensity in our model, and we can generate screentones with expected types while preserving the original intensity during rescreening. §.§ Quantitative Evaluation Besides visual comparison, we also quantitatively evaluate the performance of our interpretable representation. In this evaluation, we mainly compare with Gabor Wavelet <cit.> and ScreenVAE <cit.>. The evaluation is on four aspects: i) screentone summarization to measure the standard deviation within each screentone region; ii) screentone distinguishability to measure the standard deviation among different screentone regions; iii) intensity accuracy to measure the difference (Mean Absolute Error) between the generated intensity map and the target intensity map; iv) reconstruction accuracy to measure the difference (LPIPS <cit.>) between the input image and the reconstructed image. Table <ref> lists the evaluation results for all methods. Although Gabor Wavelet feature <cit.> obtains the lowest value of summarization score, it got a low score on distinguishability, which means it is difficult to distinguish different screentones. ScreenVAE <cit.> and ours achieve better performance for both screentone summarization and distinguishability. Furthermore, Gabor Wavelet feature <cit.> cannot reconstruct the original screentones, while the other two methods can be used to synthesize the screentones. More importantly, our method explicitly extracts the intensity of manga, which is critical for practical manga parsing and editing. §.§ Ablation Study To study the effectiveness of individual loss terms, we performed ablation studies for each module by visually comparing the generated latent representation and the reconstructed image, as shown in Fig. <ref>. Note that for better visualization, we did not normalize the type feature here. In addition, there is no loss configuration without reconstruction loss ℒ_ rec or intensity loss ℒ_ itn. The reconstruction loss is necessary to preserve information in latent space, while our model will degrade to the original ScreenVAE <cit.> without the intensity loss. Without adversarial loss ℒ_ adv, blurry and noisy screentones may be generated (background in the bottom row of Fig. <ref>(b)). Without KL convergence loss ℒ_ kl, the latent space is not normally distributed and balanced representations may not be generated, which will fail the segmentation (top row of Fig. <ref>(c)). Without feature consistency loss ℒ_ fcons, the network may not recognize multiscale screentones, and may generate inconsistent representation for the same screentone (top row of Fig. <ref>(d)). Without feature reconstruction loss ℒ_ frec, inconsistent screentones may be generated among local regions (background in the bottom row of Fig. <ref>(e)). 
In comparison, the combined loss helps the network recognize both the type and the intensity of screentones and generate a consistent appearance (Fig. <ref>(f)).

§.§ Limitations

While our method can manipulate manga by segmenting screentone regions and generating screentones with the desired screen type or intensity, it may fail to extract some tiny screentone regions, so extra user hints may be required to produce good results (hair in Fig. <ref>(b)). In addition, some structural lines that are not extracted may be misrecognized as strip screentones (background in Fig. <ref>(b)).

§ CONCLUSION

We propose an automatic yet controllable approach for rescreening regions in manga with user-expected screen types or intensity, which frees manga artists from the tedious manual rescreening process. In particular, we learn a screentone representation that disentangles the type and intensity of screentones, which facilitates region discrimination and controllable screentone generation. With this interpretable representation, we can segment different screentone regions by measuring feature similarity, and users can generate new screentones in a controllable way to simulate various special effects.
http://arxiv.org/abs/2306.06332v1
20230610020845
Asymptotic entangled states from the dissipative interaction of two charged fields
[ "R. Cartas-Fuentevilla", "O. Cruz-Limón", "C. Ramírez-Romero" ]
math-ph
[ "math-ph", "math.MP", "quant-ph" ]
Asymptotic entangled states from the dissipative interaction of two charged fields R. Cartas-Fuentevilla, O. Cruz-Limón* and C. Ramírez-Romero* Instituto de Física, Universidad Autónoma de Puebla, Apartado postal J-48 72570 Puebla Pue., México. *Facultad de Ciencias Físico Matemáticas, Benemérita Universidad Autónoma de Puebla. P.O. Box 165, 72570 Puebla, México. ABSTRACT: We construct a field theory for dissipative systems using a hypercomplex ring formalism that reproduces naturally the effective doubling field formulation of dissipative systems of thermal field theory. The system is quantized by a noncanonical ansatz that gives the unitarity of the system is conserved. Asymptotically entangled states are constructed to introduce the study of the ergodicity of systems that undergo entanglement. KEYWORDS: hyperbolic symmetries; quantum dissipation; ergodic theory; entanglement states. § INTRODUCTION Quantum mechanics has the very particular strange feature that is also surprising, entanglement. This phenomenon is present in various areas as quantum optics <cit.>, quantum field theory (QFT) <cit.>, AdS/CFT correspondence <cit.>, and it is an essential part for the development of quantum information theory, and for the emergence of new technologies related to it. For instance, in <cit.> the authors develop new techniques to entangle disparate electromagnetic fields that range from microwave radiation to optical beams; if this entanglement was achieved, it would be closer to solving the problem that arises when entanglement is shared between two separate quantum computers. Another method that has been used to generate entanglement is to use the non-local Seebeck effect <cit.>, which consists of generating a thermoelectric current through a temperature difference by using a quantum splitter, and it is found that certain processes such as Copper pair splitting (CPS) and elastic co-tunneling (EC) can contribute to this current, achieving a tuning between CPS and EC, which allows us testing fundamental theories related to entanglement and heat transport in graphene systems. On the other hand, in addition of being able to generate the entanglement it is important to measure it; for example in <cit.>, the authors propose a test for entanglement in scattering of chemical reactions. The entanglement has been generated artificially, but in most cases it is generated naturally, often being an unwanted effect. A natural way in which entanglement is generated is as an effect of dissipation, which causes that populations of the quantum states are modified due to the interaction with the environment, which in turn causes a phase shift <cit.>; these two actions correspond to the result of the intertwining between the degrees of freedom of the environment and the system of interest; in a system with these characteristics, the unitarity of the dynamics is lost <cit.>. On the other hand, any small alteration that interferes with the unitarity of the quantum evolution is undesirable, since quantum states lose their coherence, that is, quantum decoherence arises. Decoherence is an effect that cannot be avoided, it arises when a quantum system is coupled to its environment; this phenomenon is often undesirable, since it causes the information of the system of interest to be lost. But not everything is bad with decoherence, there is also some interest in this phenomenon, as discussed in <cit.>. 
For a long time, it has been of interest to study the quantum dynamics of a particle that is coupled with its environment, and several methods have been developed. A well-known and widely used method is the doubling of degrees of freedom; in <cit.> the authors make use of this duplication to review the general features of a dissipative quantum model of the brain and discuss how QFT phase correlations and entanglement are achieved for modeling functional brain activity. On the other hand, for systems coupled to an environment, the thermodynamic descriptions assume that any system of any finite size, being in contact with a thermal reservoir with a temperature T, will reach a state of thermal equilibrium after a period of time, which will maintain the same temperature T. When there are dissipative processes that occur at very long times, it is normal to think that this thermalization occurs; however, this is not always the case. On the one hand, it is well known that in coherent quantum mechanics, there are time-periodic modulations that can lead to non-equilibrium asymptotic states, which are known as Floquet states. On the other hand, there are cycle-stationary states, periodic states in time; the latter introduce the ergodic theory to study dissipative systems <cit.>. This work is organized as follows: in section <ref> we describe the formalism of the ring of hypercomplex numbers, essential objects in the construction of our dissipative model. In section <ref> we start by generalizing a Lagrangian consistent with the thermal field dynamics formalism. This generalization allows us to incorporate charge to the hyperbolic field theory realized in <cit.>. In section <ref> we give a quantization ansatz that is not a canonical one, and in turn, the new commutation relations obtained will allow to eliminate dissipative factors that are not desired in dissipative theories <cit.> <cit.>. In section <ref> we construct the Hamiltonian operator and we consider specific geometries for the total system as examples; we also consider the charge operator, from which a new hypercomplex description for the Noether charge will emerge due to the dissipation. Finally, we construct the evolution operator, giving way to the construction of the Matsubara formalism. For section <ref> a definition of the vacuum through certain idempotent basis in the hyperbolic ring is considered and the expected values for quantum observables are calculated. In section <ref>, the asymptotic entangled states are constructed considering that the geometry of the total system can be finite or infinite, which will lead us to consider the ergodic theory. Finally, in section <ref> we present the conclusions. This work is based on the reference <cit.>, in which the authors consider the Lagrangian formulation used in <cit.> but within the formalism of a pure hyperbolic ring. They present new commutation relations that reveal an underlying noncommutative field theory that arises naturally due to the algebraic structure of the hyperbolic ring; in turn, the hyperbolic rotations reveal an underlying internal symmetry for the dissipative dynamics. The development of the work at hand is through the steps of <cit.>, but now implementing an extension of the formalism, namely, the complex elliptic unit i is incorporated to the purely hyperbolic unit used in that work, obtaining thus a hypercomplex formalism. 
Contributions of the formalism at hand have opened the door to delve into other topics such as: the realization of holography through a grand partition function with duplication of fields, the relationship between the geometry of a dissipative system and ergodicity, the entanglement entropy and contributions of corrections to the Noether charge; these topics will be considered in future works. § THE HYPERCOMPLEX NUMBERS ℍ We begin by describing the formalism of hypercomplex numbers. In the formalism used in <cit.> the pure hyperbolic numbers have the form, ℙ≡{w=x+jy; j^2=1, j̅=-j; x,y ∈ℝ}, which can be generalized by promoting the real numbers in (<ref>) to ordinary complex numbers, that is, x → x+iy and y → u+iv, thus ξ =x+ iy + ju + ijv, ξ̅ =x - iy - ju + ijv; x,y,u,v ∈ℝ. We denote the ring of these numbers of four real components as ℍ; this will lead, at a quantum level, to the existence of four bosons, whose excited states will allow us to describe the entangled states between the system of interest and the environment. This new extended ring has the following properties for the complex units; for the hyperbolic unit we have j^2=1 and j=-j, for the standard complex unit i^2=-1 and i=-i, additionally we have a new complex unit, composed of a hybrid term ij, with the properties (ij)^2=-1, ij=ji and ij=ij. The ring ℍ has as subsets the pure hyperbolic numbers ℙ used in <cit.> and the usual complex numbers ℂ. The modulus of a hypercomplex number is given by <cit.>, ξξ̅ = x^2 + y^2 - u^2 - v^2 + 2ij(xv-yu). This expression is invariant under the group U(1) by the action ξ→ξ e^iθ and, under the group SO(1,1), which corresponds to hyperbolic rotations ξ→ξ e^jχ; hence (<ref>) is invariant under ξ→ξ e^j χ e^i θ, where the bi-complex phase can be expressed as, e^i α e^j β≡ e^i α + j β = cos(α)cosh(β) + i sin(α) cosh(β) + j cos(α) sinh(β) + ij sin(α) sinh(β). Furthermore, the idempotent elements corresponding to the usual complex numbers are well know, namely, 0 and 1; the additional idempotent elements of ℍ are, J^+ = 1/2(1+j), (J^+)^n=J^+; J^- = 1/2(1-j), (J^-)^n=J^-, n=1,2,3,...; and they will be constantly used; these elements work as orthogonal projectors; in addition, they eliminate to each other and are complex conjugates, J^+ J^- =0, (J^+)^∗ =J^-. Other properties of hyperbolic numbers that will be used in this work are detailed in section 2 in <cit.>. § THE DISSIPATIVE SYSTEM AND ITS LAGRANGIAN FORMALISM The basis of this work is the formulation of thermal fields that doubles the number of degrees of freedom <cit.>, in turn, this formulation is used in <cit.>, where the total system is represented by a real field Φ for the system of interest (A), and a second real field Ψ representing the thermal bath or environment (B); here we will continue with that representation for the subsystems. In this article we generalize the formulation in <cit.> by promoting the Φ and Ψ fields to charged fields, according to the generalization described in Eq.(<ref>). The starting point is the following Lagrangian that includes a mass term, which will undergo also dissipation that not was considered in <cit.>. 
The mass term has in general the form m_1^2 + ijm_2^2, but the corresponding dispersion relation will force m_2=0, see Eq.(<ref>), ℒ(Ω, Ω)=1/2∫ dx^d [ ∂_μΩ·∂^μΩ + γ/2 (jΩΩ̇ + c.c)- m^2ΩΩ ], where the field Ω was a pure hyperbolic field of the form Ω=Φ + j Ψ, but in the case at hand, the field Ω will be generalized by a hypercomplex field, that is: Φ→ϕ_1+iϕ_2 and Ψ→ψ_1+iψ_2, then Ω takes the form, Ω =ϕ_1 + i ϕ_2 + jψ_1 + ij ψ_2; ϕ_1,ϕ_2,ψ_1,ψ_2∈ℝ. Since the Ω field has been generalized, the additional symmetry U(1) × SO(1,1) arises. Then the elements of the Lagrangian (<ref>) have the form (<ref>), ∂_μΩ·∂^μΩ = (∂_μ ϕ_1)^2 + (∂_μ ϕ_2)^2 - (∂_μ ψ_1)^2 - (∂_μ ψ_2)^2 + 2ij (∂_μ ϕ_1 ∂_μ ψ_2 - ∂_μ ϕ_2 ∂_μ ψ_1), similarly for the terms γ(jΩΩ̇+ c.c) and (m^2 ΩΩ). The hybrid term of the form ij in (<ref>) can be understood as an interacting term between the two charges fields <cit.>. On the other hand, using the idempotent bases (<ref>), the field (<ref>) can be expressed as, Ω=J^+Ω^+ +J^-Ω^-, where, Ω^+ =(Φ+Ψ), Ω^- = (Φ-Ψ); Φ=ϕ_1 + i ϕ_2, Ψ=ψ_1 + i ψ_2, which are standard complex fields. When the equations of motion are calculated in the basis (J^+, J^-), the correspondence of each equation of motion in such a basis is notorious, namely, one for the system of interest (J^+) and, the other for the environment (J^-); this is due to the annihilation property (<ref>). First we make the variation with respect to Ω, obtaining: ∂_μ ∂^μΩ + jγ ∂_t Ω + m^2 Ω =0; when the variation is made with respect to Ω, the conjugate of the Eq.(<ref>) is obtained. Thus, we have the equation of motion for the entire system, J^+[∂_μ∂^μΩ^+ +γ∂_t Ω^+ + m^2Ω^+] + J^- [ ∂_μ∂^μΩ^- - γ∂_t Ω^- + m^2Ω^- ] =0. We have explicitly obtained the corresponding part for the subsystem of interest that is accompanied by the idempotent J^+, and the part of the environment identified with J^-; the last one has a negative sign in the dissipative term γ, indicating the time inversion, besides that it is the mirror copy of the system of interest <cit.>. In <cit.> the authors constructed a solution, which contains real exponentials, then they analytically extend that solution in the hyperbolic complex plane; it can be also be complexified to the standard complex plane, and in both cases the relative sign between ω^2_k and k^2 is not altered in the dispersion relation; however, there is a sign change in the dissipative term γ^2, being negative (-γ^2) for the standard scheme and positive (+γ^2) for the scheme hyperbolic. Here we have the two imaginary units in this scheme, the product ij will appear in our solution and as we will see, the sign for the dissipative term will be negative as in the standard scheme. Thus, the formal solution for the Eq.(<ref>) has the form, η(x,t)=ae^i p_1x^μ e^jp_2x^ν + be^-ip_1x^μ e^-jp_2x^ν, where a,b are arbitrary coefficients, and ω_1,2 and k_1,2 are real parameters. Whit this solution we extends ω→ω_1 + i ω_2 and k → k_1 + i k_2; however, we can always find, through a Lorentz rotation, a system where we have k_2=0, that is, the spatial imaginary part for k vanishes; hence we are only concerned with the problem of temporal dissipation. Using the property e^j χ=e^χJ^+ + e^-χJ^-, and the mentioned criterion, we have that (<ref>) can be rewritten as [This solution can also be obtained by proposing a real exponential, Ω(x⃗,t)=e^ω t - k⃗·x⃗, and promoting (ω , k⃗) ∈ℍ, and considering restrictions for obtaining convergent solutions.], η(x, t)= a e^[(α +iω_1)t - ik·x] J^+ + b e^[(α - iω_1t) + ik·x] J^-. 
With this, we obtain the solutions for the equation of motion (<ref>), Ω^+=_1 e^[(Γ_1 +i ω_1) t-i k_1·x]+ _1 e^[(Γ_1 -i ω_1) t+i k_1·x], Ω^-=_2 e^[(Γ_2+i ω_2) t-i k_2·x]+ _2 e^[(Γ_2-i ω_2) t+i k_2·x]; where (_1,2, _1,2) are arbitrary coefficients and the spectral parameters (ω, k) are real-valued. We will first calculate the solution for the part that corresponds to the system of interest, that is, the solution corresponding to J^+ in (<ref>). Especifically we can to determine of J^± projections of the Eq.(<ref>) and to use the property (<ref>); taking the solution Ω^+ of (<ref>) and performing the substitution into the equation of motion we obtain, e^[(Γ_1 + i ω_1)t - ik_1·x][ Γ_1^2 - ω_1^2 + k_1^2 + γΓ_1 + m^2 + i ω_1( 2 Γ_1 + γ) ] + e^[(Γ_1 - i ω_1)t + ik_1·x][Γ_1^2 - ω_1^2 + k_1^2 + γΓ_1 +m ^2 - i ω_1(2 Γ_1 + γ)] =0. A similar procedure is done for the solution Ω^-. From the imaginary part of these expressions, we obtain the dissipative coefficients, Γ_1=-γ/2, Γ_2=γ/2; and from the real part, we obtain the dispersion relations, ω_1,2=±√(k_1,2^2 + m^2 -γ^2/4_modified mass). It is necessary to notice some aspects of the frequencies of the system; the frequencies are real (m^2 - γ^2/4) ≥ 0 and therefore there is not an IR cut-off, for the case when (m^2 - γ^2/4) < 0, we take the positive values of the radicand[Otherwise, the radicand could take negative values and possibly lead to imaginary frequencies, which would imply superluminal speeds. In this work we only considered the real frequencies] in (<ref>), therefore k^2 ≥ -(m^2 - γ^2/4), obtaining an IR cut-off. Therefore, the general solution is represented as plane waves damped by a decaying factor e^-γ/2t for the system of interest and a growing factor e^γ/2t for the environment; this is in accordance with the rules of TFD, since in this context, the environment evolves in the reverse direction of time. Furthermore, we can write an arbitrary combination of solutions for the field Ω in terms of the (J^+,J^-) basis, Ω(x,t)=J^+ e^-γ/2 t[_1 e^i(ω_1-k_1·x)+ _2 e^-i(ω_1-k_1·x)]+J^- e^γ/2 t[ _1 e^i(ω_2-k_2·x)+ _2 e^-i(ω_2-k_2·x)], where _1,2, _1,2 are hypercomplex arbitrary coefficients; this combination will allow to construct the quantum fields in the next section. § QUANTUM FIELDS AND FIELD COMMUTATORS In the spectral decomposition used in <cit.> the authors considered to split the range of spectral parameters k into two parts, (-∞,0) ↔ J^- for the environment, and (0, +∞) ↔ J^+ for the system of interest; this is done to avoid the divergences that arise in the field operator commutators. In contrast to this, in the formulation at hand we can take the full range k∈ (-∞,+∞) since, due to the presence of the additional complex unit i, such divergences do not appear; therefore, the spectral decomposition for the field operator built with the solution (<ref>) reads, Ω(x,t)= { e^-γ/2 t J^+. ∫_-∞^∞[_1(k_1) e^i(ω_k_1 t-k_1·x)+_2^†(k_1) e^-i(ω_k_1 t-k_1·x)] d k_1 . + e^γ/2 t J^-∫_-∞^∞[_1^†(k_2) e^i(ω_k_2 t-k_2·x)+_2(k_2) e^-i(ω_k_2 t-k_2·x)] d k_2}. With this expression we can build the field commutator, [Ω̂(x,t), Ω̂^†(x^', t)] = ∫_-∞^∞ d k∫_-∞^∞ d k^'{J^+(e^i[(ω_k-ω_k') t-k·x+k^'·x^'][_1(k), _1(k')]. . . . . +e^i[(ω_k+ω_k') t - k·x - k' ·x^'][_1(k), _2^†(k')]... ... +e^-i[(ω_k +ω_k') t - k·x - k' ·x'][_2^†(k), _1(k')]... ..+e^i[(ω_k'-ω_k)t + k·x - k' ·x' ][_2^†(k), _2^†(k')]). .+J^-( e^-i[(ω_k-ω_k') t + k' ·x - k·x^'][_1^†(k'), _1^†(k)] .. 
+ e^-i[(ω_k'+ω_k)t - k' ·x - k·x^'][_2(k'), _1^†(k)] + e^i[(ω_k'+ω_k) t - k' ·x - k·x^'][_1^†(k'), _2(k)] ..+ e^-i[(ω_k'-ω_k) t - k' ·x + k·x'][_2(k'), _2(k)] )}. One of the major difficulties that appear in the study of dissipative systems in quantum mechanics that involve the duplication of fields, is that the commutation canonical relations are not preserved under the temporal evolution <cit.>. In the commutator (<ref>), this time dependence has been eliminated due to the terms that have the damping factors vanish, e^ -γ/2t (J^+ J^ -)[ , ^†]=e^+γ/2t ( J^+ J^-)[ , ^†] =0, due to the property (<ref>). On the other hand, there is also another elimination of the damping factors, since they have the form e^-γ/2J^+· e^+γ/2J^+. The rest of the field commutators have a similar structure and the same characteristics as the expression (<ref>), thus they do not have dissipative factors; however, there exists special commutator with dissipative factors, namely, [Ω̂^†(x,t), Π̂_Ω(x',t)] = ∫_-∞^∞ d k∫_-∞^∞ d k^'{e^γ t J^+(e^i[(ω_k'-ω_k) t + k·x-k^'·x^'][_1(k), _2^†(k')]. . ..-e^-i[(ω_k'-ω_k)t + k·x - k' ·x' ][_2^†(k), _1(k')]). .+ e^-γ t J^-( e^i[(ω_k'-ω_k) t - k' ·x' + k·x][_1^†(k), _2(k')] .. ..+ e^-i[(ω_k'-ω_k) t - k' ·x' + k·x][_2(k), _1^†(k')] )}. The expression (<ref>) contains the trivial canonical commutation relations [â^†_p, â^†_q]=[â_p,â_q]=0, and the nontrivial relations [â_p, â^†_q] and [b̂_p, b̂^†_q]; which do not vanish in general. However, in this work we will consider that the last commutators are vanishing [â_p,â^†_q]=[b̂_p,b̂^†_q]=0. The main reason is that, if we keep these commutators switched on, the damping coefficients e^±γ t do not disappear, implying divergences, which bring us back to the main problem in of the dissipative dynamics. On the other hand, assuming for a moment that we do not have divergence problems when one uses these canonical relations, the use of these commutators does not allow us to identify the entanglement suffered by the subsystems due to dissipation, since they are only describing the subsystems separately, throught the pure commutators for type-a bosons for the subsystem of interest, and pure commutators for type-b bosons for the environment. Thus, we have a very special feature of the formulation at hand, since it is essential to have the non-vanishing commutator [(k),(k')]≠ 0 in order to do not trivialize the theory. From (<ref>) and (<ref>) we need to propose the following commutation rules for the annihilation and creation operators, [ [_1,2(k), _1,2(k')]=ρ_i δ(k-k'); [_1,2^†(k'), _1,2^†(k)]=ρ_i δ(k-k'); [_1,2(k), _1,2^†(k')]= σ_j δ(k+k'); [_1,2(k'), _1,2^†(k)]= σ_j δ(k + k'),; [_1,2(k), _1,2^†(k')]=0,; [_1,2(k), _1,2^†(k')]=0; ], i,j=1,2,3,4; [a_1,b_1]→ρ_1, [a_1,b_2] →ρ_2, [a_2,b_1]→ρ_3, [a_2,b_2]→ρ_4, where ρ_i and σ_j are arbitrary elements in ℍ and in general depend on (k, k'); the column on the right hand side indicates the identification (the correspondence) between the indices for the commutators and the coefficients ρ. Now considering that, [Ω̂(x,t), Ω̂^†(x',t) ]= [Ω̂(x,t), Ω̂^†(x',t) ]^†_(x↔x')⇒ σ_j=0; that is, two commutation rules in (<ref>) vanish. Therefore, using the commutation relations (<ref>), the field commutator (<ref>) reduces to, [Ω(x, t), Ω^†(x^', t)] = (2 π)^n[J^+(ρ_1- ρ_4)+ J^-(ρ_1- ρ_4)] δ_n(x'-x). In <cit.> this field commutator diverges and convergence criteria are considered, by spliting the full spectral parameter (-∞, +∞) into two parts, as mentioned at the beginning of this section. 
Here the integration converges due to we have the additional complex unit i, which allows us to obtain the Dirac delta; this same field commutator in <cit.> has a more complicated mathematical expression; however, in both formulations, this commutator has the same physically acceptable asymptotic limits, lim_x'- x→ 0[Ω(x,t), Ω ^†(x',t)] = ∞ , lim_x'- x→∞[Ω(x,t), Ω ^†(x', t)] = 0. Note that the field commutator (<ref>) does not depend on either the modified mass or the dissipative parameter γ. Now we obtain the conjugate canonical moment from Eq.(<ref>), Π_Ω≡∂ℒ/∂Ω̇=Ω̇-j γ/2Ω; then we construct the conjugate momentum operator, Π_Ω(x, t) =- {e^γ/2 t J^+∫_-∞^∞ i ω_k_2[_1(k_2) e^-i(ω_k_2 t-k_2·x)-_2^†(k_2) e^i(ω_k_2 t-k_2·x)] d k_2. .+ e^-γ/2 t J^-∫_-∞^∞ i ω_k_1[_1^†(k_1) e^-i(ω_k_1 t-k_1· x)-_2(k_1) e^i(ω_k_1 t-k_1· x)] d k_1}, therefore, we have the following equal time field commutator: [Π_Ω(x, t), Π_Ω^†(x^', t)] = (2 π)^n[J^+(ρ_1- ρ_4)+ J^-(ρ_1- ρ_4)] [ δ_n” ( x'-x ) - (m^2 - γ^2/4) δ_n(x' - x)], where δ” is the second derivative of the delta function. Again the dependence on dissipative factors e^±γ/2t has been eliminated due to the condition (<ref>). In contrast to <cit.>, here we have the presence of the modified mass; in particular we have the following limits, lim_(m^2 - γ^2/4) → 0[Π_Ω(x, t), Π_Ω^†(x^', t)]= (2 π)^n[J^+(ρ_1- ρ_4)+ J^-(ρ_1- ρ_4)] δ_n” ( x'-x ). On the other hand, we have the following limit, lim_m → 0[Π_Ω(x, t), Π_Ω^†(x^', t)] = (2 π)^n[J^+(ρ_1- ρ_4)+ J^-(ρ_1- ρ_4)] (δ_n” ( x'-x) + γ^2/4δ_n(x' - x) ). In both formulations, the one developed in <cit.> and in the present work, the field commutator (<ref>) satisfies the same limits that we mentioned in equation (<ref>). Similarly, we construct another equal time commutator, [Ω(x, t), Π_Ω(x^', t)] = -i ∫_-∞^∞ω_k[(J^+ρ_1+ J^-ρ̅_1) e^i k·(x^'-x)+(J^+ρ̅_4-J^-ρ_4) e^-i k·(x^'-x)] d k. This field commutator is the only one (of the three ones in this formulation) that does not vanish in both, the standard scheme and in the present scheme. In the standard scheme this field commutator is simply a Dirac delta, without providing more information. The integral in the commutator (<ref>) can be solved both numerically and analytically, considering certain restrictions for each case. We solve this integral in one spatial dimension (or 1+1 background space-time), and due to the presence of the frequencies, we use the dispersion relation (<ref>) with (m^2 - γ^2/4) > 0, for obtaining, [Ω(x, t), Π_Ω(x^', t)]= - 2 i [J^+(ρ_1- ρ_4)+ J^-(ρ_1- ρ_4)] √(m^2 -γ^2/4)/(x' - x) K_1( | m^2 -γ^2/4 | (x' - x) ), where K_1(x) is the modified Bessel function. The integral (<ref>) does not converge in general. It can also be solved numerically when the constraint (m^2 - γ^2/4)<0 is considered. On the other hand, we can see in the Fig.<ref> the behavior of the commutator (<ref>), for the the following cases: it converges to zero as m_mod→ +∞ and the field commutator diverges as m_mod→ 0. Similar to the limit computed for the field commutator (<ref>), we also compute the limit when (m^2 - γ^2/4) → 0 for the field commutator (<ref>), which shows us that this limit diverges. On the other hand, we can leave the γ-parameter intact and see the following limits, lim_m → 0[Ω(x, t), Π_Ω(x^', t)] = [J^+(ρ_1- ρ_4)+ J^-(ρ_1- ρ_4)] |γ| K_1(γ^4/16 (x'-x), ) where we have the explicit dependence of the dissipative parameter γ. We also have the following limits, lim_m →∞[Ω(x, t), Π_Ω(x^', t)] = 0, lim_(m^2 - γ^2/4) →∞[Ω(x, t), Π_Ω(x^', t)] =0. 
Again, in both formulations, namely in <cit.> and in this work, the field commutator (<ref>) satisfies the limits (<ref>). We can see that the integrals defining all field commutators depend in general on the spatial dimension; moreover, the field commutators (<ref>) and (<ref>) have an explicit dependency on γ through the modified mass and the frecuencies ω_k; however, the field commutator (<ref>) does not depend on any of these quantities. § EVOLUTION OPERATOR AND THE CHARGE OPERATOR In <cit.>, the function G(k,k^';system) is introduced, and depends on the spatial configurations for the subsystem and the environment; this function is real, which leads to maintain a double integration. In the formulation at hand we obtain a similar function, also identified with the total geometry of the system; our function now contains the imaginary unit i, which allows us to simplify the Hamiltonian operator, as discussed in <cit.>; we will describe this simplification for specific geometries. We start with the classical Hamiltonian operator, H= ∫[ 2 Π_ΩΠ_Ω̅+ 1/2∂_iΩ∂_iΩ̅+j γ/2(Ω̅Π_Ω̅-ΩΠ_Ω) + (m^2 -γ^2/4) ΩΩ̅/2]dx^d; each term is a U(1)× SO(1,1)-invariant; and again, the dissipative factors e^±γ t do not appear due to property (<ref>). Thus the Hamiltonian operator is, ℋ̂(γ; t )= ∫_-∞^∞ d k∫_-∞^∞ d k' {H_γ G(k, k';γ;t) [J^+{_1 (k), _1 (k')} + J^-{_2 (k'), _2 (k)}] + h.c }, where the following complex functions have been defined, G(k,k';γ;t) ≡ e^i (ω_k - ω_k')t∫_systemcomplete e^-i x· (k - k') dx^n=e^i (ω_k - ω_k')t (k - k'), H_γ ≡ 2 ω_k'ω_k + 1/2k'·k + i γ/2 (ω_k' +ω_k)+ 1/2(m^2 - γ^2/4). In (<ref>) appears the conjugate of (<ref>) and (<ref>). The function (k-k') and its conjugate, are defined as integrals over the total system, and they will depende on the geometry. §.§ The Hamiltonian in (1+1) space-time In the expression (<ref>) there is not yet a defined geometry for the total system. Different geometries can be considered; some specific configurations are shown in Fig.<ref> (see <cit.> for other geometries). We focus first on the a)-geometry in the Fig.<ref>. Therefore, the integration interval for the momenta (<ref>) will be (-∞, ∞). Hence, the (k-k') functions will be reduced to Dirac deltas; thus the Hamiltonian operator takes the form, ℋ̂(m;γ;t )= ∫_-∞^∞ d k{ H_γ[J^+{_1(k), _1(k) } + J^-{_2(k), _2(k)}]+ h.c }. Due to the presence of the complex unit i the Hamiltonian operator could be reduced to a single k-integration, as we mentioned at the beginning of this section; however, we see that the Dirac delta only arises when we have an infinite total system. Later we will consider finite geometries, where the finite geometry of the total system will play an important role. The unitary evolution operator can be constructed as the exponential of the Hamiltonian operator by using the complex unit i or, as in the case of <cit.>, with the complex unit j. We can also think of constructing the evolution operator with a hypercomplex exponential, that is, containing the two complex units e^ij ℋ̂; however, we will use the exponential with the standard unit i, since the hyperbolic complex unit in the exponential is always implicit due to the bases J^+,-. 
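As an aside on the geometric factor Ĝ(k − k′) = ∫_system e^{-ix·(k-k′)} dx^n discussed above: only in the limit of an infinite total system does it become a Dirac delta and collapse the Hamiltonian to a single k-integration. The toy numerical sketch below is our own illustration (we take a symmetric interval of length L for simplicity, whereas the text uses the interval (L_2, L_1)); it shows the window 2 sin(qL/2)/q sharpening, with essentially fixed area 2π, as L grows.

import numpy as np

def geometry_window(q, L):
    """G_hat(q) = integral_{-L/2}^{L/2} exp(-i q x) dx = 2*sin(q*L/2)/q (equal to L at q = 0)."""
    q = np.asarray(q, dtype=float)
    out = np.full_like(q, L)
    nz = np.abs(q) > 1e-12
    out[nz] = 2.0 * np.sin(q[nz] * L / 2.0) / q[nz]
    return out

q = np.linspace(-5.0, 5.0, 4001)
dq = q[1] - q[0]
for L in (5.0, 20.0, 200.0):
    G = geometry_window(q, L)
    print(f"L = {L:6.1f}  peak height = {G.max():7.1f}  area ~ {np.sum(G) * dq:.3f}  (2*pi = {2*np.pi:.3f})")
# The peak height grows like L while the area stays near 2*pi: the window tends to 2*pi*delta(q).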
Thus our time evolution operator is, e^iĤt ≡ e^i t ∫ dk[ H_γ^+ ( J^+{_1(k),_1(k)} + J^-{_2(k),_2(k)}) + H_γ^- ( J^+{^†_2(k),^†_2(k)} + J^-{_1^†(k),_1^†(k)} )] =J^+ e^it ∫ dk[H_γ^+{_1(k),_1(k)} + H_γ^-{^†_2(k),^†_2(k)}] + J^- e^it ∫ dk[ H_γ+{_2(k),_2(k)} + H_γ^-{_1^†(k),_1^†(k)}], where the decomposition property e^j χ=e^χJ^+ + e^-χJ^-, and the annihilation property (<ref>), have been used. We can see that with this operator, the transition to the statistical case can be made. §.§ The charge operator The following conservation law follows from the invariance of the Lagrangian under the action of U(1) × SO(1,1), ∂_t(Ω̅Ω̇-ΩΩ̇+j γΩΩ)+∂_i(Ω∂^iΩ-Ω∂^iΩ)=0; with the charge density j_0=(ΩΩ̇-ΩΩ̇+j γΩΩ) and the current j^i=(Ω̅∂^iΩ-Ω∂^iΩ). Note the term j γΩΩ in the charge density, which is a new term that appears due to dissipation. Therefore, the charge Q associated with this hypercomplex current is given by the following space integral, Q=∫ j_0 dx^n=∫(ΩΠ_Ω-ΩΠ_Ω) dx^n; in terms of the four real components of the fields, the charge is: Q= ∫[ i (ϕ̇_2ϕ_1 - ϕ̇_1ϕ_2 + ψ̇_1ψ_2 - ψ̇_2ψ_1) + j (ψ̇_1ϕ_1 + ψ̇_2ϕ_2 - ϕ̇_1ψ_1 - ϕ̇_2ψ_2) ] dx^n, note that in the limit when ψ_1=ψ_2=0, the U(1)-charge is recovered, which is given by i(ϕ̇_̇2̇ϕ_1 - ϕ̇_̇1̇ϕ_2). Using the expressions (<ref>) and (<ref>) of the fields in terms of the , ^†, , ^† operators, the charge operator can be written as, Q̂(γ ; t) =-j ∫[{Ω, Π_Ω}j +c . c] d x^n =-2 i ∫_-∞^∞ω_k[ J^+{_1(k), _1(k)} + J^-{_1^†(k), _1^†(k)}. . -( J^+{_2^†(k), _2^†(k)} + J^-{_2(k), _2(k)}) ] d k. We can observe that the charge operator is anti-Hermitian Q̂^†=-Q̂. We remark the differences with respect to the results in <cit.>. First notice that in our expression (<ref>) we have a k-integral, since, once again, the imaginary unit i allows us to perform an integration; on the other hand, in our charge operator there are two additional anticommutators {_2^†, _2^†}, and {_2, _2}, which correspond to the copy-system. § THE VACUUM IN ℍ According to <cit.> and <cit.> a definition with linear expressions in operators of annihilation, leads to a trivial QFT. The vacuum will be defined as a coherent state for the following operators that involve two annihilation operators (the derivation of this definition of vacuum is extensive and can be seen in detail in <cit.>), J^+{_1(k),_1(k') }|0⟩ = J^+λ_1 |0⟩, J^-{_2(k'),_2(k) }|0⟩ =J^-λ_2 |0⟩ for all k, k' and λ_1,2 are elements of the ring ℍ. With subscript 1 we are representing the definition of the vacuum for the subsystem of interest and with subscript 2 for the environment. Now, considering the quadratic combination of the creation and annihilation operators in the observables (<ref>) and (<ref>) on the vacuum state, we have, J^-{_1(k),_1(k') }^†|0⟩ = J^-^†_1(k) ^†_1(k') |0⟩ + J^-^†_1(k') ^†_1(k) |0⟩ = J^-(|^1_k, ^1_k'⟩ + |^1_k',^1_k⟩), J^+{_2(k'),_2(k) }^†|0⟩= J^+(|^2_k',^2_k⟩ + |^2_k',^2_k⟩). To obtain the vacuum expectation values corresponding to the Hamiltonian and the charge operator we use the expression (<ref>), ⟨0| Ĥ|0| ⟩= ⟨0|0|∫⟩_-∞^∞ dk H_γ[ ( J^+λ_1 + c.c.) + ( J^-λ_2 + c.c.) ], ⟨0|Q̂|0|=⟩ - ⟨0|0|∫⟩_∞^∞ dk[ ( J^+λ_1 + c.c.) - (J^-λ_2 +c.c. )]. The integrals in (<ref>) and (<ref>) are divergent, but these divergences can be eliminated. To visualize this, we write the part on the right hand side of (<ref>) as, J^+λ_1 +c.c. =λ_ + λ_, J^-λ_2 + c.c. = λ_ - λ_. 
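The algebraic identities used repeatedly above — J^± = (1 ± j)/2 being annihilating idempotents, and e^{jχ} = e^{χ}J^+ + e^{-χ}J^- with j^2 = +1 — are easy to check numerically. The sketch below is our own illustration: it implements hypercomplex numbers a + j b with complex components a, b and verifies these properties together with the splitting Ω = J^+(Φ+Ψ) + J^-(Φ−Ψ).

import numpy as np

class Hyper:
    """Hypercomplex number a + j*b with complex components a, b and j**2 = +1."""
    def __init__(self, a, b=0):
        self.a, self.b = complex(a), complex(b)
    def __add__(self, o):
        return Hyper(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        # (a + j b)(c + j d) = (a c + b d) + j (a d + b c), using j**2 = +1
        return Hyper(self.a * o.a + self.b * o.b, self.a * o.b + self.b * o.a)
    def close_to(self, o, tol=1e-12):
        return abs(self.a - o.a) < tol and abs(self.b - o.b) < tol

jh = Hyper(0, 1)                      # the hyperbolic unit j
Jp = Hyper(0.5, 0.5)                  # J+ = (1 + j)/2
Jm = Hyper(0.5, -0.5)                 # J- = (1 - j)/2

assert (jh * jh).close_to(Hyper(1))                        # j^2 = +1
assert (Jp * Jp).close_to(Jp) and (Jm * Jm).close_to(Jm)   # idempotency
assert (Jp * Jm).close_to(Hyper(0))                        # annihilation J+ J- = 0

# Splitting Omega = Phi + j Psi = J+ (Phi + Psi) + J- (Phi - Psi), with Phi, Psi complex
Phi, Psi = 1.3 + 0.7j, -0.4 + 2.1j
assert Hyper(Phi, Psi).close_to(Jp * Hyper(Phi + Psi) + Jm * Hyper(Phi - Psi))

# e^{j chi} = cosh(chi) + j sinh(chi) = e^{chi} J+ + e^{-chi} J-
chi = 0.37
lhs = Hyper(np.cosh(chi), np.sinh(chi))
rhs = Jp * Hyper(np.exp(chi)) + Jm * Hyper(np.exp(-chi))
assert lhs.close_to(rhs)
print("idempotent-basis identities verified")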
Now we can write to (<ref>) and (<ref>) in terms of λ_, and separating the real part and imaginary, we have, ⟨0| Ĥ|0| ⟩= ⟨0|0|∫⟩_-∞^∞ dk[H_k'(λ_ + λ_ +λ_ - λ_) + 2ijω_kγ(λ_ + λ_ - λ_ + λ_) ], ⟨0|Q̂|0|=⟩ -2i ⟨0|0|∫⟩_∞^∞ dk( ω_k)[λ_ + λ_ - λ_ + λ_]; where H_k' is, H_k'=2ω_k^2 + 1/2k^2 + 1/2(m^2-γ^2/4). Therefore, by imposing the restrictions, λ_ + λ_ =0, λ_-λ_=0; it means, imposing the vanishing of the projections (<ref>), hence, the vanishing of the v.e.v'.s (<ref>) and (<ref>) are achieved. The first condition corresponds to the case of λ_1∼ J^- and the second corresponds to λ_2∼ J^+ in the definitions (<ref>), where ∼ means proportional to. § ENTANGLED STATES Considering the action of the evolution operator in the form (<ref>) on the vacuum state defines above, one obtains, |0(t)⟩ ≡ e^iℋ̂t|0⟩ =e^i∫_-∞^∞ dk|[H_γ^+(J^+{_1(k), _1(k)} + J^-{_2(k), _2(k)}) + H_γ^-( J^+{_2^†(k), _2^†(k)} + J^-{_1^†(k), _1^†(k)}) ] t|0⟩; if the original vacuum is normalized, the evolved state (<ref>) will also be a normalized state for any t: ⟨0(t)|0(t)|≡⟩⟨0| e^-i ℋ̂t· e^i ℋ̂t|0⟩= ⟨0|0|.⟩ As well known in the quantum mechanics the representations of the canonical commutation relations are all unitarily equivalent to each other (Stone-Von Neuman theorem); thus the evolved vacuum leaves the original space of states, in a finite volume case; since t →∞, this yields an asymptotic state orthogonal to the initial state |0⟩ <cit.>. On the other hand, in a QFT the number of degrees of freedom is infinite, thus, there are infinite unitarily non-equivalent representations of the canonical commutation relations, this allows describing different systems that can be in different phases <cit.>. Using this, a dissipative quantum model of the brain has been studied where there is a non-unit time evolution <cit.>. Considering the evolved vacuum (<ref>), we see that this state evolves always aligned with the original vacuum state, ⟨0|0(t)| ⟩= e^∫_-∞^∞ dk[H_γ^+ J^+λ_1 - c.c. ]t - ∫_-∞^∞ dk[H_γ^+ J^-λ_2 - c.c. ]t⟨0|0| ⟩ =⟨0|0|exp⟩{∫_-∞^∞ dk[ iH_k(λ_ + λ_ + λ_ - λ_) + j2ω_kγ(λ_ + λ_ + λ_ - λ_) ] t }≠ 0. The usual imaginary part and the hyperbolic imaginary part of the second line in Eq.(<ref>) have been separated. The expression in (<ref>) has the form of a bi-complex phase e^i α e^jβ, then the property (<ref>) can be used, obtaining ⟨0|0(t)| ⟩=⟨0|0|⟩(cos[α(t)] cosh[β(t)] + i sin[α(t)] cosh[β(t)] + j cos[α(t)] sinh[β(t)] + ij sin[α(t)] sinh[β(t)]), where, β(t) = 2γ t ∫_-∞^∞ dk ω_k(λ_ + λ_ + λ_ - λ_)=0, α(t) = t∫_-∞^∞ dk H_k(λ_ + λ_ + λ_ - λ_)=0, the vanishing is due to the condition (<ref>), which ensure that the temporal evolution does not leave the original Hilbert space. Furthermore, as we know, the proportionality conditions (λ_1∼ J^-), (λ_2∼ J^+), will lead to that the entanglement dynamics is constructed in both directions, namely, for the subsystem of interest (J^+) and, for the environment (J^-) perspective. With these constraints in mind, and using the expansion e^J^±χ=1 + J^±∑_n=1^∞χ^n/n!, we have that the evolved state (<ref>) can be rewritten as, |0(t)⟩ = exp{ t∫_∞^∞ dk H_γ(J^+{_2^†, _2^†} - J^-{_1^†, _1^†}) }|0⟩ = |0⟩ + J^+ t ∫_-∞^∞ d k H_γ(|^2_k, ^2_k⟩ + |^2_k,^2_k⟩) - J^- t ∫_-∞^∞ d k H_γ(|^1_k, ^1_k⟩ + |^1_k,^1_k⟩) + ...; the notation (<ref>) for excited states is used. This state is then entangled in moments, since it cannot be factorized into the product of single modes. In the next section we will see that entanglement is present even in the absence of dissipation. 
§.§ Entangled asymptotic states In this section, we return to the general Hamiltonian operator (<ref>), in which the geometry of the total system has been not specified; now, we consider different geometrical configurations and we shall construct the entangled states associated. We begin by considering the following evolved state, |0(t)⟩ ≡ e^iℋ̂t|0⟩ =exp{i∫ dk^+∞_-∞∫ dk'[ H_γ G ( J^+{_2^†(k'), _2^†(k)} + J^-{_1^†(k), _1^†(k')}) ] t}|0⟩, where we used Ĥ given in (<ref>), and the function G is defined in (<ref>). Also, we have used the conditions given in Eq.(<ref>), that is, the eigenvalues in Eq.(<ref>) have disappeared. The above expression can be rewritten as, e^iℋ̂t|0⟩ =exp{i∫ dk^+∞_-∞∫ dk' H_γ(k,k') t e^i(ω_k'-ω_k)t ^+(k-k') (k,k') }|0⟩, where, (k,k') ≡ J^+{_2^†(k'), _2^†(k)} + J^-{_1^†(k), _1^†(k')}, and (k-k') is given in Eq.(<ref>) and is depending also on the geometry of the total system. Since we have incorporated the imaginary unit i in the present work, this has a great implication, since it opens the way for us to begin to study the ergodicity of systems that undergo entanglement, a characteristic that contrasts with <cit.>, where it is not possible to explore this area. As it is well known, asymptotic states are closely related to the ergodic theory; this theory has various applications in different areas of mathematics <cit.><cit.> and physics <cit.><cit.>, where ergodicity criteria and the physical properties are established for specific configurations of a system. These configurations can have well-defined asymptotic states (states that reach thermal equilibrium) or cyclostationary asymptotic states (states that do not thermalize, but relax to periodic states) <cit.>. This section is the beginning of a long way to go to study the ergodicity in entangled quantum systems with a new approach, the hypercomplex formalism. An important aspect to consider in the expression (<ref>) is that, when integrating over the total system, the expression (k-k') will have different forms when considering different geometries. An example, if the total system has the geometry of the area of a disk, then the integration will give us a modified Bessel function, which must be considered when performing the k-integrations in (<ref>). §.§ An asymptotic entangled state for a finite total system For this subsection we consider the total one-dimensional system as represented in Fig.<ref> with finite ranges, that is, the range for the environment is (L_2, 0) and the range for the system of interest is (0, L_1), thus the total system is contained in the interval (L_1, L_2). For the state (<ref>), a distribution for a discrete time sequence can be identified, namely t e^i(ω_k' -ω_k)t, as the generating function of the delta function δ_t[i(ω_k'-ω_k)]; this delta sequence satisfies: lim_n →∞∫_-∞^∞δ_n(is) f(s) ds = -if(0). Hence, we have the following asymptotic state with dissipation, lim_t →∞ e^i ℋ̂t|0⟩ = exp{i lim_t →∞∫ dk^+∞_-∞∫ dk' H_γ(k,k') t e^i(ω_k'-ω_k)t ^+(k-k') (k,k') }|0⟩ =exp{5 (L_2 - L_1 ) ∫_-∞^∞ dk [ ω_k^3/k - i 2 ω^2_kγ/k] (k,k) }|0⟩, where the function H_γ(k,k') given in Eq.(<ref>) has been considered, and the function (k-k') reduces, in this case, to (L_2 -L_1), which is the total length of the one-dimensional system; in addition, the real part and the imaginary part have been separated. 
Furthemore, using the expansion e^J^±χ=1 + J^±∑_n=1^∞χ^n/n!, and defining, η_k≡ω_k^3/k - i 2 ω^2_kγ/k, the state (<ref>) can be described as, lim_t →∞ e^i ℋ̂t|0⟩ = |0⟩ + 5J^+ (L_2 - L_1) ∫_-∞^∞dk η_k ( |^2_k, ^2_k⟩ + |^2_k, ^2_k⟩) + ⋯ + 5J^- (L_2 - L_1) ∫_-∞^∞dk η_k ( |^1_k, ^1_k⟩ + |^1_k, ^1_k⟩) +⋯ where the notation given in Eq.(<ref>) for excited states has been used. In expression (<ref>) we can identify what the subsystem of interest and the environment observe, by considering the projections of (<ref>) on the basis (J^+, J^-). For the point of view of the subsystem of interest, we project J^+ on Eq.(<ref>), J^+lim_t →∞ e^i ℋ̂t|0⟩= J^+( |0⟩ + 5 (L_2 - L_1) ∫_-∞^∞dk η_k ( |^2_k, ^2_k⟩ + |^2_k, ^2_k⟩) + ⋯); therefore, the subsystem of interest observes creation of bosons of type and type , corresponding to the environment. Now, if we project Eq.(<ref>) on the basis J^-, we will have the point of view of the environment, where it observes creation of bosons corresponding to the subsystem of interest, which are represented with the superscript 1. Furthermore, we also note that this state is then entangled in the momenta, since it cannot be factored into the product of individual modes. Now, if the dissipative parameter γ disappears, the following asymptotic state is obtained, lim_t →∞ e^i ℋ̂t|0⟩=exp{5 (L_2-L_1) ∫_-∞^∞ dk (k^2+m^2)^3/2/k(k,k) }|0⟩; where (k, k') has been defined in Eq.(<ref>). Similarly to Eq.(<ref>), an expansion can be done for the state (<ref>), obtaining, lim_t →∞ e^i ℋ̂t|0⟩ = |0⟩ + 5J^+ (L_2 - L_1) ∫_-∞^∞dk (k^2+m^2)^3/2/k( |^2_k, ^2_k⟩ + |^2_k, ^2_k⟩) + ⋯ + 5J^- (L_2 - L_1) ∫_-∞^∞dk (k^2+m^2)^3/2/k( |^1_k, ^1_k⟩ + |^1_k, ^1_k⟩) +⋯; similarly to (<ref>), we can project the state (<ref>) on the basis (J^+, J^-) to obtain the point of view of the subsystem of interest and the environment, respectively. Furthermore, we can observe that the entanglement is present even in the absence of dissipation, since the large number of degrees of freedom present in free quantum field theories induce entanglement with other degrees of freedom along the boundary <cit.>. §.§ An asymptotic entangled state for an infinite total system Now we consider the geometry in which the one-dimensional subsystems are semi-infinite; this spatial configuration is represented in the Fig.<ref>. For this case one can show that the function ^+(k-k') reduces to a Dirac delta; thus the state in (<ref>) takes the form, lim_t →∞ e^i ℋ̂t|0⟩ = exp{5 πlim_t →∞ t ∫_-∞^∞ dk [γω_k/5 + i ω_k^2] (k,k) }|0⟩ =exp{lim_t →∞γ t ∫_-∞^∞ dk ω_k (k,k) }[cos(5 πlim_t →∞ t ∫_-∞^∞ dk ω_k^2 (k,k) ) + i sin(...)], where the sine function has the same argument as the cosine. There is no an asymptotic state defined, since in general the expression (<ref>) diverges. However, there are works where these asymptotic states are analyzed in more detail <cit.>, where the authors consider that the product γ t to be finite, giving a holographic interpretation between two AdS boundaries; from this, they make a comparision between their results that were applied to the entropy of the system with, the results of finite temperature, interpreting γ as temperature, which suggests ergodicity in the limit thermodynamic; obtaining as a result that, the holographic dual on this two asymptotically AdS boundaries of his theory, corresponds to the BTZ black hole and where the fields are definied. On the other hand, it can be seen that when t=0 in the expression (<ref>), the usual vacuum is obtained. 
Furthermore, if the dissipative parameter γ vanishes in Eq.(<ref>), an oscillating state is obtained, that is, a cyclostationary state, lim_t →∞ e^i ℋ̂t|0⟩= exp{5 i πlim_t →∞ t ∫_-∞^∞ dk (k^2 + m^2) (k,k) }|0⟩, and analogously to the expression (<ref>), we can project the state (<ref>) on the basis (J^+, J^-) to have the point of view of the subsystem of interest or the environment; for the subsystem of interest, one obtains, J^+lim_t →∞ e^i ℋ̂t|0⟩= J^+exp[5 i πlim_t →∞ t ∫_-∞^∞ dk (k^2 + m^2) (|^2_k',^2_k⟩ + |^2_k',^2_k⟩) ] |0⟩. We can notice that, when we have the point of view of the subsystem of interest, we see bosons corresponding to the environment (subscript 2) and when we observe from the perspective of the environment, we see bosons that correspond to the subsystem of interest (subscript 1). In <cit.> the corresponding states with γ=0 and t=0 are equivalent; however, we see that in this work they are totally different; a more detailed explanation can be seen in <cit.>. The oscillation occurs when the systems are semi-infinite and the fields are free. As the authors comment in <cit.>, the absence of thermalization is unusual when dissipation is absent; explaining it with an example, with the Langevin equation for a free Brownian particle of mass m, where by using the condition of zero integral friction, that is, an asymptotic condition on the dissipation kernel in the Laplace domain, they can show the absence of thermalization when γ=0. In addition, this is accompanied by another phenomenon, the superdiffusion. § THE USUAL WEIGHTED MEASURE Dealing with quantum field theories it is usual to see that the expansion used for the momentum contains the weight 1/√(ω_k) as measure of integration, a convenient choice for the normalization of coefficients of type a_p. This choice imposes the well-known equal-time commutation relation [ϕ(t, x), Π(t, y)]= i δ^3(x- y). In the development of this work, we did not impose this weight on our field; however, if we follow the usual path found in the literature, we will not obtain significant changes in our field commutators (<ref>), (<ref>) and (<ref>). In this section we will discuss the results obtained by adding this weight. Considering the weight 1/√(ω_k) in the measure for the field (<ref>) and its moment (<ref>), we obtain, [Ω̂(x,t), Ω̂^†(x',t)] = 2 [J^+(ρ_1 - ρ_4) + J^-(ρ_1 - ρ_4)] K_0( |m^2 - γ^2/4| (x' - x) ), [Π̂_Ω(x,t), Π̂^†(x',t)] = -2 [J^+(ρ_1 - ρ_4) + J^-(ρ_1 - ρ_4)] √(m^2 - γ^2/4)/(x'-x) K_1(|m^2 - γ^2/4| (x'-x)), [Ω̂(x,t), Π̂_Ω(x',t)] = i δ_n(x' - x)[J^+(ρ_1 - ρ_4) + J^-(ρ_1 - ρ_4)]. Where K_0(x) and K_1(x) are modified Bessel functions of the second kind. We can notice that, by adding the weight (1/√(ω_k)), the value of the field commutator [Ω̂, Ω̂^†](Eq.(<ref>)) reduces to the field commutator [Ω̂, Π̂_Ω] above, that is [Ω̂,Ω̂^†] → [Ω̂, Π̂_Ω]. Similarly we have the mapping [Π̂_Ω,Π̂^†_Ω] → [Ω̂, Π̂_Ω] (Eq.(<ref>)), etc. The integration process of (<ref>) was the same as that performed to obtain (<ref>), considering (1+1) background space-time. Thus, by using different measures, the field commutators are interchanged to each other; in the case of the field commutator [Ω̂, Ω̂^†], we get a more pronounced “peak" for the second choice for the measure. However, this weight in the integration measure is important, since the exponent of this weight can lead to different theories, even for closed systems <cit.>. 
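The spatial dependence found for [Ω̂, Ω̂^†] with the 1/√ω_k measure can be checked numerically against the standard representation K_0(μ Δx) = ∫_0^∞ cos(kΔx)/√(k^2 + μ^2) dk, where we write μ = √(m^2 − γ^2/4) for the modified mass (assumed positive). This sketch is our own consistency check of the k-integral itself; the closed form quoted above writes the Bessel argument in terms of |m^2 − γ^2/4|, so the overall normalization and argument convention there are not being tested here.

import numpy as np
from scipy.integrate import quad
from scipy.special import k0

m, gamma = 1.0, 0.6
mu = np.sqrt(m**2 - gamma**2 / 4.0)           # modified mass, taken > 0

for dx in (0.5, 1.0, 2.0, 4.0):
    # Oscillatory Fourier-cosine integral over k in (0, inf) of 1/omega_k = 1/sqrt(k^2 + mu^2)
    val, err = quad(lambda k: 1.0 / np.sqrt(k**2 + mu**2),
                    0.0, np.inf, weight='cos', wvar=dx)
    print(f"dx = {dx:3.1f}   integral = {val:.6f}   K0(mu*dx) = {k0(mu * dx):.6f}")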
By considering the new measure, it is enough to add the corresponding weight in quantities such as the Hamiltonian operator Eq.(<ref>), the charge operator Eq.(<ref>), etc; (moreover, the adding of the new weight does not help to eliminate the divergences in the observables Eq.(<ref>) and Eq.(<ref>)) and in the entangled states. § CONCLUSIONS In this work we have made an alternative formulation to study open systems whose construction is based on a hypercomplex ring, which contains two imaginary units, namely, the standard unit i and the hyperbolic unit j; generalizing the formalism used in <cit.>. The algebraic structure of the ring in the formulation in hand has led us to a non-canonical theory which, in turn, is committed to the emergence of a new SO(1,1) symmetry. The consequences that this non-canonical theory throws at us are: the quantization scheme at hand gives us the opportunity to cure the pathologies that come from the standard quantization, since we have two complex units, the system does not leave the original Hilbert space, independly of the volume of the system. This leads to having field commutators with different characteristics from standard field commutators, since our field commutators have an explicit dependence on the dissipative parameter and the background dimension. Another implication of the formulation at hand is the way in which the grand partition function is constructed due to the emergence of the new continuous symmetry; moreover, the term corresponding to the chemical potential includes an adjustment due to dissipation (see <cit.>); with this, we would have new descriptions of the thermal field theory using, the formalism of a hypercomplex ring; works along these research lines are in progress. On the other hand, we have shown that the vacuum state temporally evolves as an entangled state, independly of whether dissipation is on or off. This result is motivation for future works, some of which are already under development. One of these works consists of using these entangled states to calculate the time-dependent entanglement entropy, with which we can have a tool to measure the entanglement in dissipative systems. We also review the asymptotic entangled states for finite and infinite systems and, due to the structure of the Hamiltonian operator that contains the geometric information of the system, the ergodic theory naturally arises. Here we have an element that is important when studying the ergodicity of a system, the geometry. With this element we begin to introduce ourselves to the study of the ergodic behavior of these asymptotic entangled states; however, it is not enough to be able to say if a system reach thermal equilibrium or remains in a cyclo-steady state. This result is motivation to continue investigating how ergodicity is involved in our formulation; a work that we are currently developing. 1.1.1 D. Browne, S. Bose, F. Mintert, M.S. Kim. From quantum optics to quantum technologies. Progress in Quantum Electronics (2017). <https://doi.org/10.1016/j.pquantelec.2017.06.002> 1.1.2 S. Hollands and K. Sanders. Entanglement Measures and Their Properties in Quantum Field Theory. SpringerBriefs in Mathematical Physics. Springer Cham (2018). 33 A. Gadelha, M. Botta-Cantcheff, D. Marchioro and D. Nedel. Entanglement from dissipation and holographic interpretation. Eur. Phys. J. C, 78, 105 (2018). 1.1 J. Chen, M. Rossi, D. Mason and A. Schliesser, Entanglement of propagating optical modes via a mechanical interface. 
Nature Communications 11 (1) (2020). 1.2 Z.B. Tan, A. Laitinen, N.S. Kirsanov, A. Galda, V.M. Vinokur, M. Haque, A. Savin, D.S. Golubev, G.B. Lesovik and P.J. Hakonen, Thermoelectric current in a graphene Cooper pair splitter, Nature Communications 12 (1) (2021). 1.3 J. Li and S. Kais, Entanglement classifier in chemical reactions. Science Advances (2019), <https://www.science.org/doi/epdf/10.1126/sciadv.aax5283> 1.9 M. B. Plenio and S. F. Huelga, Phys. Rev. Lett. 88, 197901 (2002); B. Kraus and J. I. Cirac, Phys. Rev. Lett. 92, 013602 (2004); S. Diehl et al., Nature Phys. 4, 878 (2008); F. Verstraete, M. M. Wolf, and J. I. Cirac, Nature Phys. 5, 633 (2009); J. T. Barreiro, et al., Nature 470, 486 (2011). A. S. Parkins, E. Solano, and J. I. Cirac, Phys. Rev. Lett. 96, 053602 (2006). 1.10 D.A. Lidar and K.B. Whaley, Decoherence-Free Subspaces and Subsystems. In: Benatti, F., Floreanini, R. (eds) Irreversible Quantum Dynamics. Lecture Notes in Physics, 622. Springer, Berlin, Heidelberg (2003). <https://doi.org/10.1007/3-540-44874-8_5> 1.14 D. Giulini, E. Joos, C. Kiefer, J. Kupsch, I.-O. Stamatescu, and H.D. Zeh, editors. Decoherence and the Appearance of a Classical World in Quantum Theory. Springer-Verlag, Berlin, (1996). 1.8 S.A. Sabbadini and G. Vitiello, Entanglement and Phase-Mediated Correlations in Quantum Field Theory. Application to Brain-Mind States. Appl. Sci, 9, 3203 (2019). <https://doi.org/10.3390/app9153203> 1.8.9 A.V. Plyukhin, Nonergodic Brownian oscillator. Phys. Rev. E 105, 014121, (2022). 25 J. Berra-Montiel, R. Cartas-Fuentevilla and O. Meza-Aldama, Hyperbolic ring based formulation for thermo field dynamics, quantum dissipation, entanglement, and holography. Eur. Phys. J. C, 80, 603 (2020). <https://doi.org/10.1140/epjc/s10052-020-8161-x> 39 H. Dekker, Classical and quantum mechanics of the damped harmonic oscillator, Phys. Reports, 80 1 (1981). 39.1 K. Tranchenko. Quantum dissipation in a scalar field theory with gapped momentum states. Sci Rep 9, 6766 (2019). <https://doi.org/10.1038/s41598-019-43273-9> 24 S. Ulrych, Relativistic quantum physics with hyperbolic numbers. Phys. Lett. B, 625, pp. 313-323 (2005). 1.17 F.C. Khanna, A.P.C. Malbouisson, J.M.C. Malbouisson and A.E. Santana, Thermal Quantum field theory: Algebraic aspects and applications. World Scientific (2009). 18 R. Cartas Fuentevilla and A. Juárez Domínguez, Quantum field theory of a hypercomplex scalar field on a commutative ring (2017). <https://doi.org/10.48550/arXiv.1705.07981> 24.1 S. Ryu and T. Takayanagi, Aspects of holographic entanglement entropy. JHEP08(2006)045. <https://iopscience.iop.org/article/10.1088/1126-6708/2006/08/045/pdf> 42 G. Vitiello, E. Celeghini and M.Rasetti, Quantum Dissipation, Annals of Physics 215, 156-170 (1992). 43 P. Jizba, B. Massimo and G. Vitiello, Quantum Field Theory and its Macroscopic Manifestations: Boson Condensation, Ordered Patterns and Topological Defects. Imperial College Press (2011). 44 S.A. Sabbadini and G. Vitiello, Entanglement and Phase-Mediated Correla-tions in Quantum Field Theory: Application to Brain-Mind States, Appl. Sci. 9(15), 3203, (2019). <https://doi.org/10.3390/app9153203> 50 I.P. Cornfeld, S.V. Fomin, and Ya.G. Sinai. Ergodic Theory. Springer-Verlag, (1982). 51 S. Katok and J.P. Thouvenot. Spectral properties and combinatorial constructions in ergodic theory. In Handobook of Dynamical Systems, 1B. Elsevier (2006). 52 D.H.J. O’Dell. Quantum Catastrophes and Ergodicity in the Dynamics of Bosonic Josephson Junctions. Phys. Rev. Lett. 
109, 150406 (2012). 53 J.M. Mourão, T. Thiemann, and J.M. Velhinho. Physical properties of quantum field theory measures. J. Math. Phys. 40, 2337 (1999). 45 A.V. Plyukhin, Nonergodic Brownian oscillator, Phys. Rev. E 105, Iss.1. 014121, (2022). <https://doi.org/10.1103/PhysRevE.105.014121> 47 H. Casini and M. Huerta, Entanglement entropy in free quantum field theory, J. Phys. A: Math. Theor. 42 504007 (2009). <https://iopscience.iop.org/article/10.1088/1751-8113/42/50/504007> 49 H. Narnhofer, W. Thirring, and H. Wiklicky. Transitivity and Ergodicity of Quantum Systems. Journal of Statistical Physics, 52, Nos. 3/4, (1988). 50 R. Cartas-Fuentevilla, A. Mendez-Ugalde. Lorentz-breaking weighted measures as quantum field theory regulators. Phys. Lett. B 830 137146 (2022). <https://doi.org/10.1016/j.physletb.2022.137146>
http://arxiv.org/abs/2306.09531v1
20230615221956
Carrier-resolved real-field theory of multi-octave frequency combs
[ "Danila N. Puzyrev", "Dmitry V. Skryabin" ]
physics.optics
[ "physics.optics" ]
http://arxiv.org/abs/2306.04920v1
20230608034437
Flow-based Network Intrusion Detection Based on BERT Masked Language Model
[ "Loc Gia Nguyen", "Kohei Watabe" ]
cs.CR
[ "cs.CR" ]
Nagaoka University of Technology Nagaoka, Niigata Japan [email protected] Nagaoka University of Technology Nagaoka, Niigata Japan [email protected] A Network Intrusion Detection System (NIDS) is an important tool that identifies potential threats to a network. Recently, different flow-based NIDS designs utilizing Machine Learning (ML) algorithms have been proposed as potential solutions to detect intrusions efficiently. However, conventional ML-based classifiers have not seen widespread adoption in the real world due to their poor domain adaptation capability. In this research, our goal is to explore the possibility of improving the domain adaptation capability of NIDS. Our proposal employs Natural Language Processing (NLP) techniques and the Bidirectional Encoder Representations from Transformers (BERT) framework. The proposed method achieved positive results when tested on data from different domains. Flow-based Network Intrusion Detection Based on BERT Masked Language Model Kohei Watabe ========================================================================== § INTRODUCTION In practical applications of a NIDS, it is common for the data distribution to change between the training data and the data encountered after deployment. Conventional ML algorithms often adapt poorly to such changes, which limits their usefulness in real-world scenarios <cit.>. To address this, the Energy-based Flow Classifier (EFC) <cit.> was proposed as a solution. Despite its good adaptability, EFC produces a high false-positive rate in domains where the feature distribution of malicious flows overlaps with that of benign flows. We theorize that the reason for the limitations of conventional ML algorithms and EFC is the use of singular flows as input data, as the classifier can only model the distribution of features within a flow. This limitation can be overcome with the use of sequences of flows, allowing the classifier to further model the distribution of a flow in relation to other flows. To utilize the context information from a sequence of flows, we use the BERT framework, which is able to process inputs in relation to all the other inputs in a sequence. BERT <cit.> is a transformer-based machine learning technique for NLP developed by Google. The BERT framework comprises two steps: pre-training and fine-tuning. In pre-training, the BERT model is trained on unlabeled data. For fine-tuning, the model is first initialized with the pre-trained parameters and then trained on labeled data from the downstream tasks. BERT is pre-trained with two unsupervised tasks, Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). In MLM, some of the words in a sentence are replaced with a different token, and the objective is to predict the original value of the masked words based on the other, unmasked words in the sentence. In NSP, BERT takes sentence pairs as input, and the objective is to predict whether the second sentence in the pair is the next sentence in the document. For fine-tuning, task-specific inputs and outputs are added to a pre-trained BERT model. Our research employs the CIDDS-001 <cit.> and CIDDS-002 <cit.> data sets, which contain flow samples from a small business environment emulated using OpenStack. CIDDS-001 also contains real traffic flow samples captured from an external server directly deployed on the internet. § PROPOSAL We first organize network traffic flows into structures similar to a language, treating a flow as a word and a sequence of flows as a sentence. 
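As a rough sketch of this "flow as a word" idea, the code below discretizes the six per-flow features into integer ids that can be fed to an embedding layer. The quantile/category binning and the toy column values are our own illustrative stand-ins; the paper itself follows the discretization procedure of the EFC work, which is not reproduced here.

import numpy as np
import pandas as pd

FEATURES = ["Src Pt", "Dst Pt", "Proto+Flags", "Packets", "Bytes", "Duration"]

def discretize_flows(df, n_bins=10):
    """Map each flow to six integer ids (hypothetical quantile/category binning)."""
    ids = pd.DataFrame(index=df.index)
    for col in FEATURES:
        if df[col].dtype == object:                       # categorical, e.g. Proto+Flags
            ids[col] = pd.factorize(df[col])[0]
        else:                                             # numeric: quantile bins
            edges = np.unique(np.quantile(df[col], np.linspace(0, 1, n_bins + 1)[1:-1]))
            ids[col] = np.searchsorted(edges, df[col].to_numpy(), side="right")
    return ids.to_numpy()                                 # shape (n_flows, 6)

# Fabricated stand-in for a CIDDS flow table (values are for illustration only)
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "Src Pt": rng.integers(1024, 65535, 500),
    "Dst Pt": rng.integers(1, 1024, 500),
    "Proto+Flags": rng.choice(["TCP.S.", "TCP.SPA", "UDP..."], 500),
    "Packets": rng.integers(1, 1000, 500),
    "Bytes": rng.integers(40, 10**6, 500),
    "Duration": rng.random(500) * 30.0,
})
flow_ids = discretize_flows(toy)                          # the sequence of flow "words" fed to the model
print(flow_ids.shape, flow_ids[:2])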
In this study, the BERT model is pre-trained with only the MLM task. For fine-tuning, a linear layer with softmax output is used. It is important to preserve the distribution of flows within a sequence; therefore, during training, the data set is not shuffled. A training sample is generated by selecting a segment of flows from the data set at random. The overall architecture of the system is illustrated in Figure <ref>. Six features from each flow (Src Pt, Dst Pt, Proto+Flags, Packets, Bytes, Duration) are used as input data, these features are discretized as described in EFC's original paper <cit.>. The discrete value of each features are encoded as numbers (flow_i). BERT decodes each number into a 128-dimension vector, concatenates them to form a 768-dimension vector (e_flow_i), and processes it to produce a different 768-dimension vector (h_flow_i). The output of BERT is then passed through a Multilayer Perceptron classifier (a linear layer with softmax output), which reduces the dimension from 768 to 2. This 2-dimension vector represents the predicted probability of the flow being benign and malicious (e.g. benign 0.99, malicious 0.01). The predicted class of the flow is the class with the higher probability (e.g. benign). We used data from three different domains: CIDDS-001 OpenStack, CIDDS-001 External Server, and CIDDS-002 to evaluate the domain adaptation capability of the proposed method. Average composition of the data sets used in the experiment are shown in Table <ref>. Training was performed on one set of CIDDS-001 large. While testing was performed on CIDDS-001 internal, CIDDS-001 external, and CIDDS-002, each containing ten sets randomly selected from the full data sets. This testing scheme mimics the one used by Camila et al. <cit.> to make the results of the proposed method more comparable to those of EFC. Flows labeled normal are considered benign, while those labeled otherwise are considered malicious. For CIDDS-001 external, flows labeled unknown and suspicious are considered benign and malicious respectively. We assess the performance of our method in comparison to EFC and ML classifiers including Decision Tree (DT), K-Nearest Neighbors (KNN), Multilayer Perceptron (MLP), Naive Bayes (NB), and Support Vector Machine (SVM). Each classifier's performance is measured using Accuracy and F1 score <cit.>. § RESULTS AND DISCUSSION Table <ref> shows the average performance and standard error for each classifier. All classifiers achieved higher Accuracy and F1 Score on CIDDS-001 internal test sets (same domain as training data) compared to the other test sets (different domains from training data). Both the proposed method and EFC maintained performance across the two different domains, with the proposed method outperforming EFC. We also experimented with training the classifiers on smaller but balanced data sets (containing 80000 flows with the same proportion of labels as in CIDDS-001 internal). However, performance was worse for all classifiers when compared to those trained on CIDDS-001 large. Notably, the performance of the proposed method was significantly affected for all domains. By creating balanced data sets, the distribution of flows within a sequence was also altered. This suggests that the distribution of flows within a sequence is learned by the model of the proposed method. 
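A minimal PyTorch sketch of the classification architecture described above: six per-feature embeddings of 128 dimensions are concatenated into a 768-dimensional flow vector, a stack of generic transformer encoder layers stands in for BERT, and a linear layer with softmax yields per-flow benign/malicious probabilities. The vocabulary sizes, layer count, and head count are placeholders, and the MLM pre-training stage is omitted; this illustrates the data flow rather than the authors' implementation.

import torch
import torch.nn as nn

class FlowSequenceClassifier(nn.Module):
    """Per-feature embeddings -> 768-d flow vectors -> transformer encoder -> 2-way softmax head."""
    def __init__(self, vocab_sizes, d_feat=128, n_layers=4, n_heads=8, n_classes=2):
        super().__init__()
        self.embeddings = nn.ModuleList([nn.Embedding(v, d_feat) for v in vocab_sizes])
        d_model = d_feat * len(vocab_sizes)               # 6 * 128 = 768
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, flow_ids):
        # flow_ids: (batch, seq_len, n_features) integer tensor of discretized flow features
        e = torch.cat([emb(flow_ids[..., i]) for i, emb in enumerate(self.embeddings)], dim=-1)
        h = self.encoder(e)                               # flow vectors contextualised by the sequence
        return torch.softmax(self.head(h), dim=-1)        # per-flow (benign, malicious) probabilities

# Toy forward pass: 2 sequences of 32 flows, 6 features each (vocabulary sizes are placeholders)
model = FlowSequenceClassifier(vocab_sizes=[12, 12, 8, 12, 12, 12])
flow_ids = torch.randint(0, 8, (2, 32, 6))
print(model(flow_ids).shape)                              # torch.Size([2, 32, 2])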
§ CONCLUSION AND FUTURE WORK In this study, we identified the use of singular flows as input as a possible explanation for the poor domain adaptation capability of conventional ML-based classifiers, and proposed the use of sequences of flows to address this limitation. We utilized a BERT model to represent flow sequences and an MLP classifier to discriminate between benign and malicious flows. Early experimental results showed that the proposed method is capable of achieving good and consistent results across different domains. However, more extensive testing on recent data sets is needed to further evaluate its domain adaptation capability. In future work, we plan to investigate the impact of the flow sequence sampling method on the results, such as using only benign flows or grouping flows into sequences that originate from the same hosts. Making the system less reliant on labeled data is another of our research goals. We aim to achieve this by modeling sequences of benign flows and then looking for anomalous representations produced by BERT, indicating malicious flows. This work was partly supported by JSPS KAKENHI Grant Number JP20H04172.
http://arxiv.org/abs/2306.01522v1
20230602131548
Auditory Representation Effective for Estimating Vocal Tract Information
[ "Toshio Irino", "Shintaro Doan" ]
eess.AS
[ "eess.AS", "cs.SD" ]
Auditory Representation Effective for Estimating Vocal Tract Information Toshio Irino1 and Shintaro Doan1 1 Faculty of Systems Engineering, Wakayama University, Japan E-mail: [email protected], [email protected] Received: date 2021.09 / Accepted: 2022.03 =============================================================================================================================================================================== firststyle fancy We can estimate the size of the speaker solely based on their speech sounds. We had proposed an auditory computational theory of the stabilised wavelet-Mellin transform (SWMT), which segregates information about the size and shape of vocal tract and glottal vibration, to explain this observation. It was demonstrated that the auditory representation or excitation pattern (EP) associated with a weighting function based on SWMT, referred to as “SSI weight”, made it possible to explain the psychometric functions of size perception. In this study, we investigated whether EP with SSI weight can precisely estimate vocal tract lengths (VTLs) which were measured using male and female MRI data. It was found that the use of SSI weight significantly improved the VTL estimation. Moreover, the estimation errors were significantly smaller in the EP with the SSI weight than those in the commonly used spectra derived from the Fourier transform, Mel filterbank, and WORLD vocoder. It was also shown that the SSI weight can be easily introduced into these spectra to improve the performance. Index Terms: Speaker size perception, Auditory model, Vocal tract length (VTL), Glottal vibration, Size-shape image (SSI) § INTRODUCTION We can recognize phonemes pronounced by children, women, and men despite the large differences in their heights. This indicates that our auditory system can extract and identify phonemes in which variations in the pattern of formant frequencies distinguish vowel types and the fundamental frequency determines the pitch. Speaker information can also be extracted simultaneously. Speech sounds contain information about vocal tract size, which is closely correlated with speaker size <cit.>. Many psychoacoustic studies have been conducted on size discrimination and phoneme perception from voiced and unvoiced speech sounds (e.g., <cit.>; see review in <cit.>). Irino and Patterson <cit.> proposed the stabilised wavelet-Mellin transform (SWMT) as a computational theory to explain how the auditory system estimates size and shape of vocal tract separately from glottal pulse information. As explained in Section <ref>, size estimation from voiced sounds is more difficult than from unvoiced sounds. This is because voiced speech sounds contain information about vocal tract response (filter characteristics) and glottal vibration (source characteristics) as shown in the source filter theory of speech <cit.>. Therefore, it is necessary to effectively separate this information. An auditory model based on SWMT was proposed to explain the experimental results on size discrimination of both unvoiced and voiced speech sounds<cit.>. It was demonstrated that introduction of a simple weighting function, referred to as “size-shape image weight (SSI weight)” (see Section <ref>), enabled to explain the results successfully. However, the domain of previous studies was restricted to the explanation of psychometric functions derived using synthetic speech sounds. 
For practical applications in signal processing, it is necessary to demonstrate that the auditory model can also extract vocal tract information from natural speech sounds. In the current study, we focus on the estimation of vocal tract lengths (VTLs) of various speakers as the first step. This is because the VTLs of sustained vowels can be precisely measured from MRI data <cit.> and, therefore, provide “ground truth” usable for evaluation. First, we describe the problem of VTL estimation using the auditory spectrum and our approach. Then, we describe the VTL estimation method and present the evaluation results. Moreover, we compared the auditory spectrum with the commonly used Fourier spectrum, Mel filterbank spectrum <cit.>, and WORLD vocoder spectrum <cit.>. Furthermore, we investigated whether the introduction of the SSI weight into these spectra improved the estimation accuracy. If it is the case, this simple function, which does not require any training data as in DNN, could improve the performance of many speech processing tasks. § AUDITORY PROCESS TO ESTIMATE SIZE §.§ Physics of vocal tract and estimation The main difference between the vowels of males and females lies in the differences in the VTL and fundamental frequency F_o. When VTL is shortened by a factor of α, the formant frequencies F_1 and F_2 move upward to α F_1 and α F_2. On the logarithmic frequency axis, log F_1 and log F_2 move up to {log F_1 + logα} and {log F_2 + logα}, respectively. This implies that the logarithmic scale factor, logα, is a constant independent of the formant frequencies. Therefore, the VTL ratio can be estimated using the cross-correlation of the log spectra corresponding to the original and shorted vocal tracts. Auditory spectra derived by gammatone or gammachirp filterbanks <cit.> are suitable for this purpose because the frequency axis, ERB_N number, is approximately a log frequency axis above 500 Hz <cit.>. This appears to be an easy task if the spectrum is calculated solely from the impulse response of the vocal tract. However, voiced sounds used in speech communication are derived from the convolution of the impulse response of the vocal tract and the waveform of glottal vibration <cit.>. This makes the estimation more difficult than expected. §.§ Problem when using auditory spectrum To be more specific, an actual female voice `a' (VTL = 15.0 cm, F_o = 182 Hz) and a male voice `a' (VTL = 18.5 cm, F_o = 101 Hz), drawn from a database described in Section <ref>, were analyzed with a dynamic compressive gammachirp auditory filterbank (GCFB) <cit.>. The output level was averaged over a short period. This is commonly called an excitation pattern (EP) <cit.>. The solid blue lines in Fig. <ref> show the EPs for the female voice (a) and male voice (b). The horizontal axis is the GCFB channel number, which is equally spaced from 100 to 8000 Hz on the ERB_N number axis <cit.>; this axis is effectively a logarithmic frequency axis above 500 Hz. In the female voice shown in Fig.  <ref> (a), prominent peaks are observed in channels 24, 32, and 41. Among these, the peak at channel 42 is associated with a formant, and the spectral shape at higher frequencies is important for VTL estimation. In contrast, the peaks at channels 24 and 32 are associated with the harmonics of F_o (182 Hz), which are resolved harmonics <cit.>. These additional peaks reduce the accuracy of the VTL estimation. In contrast, for the male voice in Fig.  
<ref>(b), the peaks corresponding to the resolved harmonics are relatively small, and four formant peaks are clearly observed. The cross-correlation between the two EPs is shown by the blue solid line in Fig.  <ref>(c). The peak (blue circle) is obtained at a shift of zero, which implies that the VTLs are the same. This estimation is obviously incorrect because the measured VTL ratio is 1.23 (=18.5/15.0). The problem lies in the fact that the resolved harmonics interfere with estimation. A similar and worse problem occured in the Mel spectrum. §.§ Approach from auditory computational theory We approached this problem using a computational theory in which the auditory system can segregate and extract information about the vocal tract and glottal vibration from speech sounds. Specifically, Irino and Patterson <cit.> proposed the “stabilized wavelet-Mellin transform” (SWMT) which has been supported by several psychological experiments on size perception (e.g., <cit.>). The SWMT process is briefly described here (see <cit.> for details). In SWMT, the EP derived from GCFB is converted into a two-dimensional “Auditory Image (AI)” by “Strobe Temporal Integration (STI),” which is synchronized with the glottal pulse. The representation of vocal tract response is repeated at the glottal pulse rate (i.e., F_o). One cycle of the response is extracted from the AI to obtain a two-dimensional “Auditory Figure (AF)”, which maximally represents information about a single pulse response of the vocal tract. This representation does not exactly correspond to the impulse response but is much closer than the usual spectrum representation obtained from repeated pulse excitation. Then, the AF is transformed into a “size-shape image (SSI),” as shown in Fig.  <ref>(a). Here, the vertical axis represents the peak frequency of the auditory filter, f_p, and the horizontal axis is h, which is the product of the time interval and peak frequency. SSI is a representation of the single-pulse response and eliminates the response of the adjacent pulse, which is located below the diagonal upright curve. Although it is a good method in principle, developing an STI algorithm to obtain stable images for various speech sounds isnot easy. §.§ F_o adaptive weighting function, SSI-weight To solve this stability problem, we designed a weight function that corresponds to the area of the active part of the SSI. Figure <ref> (b) shows a function named “SSI-weight” <cit.>. This one-dimensional function is directly applicable to EP via simple multiplication. the SSI weight (w_SSI) is defined as w_SSI(f_p,F_o) = min(f_p/h_max· F_o,1), where h_max is the upper limit of h on the horizontal axis of the SSI and determines the area of information extracted from the SSI, as shown in Fig.  <ref>(a). f_p is the peak frequency of the analysis filer. F_o is the fundamental frequency at the time of the analysis. When F_o is not determined, as in unvoiced sounds, setting F_o ≃ 0 will result in a w_SSI of unity across the peak frequency. the SSI weight has been used to explain the results of the human size perception experiments <cit.> and to predict the speech intelligibility of degraded sounds <cit.> and enhanced sounds <cit.>. Most importantly, the SSI weight is applicable to any spectral representation because it is a simple weighting function on the frequency axis. §.§ Applying the SSI weight to EP In Fig. 
<ref>, we apply the SSI weight (black dotted line) to the original spectrum (blue solid line) to derive the weighted spectrum (red dashed line). In particular, in Fig.  <ref>(a), the first peak of the resolved harmonics of the female voice is effectively suppressed. The red dashed line in Fig. <ref>(c) shows the cross-correlation function between the SSI-weighted EPs. The peak (red asterisk) is at a shift of 6, which is approximately 1.2 times the frequency and coincides with the VTL ratio (=1.23) between the male and female. Thus, introducing the SSI weight is expected to reduce the difficulty of VTL estimation. § VTL ESTIMATION METHOD AND EVALUATION §.§ Measured VTL data The effectiveness of the SSI weight was evaluated using the “ATR vowel speech MRI data” <cit.>, which contains five vowel sounds, v {v| `a',`i',`u',`e',`o'}, accompanied by accurate VTLs measured from MRI images of sustained vocalizations. As it contained only 13 male prepared data points, we additionally derived six female data points <cit.>. Therefore, we used data from 19 speakers. §.§ Spectral analysis by GCFB We used the GCFB <cit.> for VTL estimation, as described in Section <ref>. It was set to process 100 channels with a sampling frequency of 48 kHz and a filter center frequency range of 100 – 8000 Hz. Upon audio input, the EP is output with a frame period of 0.5 ms, and the EP spectrogram is calculated. EPs in the range of ±25 ms from the center of the speech data were averaged to obtain the spectral representation “Ep” of the vowel described above. The spectral representation “Ep_SSI” was also calculated by applying the SSI weight in Eq. <ref> with the F_o estimated by WORLD <cit.>. §.§ VTL estimation algorithm For each of the five vowels, the cross-correlation functions of the EPs were calculated for all combinations of the 19 speakers (N=19). Between the i-th and j-th speakers, the shift in the peak position from the center, c_ij, was extracted using the cross-correlation function. Note that the EP was linearly interpolated in advance by a factor of 10 to make the resolution of the peak shift as a 0.1 channel. The resulting shift was assumed to be caused by the VTL difference. When the orders of i and j are swapped, the amount of shift is the same in absolute value and only the sign is reversed. Therefore, the matrix notation of all permutations is represented as [ 0 c_12 c_13 ⋯ c_1N; -c_12 0 c_23 ⋯ c_2N; -c_13 -c_23 0 ⋯ c_3N; ⋮ ⋮ ⋮ ⋱ ⋮; -c_1N -c_2N -c_3N ⋯ 0 ]. The relative shift for the i-th speaker, S_i, from the average of all speakers is calculated by taking the difference between the vertical and horizontal sums of this matrix. C_j^row = ∑_i=1^Nc_ij,      C_i^col = ∑_j=1^Nc_ij S_i = (C_i^row - C_i^col)/2N Although this equation is simple, the equivalent value was obtained using an estimation method with a generalized inverse matrix <cit.>. Since this value is a shift quantity on a logarithmic scale, its exponent yields the VTL, L_i as L_i = exp(q S_i) ·L̅ where L̅ is the measured VTL averaged across speakers. q is a conversion coefficient that depends on the spectral representation. In this study, q was determined to minimize the squared error between the regression line, calculated from the measured and estimated VTLs for all 19 speakers and five vowels, and the 1:1 identical line. §.§ Estimation results Figure  <ref> shows scatter plots between the measured and estimated VTLs when using Ep (a) and Ep_SSI with h_max=3.5 (b). 
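The pairwise estimation procedure of Eqs. (2)-(5) can be written compactly. The sketch below is our paraphrase of the algorithm with a fully synthetic check: the toy spectra, the fabricated VTLs, and the value of q (taken from the known log-frequency channel spacing rather than fitted by regression as in the paper) are all our own illustrative choices. With numpy's correlation convention the recovered lengths match the fabricated ones up to channel quantization.

import numpy as np

n_ch = 100
log_f = np.linspace(np.log10(100), np.log10(8000), n_ch)   # log-frequency axis, 100 channels
step = log_f[1] - log_f[0]

def toy_spectrum(vtl_cm, ref_vtl=16.0, formants=(700.0, 1200.0, 2500.0), bw=0.03):
    """Synthetic log-frequency envelope whose formants scale inversely with VTL."""
    s = np.zeros(n_ch)
    for f in formants:
        s += np.exp(-0.5 * ((log_f - np.log10(f * ref_vtl / vtl_cm)) / bw) ** 2)
    return s

def pairwise_shifts(spectra):
    """Antisymmetric matrix c[i, j] of cross-correlation peak lags in channels, as in Eq. (2)."""
    n = len(spectra)
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            xc = np.correlate(spectra[i] - spectra[i].mean(),
                              spectra[j] - spectra[j].mean(), mode="full")
            c[i, j] = np.argmax(xc) - (n_ch - 1)
            c[j, i] = -c[i, j]
    return c

def estimate_vtl(spectra, q, mean_vtl):
    c = pairwise_shifts(spectra)
    # Eqs. (3)-(4): column sums minus row sums give each speaker's relative shift
    s = (c.sum(axis=0) - c.sum(axis=1)) / (2 * len(spectra))
    return np.exp(q * s) * mean_vtl                        # Eq. (5)

true_vtl = np.array([13.0, 15.0, 17.0, 18.5])              # cm, fabricated for this demo
spectra = np.stack([toy_spectrum(v) for v in true_vtl])
q = step * np.log(10)                                      # known here; fitted by regression in the paper
print(np.round(estimate_vtl(spectra, q, true_vtl.mean()), 2), "vs", true_vtl)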
The correlation coefficients for Ep and Ep_SSI, calculated over all vowels and speakers, were 0.71 and 0.80, respectively. The vowel labels are generally closer to the regression line for Ep_SSI than for Ep. This finding indicates that applying the SSI weight to the EP can improve the accuracy of the VTL estimation, as described in Section <ref>. §.§.§ The effect of h_max on VTL estimation Figure <ref> shows the correlation coefficients between the measured and estimated VTLs calculated for h_max values between 0 and 6 in 0.5 steps. We assumed that the frequency response of the vocal tract filter may be best extracted by properly setting the h_max value to reduce the resolved harmonics in the EP shown in Fig. <ref>, although the h_max value was arbitrarily fixed at 5.0 when explaining the results of the human size perception experiments <cit.>. Moreover, it is essential to properly estimate vocal tract information for all vowel types. The correlation coefficients were therefore calculated for each vowel, as shown by the lines in Fig. <ref>. When h_max was less than 3, the correlation coefficients for `i' and `u' (low first formant, F_1) were high, while those for `a,' `e,' and `o' (high F_1) were low. This result implies that the F_o information from the resolved harmonics was not sufficiently suppressed in the low-frequency region. In contrast, when h_max exceeded 4, the correlation coefficients for `a' and `o' (high F_1) were high, and those for `i' and `u' (low F_1) were low. This implies that the F_1 and F_o information was excessively suppressed. The best results were obtained when h_max was 3.5, for which the difference between the five vowels was small and the correlation coefficient obtained from all five vowels (`All') was the highest. In this case, information about the vocal tract and glottal vibration seemed to be properly separated regardless of vowel type. § COMPARISON WITH COMMONLY USED SPECTRA We compared the estimation performance obtained with the above auditory representation and with commonly used spectral representations to evaluate their effectiveness. A comparison was made with a Fourier spectrum, “F,” and a Mel-filterbank (MFB) spectrum <cit.>, “M.” We also included a WORLD spectrum <cit.>, “W,” because it reduces the effect of F_o by smoothing the frequency distribution and is commonly used in voice conversion as a successor of Tandem-STRAIGHT <cit.>. The estimation algorithm was identical to that described in Section <ref>. §.§ Calculation of the spectrum The short-time Fourier spectrogram of the vowel sound was calculated with a frame length of 25 ms, a Hamming window, and a frame shift of 5 ms. The Mel spectrogram was obtained from the STFT spectrogram with a Mel filterbank with 25 filters equally spaced on a Mel frequency axis corresponding to between 100 and 8000 Hz. The WORLD spectrogram was obtained with a default frame rate of 5 ms. Then, the obtained amplitude spectrogram, S(f_p,τ), was subjected to logarithmic compression, 20 log_10{S(f_p,τ)}, and power compression, S(f_p,τ)^P  {P | 0.1≤ P ≤ 1.0, every 0.1 }. The compressed spectrogram in the range of ±25 ms from the center was averaged to obtain the spectrum, as was done for the EP (Section <ref>). For the Fourier and WORLD spectra, the frequency axis was logarithmically transformed, and a spectrum with 100 channels equally spaced between log_10(100) and log_10(8000) was obtained by linear interpolation. The Mel spectrum with 100 channels was obtained from the 25-channel spectrum via linear interpolation.
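As a concrete illustration of the compression and resampling just described, the following NumPy sketch maps a time-averaged amplitude spectrum onto 100 channels equally spaced on a log-frequency axis (the Fourier/WORLD case; for the 25-channel Mel spectrum the interpolation is done over the filterbank channels instead). It assumes the spectrogram has already been computed; the small floor value guarding log(0) and the function names are illustrative choices.

import numpy as np

def time_average(spectrogram, frame_times, center, half_width=0.025):
    # Average the frames within +/-25 ms of the utterance center.
    mask = np.abs(frame_times - center) <= half_width
    return spectrogram[:, mask].mean(axis=1)

def compress_and_resample(spec, freqs, power=None, n_channels=100,
                          f_lo=100.0, f_hi=8000.0):
    # power=None applies dB compression 20*log10(S); otherwise the power
    # compression S**P with 0.1 <= P <= 1.0 is applied.
    spec = np.maximum(spec, 1e-12)
    comp = 20.0 * np.log10(spec) if power is None else spec ** power
    log_f = np.linspace(np.log10(f_lo), np.log10(f_hi), n_channels)
    return np.interp(log_f, np.log10(np.maximum(freqs, 1e-12)), comp)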
As described in Section <ref>, the SSI weight improved the estimation performance when applied to the EP and is applicable to any spectrum. Therefore, the SSI weight was also applied to these spectra to investigate its effect. Table <ref> presents the abbreviations for each compressed spectrum. §.§ Effect of compression on the estimation Initially, we sought to identify the logarithmic and power compressions that yielded the best VTL estimation for each spectrum. The resulting spectra are then suitable for comparison with the EP. Figure <ref> shows the correlation coefficients between the measured VTLs and the VTLs estimated from the various compressed Fourier spectra. The correlation coefficients were generally higher when using the spectrum with the SSI weight (left panel) than when using the original spectrum (right panel). The Fourier spectrum with the SSI weight, F_SSI^(dB), yielded the best correlation coefficient and a relatively small variability between vowels. The F^(dB) in the left panel showed a high correlation coefficient, although it was not the best. We used these data for comparison. Figure <ref> shows the results for the Mel spectrum. The correlation coefficients were generally smaller than those for the Fourier spectrum. The Mel spectra M^(0.4) and M_SSI^(0.4) were the best in each panel. We also included M^log, which is the spectrum used for the calculation of the Mel-frequency cepstrum coefficients (MFCC) <cit.>, and M_SSI^log in the comparison. Figure <ref> shows the results for the WORLD spectrum. We selected W^log and W_SSI^log for comparison because they provided the best correlation coefficients. §.§ Comparison between the EP and various spectra Figure <ref> shows the correlation coefficients for the EP and the spectra selected above. Ep_SSI had the highest correlation coefficient and the smallest variability among the five vowels. In contrast, M^log had the lowest correlation coefficient. Notably, the correlation coefficient was always higher, for any spectrum, when the SSI weight was introduced. Therefore, the SSI weight can improve the estimation of vocal tract information. These differences were tested statistically. We performed the VTL estimation 10 times using data from 16 speakers after excluding two males and one female from the original 19 speakers at random. Subsequently, the RMS error between the measured and estimated VTLs was calculated. If the VTL estimation is sufficiently good, it should be stable and accurate even if three data points are randomly eliminated. Figure <ref> shows the means and standard deviations, together with the results of Tukey's HSD multiple-comparison test. The RMS error for Ep_SSI was approximately 1 cm, which was significantly smaller than those for the other spectra. In contrast, the RMS error for M^log was approximately 3 cm, which was significantly greater than those for the other spectra. More importantly, the results also demonstrate that introducing the SSI weight reduced the error for every spectrum. These reductions were statistically significant (p<0.05), except for the WORLD spectrum. It is noteworthy that the difference between the errors of M^log and M_SSI^log was extremely large. §.§ Some lessons from the results Suggestions obtained from the results may serve as lessons for future speech signal processing. §.§.§ Effective auditory representation Many “auditory-motivated” models have been proposed for various speech signal processing tasks. Most of them introduce only peripheral frequency analysis, such as auditory and Mel filterbanks.
The current results imply that such a frequency analysis is not sufficient for extracting vocal tract information. The introduction of the SSI weight improved the performance. As shown in Figs. <ref> and <ref>, the SSI weight is a simplified version of the Size-Shape Image in the SWMT, which was proposed as a computational theory of the central auditory process. It is important to introduce knowledge of human auditory processing to derive effective auditory representation. §.§.§ Effectiveness of MFCC Mel-frequency cepstrum coefficient (MFCC) has been commonly used in many kinds of speech processing after its tremendous success in ASR <cit.>. MFCC has also been used in VTL estimation <cit.>, even in recent DNN studies <cit.>. However, it has rarely been questioned whether the MFCC is an effective representation for this purpose. The log Mel-spectrum M^log is a basic representation for calculating MFCC. The results in Fig. <ref> imply that information about the vocal tract and glottal vibration is not sufficiently separated in M^log. As the discrete cosine transform (DCT) is applied across all frequency ranges, all cepstrum coefficients unavoidably contain both types of information. Therefore, the use of MFCC does not seem effective for VTL estimation, even if a state-of-the-art DNN method is used in the back-end. This is because the DNN is required to segregate both types of information embedded in the individual cepstral coefficients before estimating the VTL. Although it would be possible to use a large number of parameters to resolve this, interpretation of the internal representation could be difficult because of complexity. A modified version of the MFCC derived from the log Mel-spectrum associated with the SSI weight may improve the performance and interpretation. §.§.§ Merit and usage of the SSI weight the SSI weight is applicable to any type of commonly used spectra because it is a simple F_o adaptive function on a frequency axis (Eq. <ref> and Fig. <ref>). It is not necessary to estimate F_o accurately because the SSI weight is less sensitive to the F_o value. the SSI weight can be easily implemented in any speech processing program by adding a few lines and an F_o estimation package such as WORLD <cit.>. Practically, it has been introduced into an objective speech intelligibility measure, GESI <cit.>, to improve the prediction of both male and female voices. § SUMMARY In this study, we investigated auditory representations, which are effective for estimating vocal tract information. We proposed the use of the SSI weight which is derived from SWMT to segregate information about the vocal tract and glottal pulse from speech sounds. The auditory EP associated with the SSI weight improved the estimation of VTLs measured from the MRI data. Moreover, the estimation error was significantly smaller than when using the commonly used Fourier, Mel, and WORLD spectra. It was also demonstrated that the SSI weight can be easily introduced into these spectra to improve the performance. § ACKNOWLEDGMENTS This research was supported by JSPS KAKENHI Nos. 21H03468 and 21K19794. The authors would like to thank Prof. Kitamura for providing the female MRI-VTL data. § APPENDIX A. SWMT AND THE SSI WEIGHT Irino and Patterson <cit.> proposed Stabilized Wavelet-Mellin Transform (SWMT) as a computational model for extracting size and shape information of vocal tract from speech sounds. We briefly explain the relationship between the SWMT and SSI-weight described in Section <ref>. 
For more details, see Appendix A in <cit.>. The upper path in Fig. <ref> shows the signal processing of the SWMT, which is supported by several experiments on size information processing in the auditory system (for example <cit.>). This is effective in theory but has a problem when applied to practical applications. The process of strobe temporal integration for stabilization and 2-dimensional conversion in SWMT is rather difficult to implement in a computational model. To resolve the problem, we developed the SSI weight (Eq. <ref>) which is applicable to any spectrographic representations. The lower path in Fig. <ref> shows the analysis using an auditory spectrogram. The auditory filterbank is the same as that in SWMT. The output is processed by windowing and averaging to obtain an excitation pattern (EP). This process is simple and produces a stable representation of the spectral information in the sound, although it discards the temporal fine structure, which also plays an important role in the auditory perception <cit.>. An auditory spectrogram is a stream of EPs derived from each frame. As described in Section <ref>, the product of the SSI weight (in the center box) and the frame-based spectrum is suitable for extracting size information.
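As a final illustration of how little code the weighting requires, the sketch below applies the SSI weight frame by frame to a frame-based spectrogram, with h_max = 3.5 as found best above and F_o ≃ 0 treated as unvoiced. The array shapes and names are illustrative assumptions; any F_o tracker (for example, WORLD) can supply the per-frame F_o values.

import numpy as np

def ssi_weight(f_p, f_o, h_max=3.5):
    # w_SSI(f_p, F_o) = min(f_p / (h_max * F_o), 1); unity for unvoiced frames.
    f_p = np.asarray(f_p, dtype=float)
    if f_o <= 0.0:
        return np.ones_like(f_p)
    return np.minimum(f_p / (h_max * f_o), 1.0)

def apply_ssi_weight(spectrogram, f_p, f_o_track, h_max=3.5):
    # spectrogram: (n_channels, n_frames); f_o_track: one F_o value per frame;
    # f_p: peak (or center) frequency of each channel in Hz.
    w = np.stack([ssi_weight(f_p, fo, h_max) for fo in f_o_track], axis=1)
    return spectrogram * w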
http://arxiv.org/abs/2306.04371v1
20230607120826
Large-Scale Cell Representation Learning via Divide-and-Conquer Contrastive Learning
[ "Suyuan Zhao", "Jiahuan Zhang", "Zaiqing Nie" ]
cs.CE
[ "cs.CE" ]
Large-Scale Cell Representation Learning via Divide-and-Conquer Contrastive Learning Suyuan Zhao, Jiahuan Zhang, Zaiqing Nie July 31, 2023 ============================================================================= Single-cell RNA sequencing (scRNA-seq) data is a potent tool for comprehending the “language of life” and can provide insights into various downstream biomedical tasks. Large-scale language models (LLMs) are starting to be used for cell representation learning. However, current LLM-based cell representation learning methods depend solely on the BERT architecture, causing an anisotropic embedding space that leads to inefficient semantic representation. Contrastive learning alleviates this problem by distributing the embeddings uniformly. A larger batch size in contrastive learning results in better representations, but the practical application of contrastive learning to cell representation learning is hampered by the high dimensionality of scRNA-seq data and the large parameter volume of LLMs. To address the batch size limitation, we propose a novel divide-and-conquer contrastive learning approach to decouple the batch size from the GPU memory size for cell representation learning. Based on our divide-and-conquer contrastive learning approach, we introduce the Single-Cell Language Model (CellLM), a large-scale cell representation learning model to handle high-dimensional scRNA-seq data with tens of thousands of genes. CellLM has over 50 million parameters trained with 2 million scRNA-seq data and makes the first attempt to learn cell language models from both normal cells and cancer cells. CellLM achieves new state-of-the-art (SOTA) results in all evaluated downstream tasks, including a 71.8 F_1-score for cell type annotation (a 3.0% absolute improvement over scBERT), an average F_1-score of 88.9 for single-cell drug sensitivity prediction in a few-shot scenario (an 8.3% absolute improvement), and a 93.4 Pearson's correlation for single-omics cell line drug sensitivity prediction (a 6.2% absolute improvement). The pre-trained model, code, and datasets for our CellLM are accessible at <https://github.com/BioFM/OpenBioMed>. § INTRODUCTION Single-cell RNA sequencing (scRNA-seq) data is a potent tool for comprehending the “language of life” and can provide insights into various downstream biomedical tasks. Large-scale language models (LLMs) are beginning to be used to decipher the coding language of life and have achieved some success <cit.>, and several very recent studies have shown the effectiveness and feasibility of applying LLMs for the representation of single-cell data <cit.>. scBERT <cit.> is the first study to encode scRNA-seq data using the LLM approach. It uses more than one million normalized, unlabeled scRNA-seq profiles and utilizes the BERT-based pre-training model Performer to obtain representations of scRNA-seq data. Exceiver <cit.> works in a similar manner. However, these approaches depend solely on the BERT architecture for cell representation, and studies <cit.> have revealed that directly applying BERT may lead to a degradation in its representation quality due to the anisotropy of the embedding space. More specifically, low-frequency words are not effectively trained during pre-training, causing their embedding vectors to be sparsely distributed in the feature space. On the other hand, the embedding vectors of high-frequency words, which are well-trained, tend to cluster together in the feature space.
The uneven distribution of the embedding space restricts the ability to measure semantic associations between high-frequency and low-frequency words. Likewise, scRNA-seq data show diverse gene expression frequencies, indicating that the shortcomings of the BERT architecture will carry over to the semantic representation process of scRNA-seq data. Contrastive learning addresses the anisotropy issue by uniformly distributing embeddings through learning positive and negative sample features. However, a fundamental challenge in contrastive learning lies in ensuring a sufficient number of negative samples in each training batch, which is vital for the model to learn effective features. Unfortunately, this challenge is exacerbated by the GPU memory size limitation, making it difficult to significantly increase the batch size, especially in the most common end-to-end contrastive learning methods <cit.>. In the case of scRNA-seq data, the data dimension is determined by the number of genes in a cell, which can be as high as 19,379 known protein-coding genes in humans, even higher if non-coding genes are included <cit.>. The high dimensionality of scRNA-seq data and the enormous parameters of LLMs pose a challenge in achieving a large batch size when using contrastive learning for cell representation tasks, especially when the size of GPU memory is restricted. Therefore, it is crucial to decouple the batch size from the GPU memory size. The existing methods have made strides in tackling these challenges. For instance, the memory bank approach <cit.> addresses this issue by expanding the number of negative samples through the maintenance of a vast negative sample queue that is not involved in gradient backpropagation. However, this method results in asynchronous updates of encoders for positive and negative samples, causing discrepancies not only between the samples but also between the encoders, resulting in training instability. MoCo <cit.> proposes a momentum encoder to mitigate inconsistent updates between the negative encoder and the positive encoder in the memory bank. However, in comparison to the end-to-end contrastive learning approach, MoCo only reduces the asynchronous update of the encoder. The truth of the matter is that the embeddings of positive and negative samples are still produced by different encoders. To overcome the limitation of batch size and ensure that positive and negative samples are generated by the same encoder, we propose a novel divide-and-conquer contrastive learning approach to decouple the batch size from the GPU memory size for cell representation learning. More importantly, the divide-and-conquer contrastive learning method has been mathematically rigorous and has been proven to be completely equivalent to the end-to-end contrastive learning method. It means that we can increase the batch size while simultaneously updating the encoder for positive and negative samples without introducing any additional errors. By leveraging the concept of trading time for space, a big batch is divided into several smaller mini-batches. The gradient update calculations are then carried out in sequence, allowing us to increase the appropriate batch size without compromising the synchronization of encoder updates for positive and negative samples. By distributing the workload over multiple smaller batches, we can effectively utilize available GPU memory resources while maintaining consistency in the encoder updates. 
Based on our divide-and-conquer contrastive learning approach, we introduce Single-Cell Language Model (CellLM), a large-scale cell representation learning model to handle high-dimensional scRNA-seq data with 19,379 genes. CellLM has over 50 million parameters trained with 2 million scRNA-seq data. It’s worth mentioning that CellLM is the first attempt to learn cell language models from both normal and cancer cells. Since cancer scRNA-seq data is helpful in understanding and treating cancer at the single-cell level <cit.>, it ultimately leads to more widespread and cost-effective treatment options at the human body level. In addition, due to the sparsity of single-cell data, we reduce the computational load of pre-training by dynamically incorporating genes with expressions, instead of utilizing full-length gene sequences. We validate the performance of CellLM on a range of downstream biomedical tasks and achieve new SOTA in all evaluated tasks: including a 71.8 F_1-score for cell type annotation (a 3% absolute improvement over scBERT), an average F_1-score of 88.9 for single-cell drug sensitivity prediction in a few-shot scenario (an 8.3% absolute improvement), and a 93.4 Pearson's correlation for single-omics cell line drug sensitivity prediction (a 6.2% absolute improvement). Our experiments demonstrate that CellLM produces a superior representation of single-cell, enhances the semantic representation of cell line data, and enables more precise virtual drug screening along the entire chain. Our main contributions can be summarized as follows: * We propose a novel divide-and-conquer contrastive learning approach to decouple the batch size from the GPU memory size for cell representation learning, which effectively addresses the embedding space anisotropy problem caused by the BERT architecture. The divide-and-conquer contrastive learning has undergone strict mathematical analysis and has been proven to be completely equivalent to the end-to-end contrastive learning method. * We introduce CellLM, a large-scale cell representation learning model with over 50 million parameters trained with 2 million scRNA-seq data. CellLM makes the first attempt to learn cell language models from both normal cells and cancer cells. * CellLM achieves SOTA results on a range of downstream biomedical tasks. It has been proven to enhance cell representation and aid in virtual drug screening. § RELATED WORKS Representation of scRNA-seq data. The gene expression profile provides valuable information about the expression levels of genes within a single cell, making it a crucial component of research studies. Nearly 20k human protein-coding genes are known now <cit.>, and directly analyzing such high-dimensional data poses an extreme challenge. Additionally, scRNA-seq data suffer from a high false dropout rate, leading to the “Dropout Zeros” phenomenon <cit.>. Researchers have devised a range of methods to tackle the challenges of such highly sparse and noisy data. Traditional approaches attempt to analyze scRNA-seq by dimensionality reduction, such as manually selecting the marker genes <cit.>, machine learning methods <cit.>, or autoencoder-based methods <cit.>. However, the genes selected manually are often based on empirical observations <cit.>, and machine learning methods have high complexity and limited noise resistance. The autoencoder-based methods heavily rely on the similarity between test and training data. LLMs have the potential to be applied to modeling scRNA-seq data. 
scBERT <cit.> is the first work to encode scRNA-seq data by LLM. scBERT employs the Performer <cit.> architecture to obtain gene expression representations of single-cells from over a million normalized unlabeled scRNA-seq data with 6 million parameters. Compared with domain-specific tools, scBERT achieves superior performance on the cell type annotation task. Exceiver <cit.> uses the Perceiver IO <cit.> architecture to obtain representations of single-cell count matrix data. It is pre-trained on almost 0.5 million count matrix data of single-cell gene expression from healthy humans, and its effectiveness is verified on downstream tasks. Contrastive Learning. Contrastive learning (CL) is a widely utilized self-supervised learning technique in computer vision (CV) <cit.> and natural language processing (NLP) domains <cit.>. It aims to train the encoder by generating similar embeddings for data in the same class while maximizing the dissimilarity between embeddings of different classes in the feature space. However, a significant challenge in contrastive learning is ensuring an adequate number of negative samples in each training batch, as it is crucial for the model to learn effective features. Unfortunately, this challenge is further complicated by the limitation of GPU memory size, particularly in popular end-to-end contrastive learning methods <cit.>. Numerous research efforts have focused on addressing this issue and proposed various solutions <cit.>. Several works have applied CL to improve the representation of scRNA-seq data. Concerto <cit.> employs a teacher-student network to construct positive samples in CL. CLEAR <cit.> uses data augmentation methods such as adding noise, random mask, inner swap, and dropout to construct positive samples. However, artificially perturbing the original data may alter the semantics of genes, which may not be a suitable data augmentation method for scRNA-seq data analysis where each gene expression level corresponds to a specific meaning. § METHODS In this section, we provide a comprehensive description of the CellLM workflow. The framework of the CellLM is illustrated in Fig.<ref>. §.§ Pre-Processing of scRNA-seq Data The raw data of scRNA-seq is provided as a count matrix, denoted as M={n_k}_k=1^N, where n_k is an integer representing the count of the k-th gene in the scRNA-seq and N is the number of genes. Due to the variations in sequencing protocols and conditions, the data from different sequencing batches have different levels and are not comparable. So we normalize them to X={x_k}_k=1^N, where x_k=log(1+n_k/∑_j=1^Nn_j× 10000). As scRNA-seq data is high dimensionality and sparsity, we only select the positions of each non-zero gene expression in the cell P = {p_k}={j | x_j∈ X and x_j≠ 0}, and their corresponding expression levels in Y={y_k}={x_p_k}. This reduces the original training cost by over 70%. §.§ Model Architecture We introduce the architecture of CellLM, it consists of three trainable parts: Expression encoder (φ_E). The gene expression level is obtained by normalizing the count matrix and is actually discrete in each cell. Ignoring this characteristic and treating it as continuous for model input will introduce significant noise. Therefore, we divide it into several bins based on expression level and map each bin to a trainable 512-dimensional encoding. Gene Encoder (φ_G). The protein-protein interaction (PPI) network can reflect the relationships between genes. 
As shown in Fig.<ref>, we use one of the state-of-the-art graph representation learning methods, GraphMAE <cit.>, to obtain gene embeddings as additional knowledge. Performer-based module. The Performer model is a variant of the Transformer model. In the Transformer model, the complexity of computing the attention matrix is quadratic in the sequence length, while the Performer calculates an approximate attention matrix with linear complexity. Since the scRNA-seq input sequences are relatively long, using the Performer architecture can significantly reduce memory usage and improve computational efficiency. We input {P, Y} obtained from Section 3.1 into the model. The input matrix C={c_k} to the Performer consists of two parts: the gene embedding φ_G(p_k) and the expression embedding φ_E(y_k); we inject knowledge of gene interactions into each gene expression by adding them. C is then passed through the Performer model to obtain the encoding H={h_k}. That is, c_k = φ_G(p_k)+φ_E(y_k) H = Performer(C) §.§ Pre-training CellLM During the pre-training process, researchers use suitable self-supervised tasks to help the model learn general representations of the data, which can be widely applied or transferred to multiple downstream applications. In order to improve the cell representation capability of the model, we design the following three self-supervised tasks based on the features of the scRNA-seq data we used. §.§.§ Divide-and-Conquer Contrastive Learning We introduce contrastive learning to alleviate the problem of representation degradation caused by BERT-based methods and to enhance the encoding capability for cell representation. A crucial step in contrastive learning is constructing positive and negative samples for data augmentation, which helps the model better understand the data features. In scRNA-seq data, each gene expression level carries a unique meaning, and artificial data augmentation methods such as shuffling and perturbing the input data may disrupt the gene expression semantics. We believe that perturbation at the feature level is more suitable for data augmentation in scRNA-seq data. Therefore, we use two instances of standard dropout applied to the same single-cell to construct positive samples <cit.>, while other single-cells in the same batch serve as negative samples. The loss function is as follows: ℒ_CL=-1/T∑_i=1^T log( e^sim(h_i, h_i^+)/τ / ∑_j=1^T e^sim(h_i, h_j^+)/τ ), where T is the batch size, sim is the cosine similarity function, τ is the temperature parameter, and h_i and h_i^+ represent the encodings of the i-th sample and its positive counterpart, respectively. Due to the enormous number of model parameters and the high-dimensional input sequences, it is challenging to train the model with a large batch size within a limited GPU memory size. However, contrastive learning requires a large batch size to provide enough negative samples. Therefore, we designed divide-and-conquer contrastive learning to separate the batch size from the actual amount of data being simultaneously processed. (Figure: Divide-and-conquer contrastive learning detail.) This allows us to train the model with a larger effective batch size while respecting the memory constraints. At the same time, this method retains the advantages of end-to-end contrastive learning, including a synchronous update of the encoder and a lossless comparison of positive and negative samples.
It has been mathematically rigorous and proved to be completely equivalent to the end-to-end contrastive learning method, the mathematical proof is detailed in Appendix <ref>, where we provide a demonstration of the following: ∀ω∈Ω,∂ℒ/∂ω = ∑_k=1^S∂ℒ^(k)/∂ω, where Ω is the parameter set of the model and S is the number of steps during the divide-and-conquer contrastive learning. ℒ and ℒ^(k) is the loss of the end-to-end contrastive learning and the k-th step of the devide-and-conquer contrastive learning, respectively. The diagram of divide-and-conquer contrastive learning is shown in Fig.<ref>, and its details are presented as follows: Step 1: For a large batch size (denoted as T), we first pass all inputs through the encoder in chunks without saving gradients, computing all h_i and h_i^+ (1≤ i≤ T). Step 2: Next, we partition the large batch into mini-batches (the mini-batch size is denoted as t) to perform encoding calculations while saving gradients. This process generates h_j^' and h_j^'+ with gradients for each sample in the mini-batch, where k· t ≤ j≤ (k+1)· t. We replace the corresponding positions of h_j and h_j^+ with h_j^' and h_j^'+ for the computation of the contrastive learning loss function and then perform backpropagation. Step 3: Repeat Step 2, performing computations for other mini-batches. During this process, gradient accumulation is used, and model parameters are not updated. We continue this repetition until all T samples have been processed. §.§.§ Masked Language Modeling In our approach, we randomly mask the gene expression levels in the input sequence and utilize the model's output vectors at the corresponding positions to predict the reconstruction of the original input. We use the cross-entropy loss function as the loss function for this multi-classification task, defined as: ℒ_MLM=-1/N∑_i=1^N∑_j=1^M v_ijlog(v̂_ij), where N is the number of masked genes, M is the number of categories, v_ij and v̂_ij are the label and predicted probability, respectively, for the i-th gene expression being assigned to the j-th class. §.§.§ Cell Type Discrimination Since we perform representation learning on both healthy single-cell and cancer single-cell data, we specifically incorporate a pre-training task to distinguish tumor cells from normal cells. This task aims to make the model focus on cell-level representations while also highlighting the differences between tumor cells and normal cells, thereby obtaining a better semantic understanding of both healthy and diseased cells. A label is added at the beginning of each single-cell gene expression sequence, and the output at this position is used to predict whether the cell originates from tumor tissue or normal tissue. We use the cross-entropy loss function defined as: ℒ_CLS=-l·log(v_CLS)-(1-l)·log(1-v_CLS), where l and v_CLS are the label and predicted probability of this cell, respectively. § EXPERIMENTS In this section, we begin by providing an overview of the experimental settings, which encompass the datasets and baselines we chose, and the evaluation metrics employed for tasks. Subsequently, we delve into a thorough analysis of the experiment results. Due to space constraints, we focus on providing essential information and experiment details in this section, and other supplementary information (e.g. experiment conditions, details of fine-tuning, hyperparameter settings, additional ablation experiments on alternative pre-training tasks, etc.) are in Appendix <ref>, <ref>-<ref>. 
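As a concrete illustration of Steps 1–3 above, the following PyTorch sketch performs one optimization step of divide-and-conquer contrastive learning. The generic encoder, the chunk size, and the temperature are placeholders; normalizing the embeddings implements the cosine similarity in the loss. For the cached and re-encoded embeddings to coincide, which is the condition under which the accumulated gradients match the end-to-end loss exactly (cf. the gradient-equivalence proof in the appendix), the encoder is treated as deterministic here; handling the dropout-based augmentation exactly would require replaying the dropout masks. This is an illustrative sketch, not the released CellLM code.

import torch
import torch.nn.functional as F

def dc_contrastive_step(encoder, x, x_aug, optimizer, chunk=32, tau=0.1):
    # x, x_aug: two views of the same batch of T cells, shape (T, ...).
    T = x.shape[0]
    optimizer.zero_grad()

    # Step 1: encode the whole batch chunk by chunk without gradients.
    with torch.no_grad():
        h_all = torch.cat([F.normalize(encoder(x[i:i + chunk]), dim=-1)
                           for i in range(0, T, chunk)], dim=0)
        hp_all = torch.cat([F.normalize(encoder(x_aug[i:i + chunk]), dim=-1)
                            for i in range(0, T, chunk)], dim=0)

    labels = torch.arange(T, device=x.device)

    # Steps 2-3: re-encode one mini-batch at a time with gradients, splice it
    # into the cached embeddings, and accumulate gradients across chunks.
    for start in range(0, T, chunk):
        end = min(start + chunk, T)
        h_c = F.normalize(encoder(x[start:end]), dim=-1)
        hp_c = F.normalize(encoder(x_aug[start:end]), dim=-1)
        h = torch.cat([h_all[:start], h_c, h_all[end:]], dim=0)
        hp = torch.cat([hp_all[:start], hp_c, hp_all[end:]], dim=0)
        logits = h @ hp.t() / tau          # full T x T similarity matrix
        loss = F.cross_entropy(logits, labels)
        loss.backward()                    # gradients accumulate across chunks

    optimizer.step()

Because F.cross_entropy averages over all T rows of the full similarity matrix and the gradients of successive chunks are accumulated, one step of this routine touches every positive and negative pair while only ever building the autograd graph for `chunk` samples at a time.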
§.§ Experiment Settings §.§.§ Datasets Pre-training data During pre-training, we utilize almost 2 million scRNA-seq data, obtain from two distinct sources: PanglaoDB <cit.> and CancerSCEM <cit.>. PanglaoDB is an online database of open scRNA-seq data resources, and we leverage 1,126,580 single-cell data points from 74 human tissues in our pre-training process. CancerSCEM is a curated collection of high-quality human cancer scRNA-seq data from the literature, and we utilize 638,341 single-cell data points from 208 cancer samples. The specific datasets are chosen based on their relevance to our research on the understanding and application of normal and cancer cells. Downstream tasks We evaluate the representation ability of CellLM on two different kinds of tasks at two cell data levels. The first task is cell type annotation which involves predicting cell types from scRNA-seq data representation and directly verifies the representation power. We use human peripheral blood mononuclear cells (PBMCs) dataset Zheng68k <cit.> and pancreas dataset Baron <cit.> for this task. The second task is drug sensitivity prediction which requires to predict drug sensitivity through gene expression values. The single-cell level and cell line level are used in the drug sensitivity prediction task. For single-cell task, we conduct full and few-shot scenarios experiments on two datasets, human lung cancer cells (GSE149383) <cit.> and human oral squamous cancer cells (GSE117872) <cit.>. For the cell line task, we evaluate the representation ability of CellLM on cell lines in drug sensitivity data integrating from CCLE <cit.> and GDSC <cit.>. §.§.§ Baselines and Evaluations Cell type annotation. scBERT is the first single-cell LLM and is the best-performing model on this task. To provide more comprehensive comparisons, we select Scanpy <cit.>, a widely used tool for single-cell analysis, as our baseline. We choose macro F_1-score, weighted F_1-score, and accuracy as the evaluation metric. Drug sensitivity prediction. We compare the capabilities of scBERT on single-cell data. In the case of cell line data, we compare our results with both scBERT and DeepCDR <cit.>, utilizing single-omics data. Single-cell drug sensitivity prediction predicts whether the cell is sensitive to the drug, so we use F_1-score for evaluation. Cell line drug sensitivity prediction is to predict the IC50 of the drug experiment, which is a regression task. Therefore, we use Pearson's correlation, RMSE, MAE, and R^2 for evaluation. For the cell line drug sensitivity prediction task, we implement a simplified version of the drug encoder in TGDRP <cit.>. §.§ Experiment Results §.§.§ Results of Cell Type Annotation First, we evaluate the representation ability of CellLM on the task of single-cell type annotation. The results present in Table <ref> and the heatmap of the confusion matrix in Fig.<ref> demonstrate that our model achieves state-of-the-art (SOTA) performance on Zheng68K and Baron datasets. Notably, the macro F_1-score surpasses the current leading model scBERT by 3.0% and 5.7%, respectively. Our method successfully optimizes the original feature space of BERT and significantly improves the representation degradation caused by anisotropy in the embedding space. Moreover, these findings validate the effectiveness of the divide-and-conquer contrastive learning approach. §.§.§ Results of Drug Sensitivity Tasks In the single-cell drug sensitivity prediction experiments, we conduct both few-shot and full data scenarios. 
In the few-shot experiment, we aim to simulate the scarcity of drug sensitivity data, which closely resembles real-world scenarios. Here, we only utilize 5% of the available data samples for training. Conversely, the full data experiment employs 80% of the data for training. The experiment results are summarized in Table <ref>. Comparing with scBERT, CellLM demonstrates substantial improvements in both scenarios across the two datasets. Specifically, in the few-shot scenario, CellLM achieves an average improvement of 8.25%. In the full-data scenario, the average improvement is 1.65%. These experiment outcomes highlight the enhanced understanding of cancer obtained by CellLM, as it surpasses scBERT, which is solely pre-trained on normal human tissues. The incorporation of cancer data enables CellLM to capture cancer-specific features, leading to superior representations and significantly improved performance in cancer drug virtual screening. Table <ref> presents the results of the drug sensitivity prediction experiment conducted on cell lines. We simulate two scenarios that resemble real-world drug repurposing efforts <cit.>: cell line warm start and cell line cold start. In the warm-start scenario, the data is randomly divided into training, validation, and test sets. Conversely, in the cold-start scenario, the test set contains cell lines that are not seen during training. This setup requires the model to predict drug responses for novel cell lines it has never encountered before. Since our focus is on cell embedding rather than drug-specific performance, we do not establish a specific experiment scenario for drug cold start. Compared with DeepCDR, CellLM demonstrates absolute improvements of 6.2% and 3.8% in Pearson's correlation coefficient under the two scenarios. The experiment results indicate that the knowledge acquired by CellLM from single-cell transcriptomics remains effective when transferred to cell lines. Furthermore, CellLM's performance exceeds that of current methods for single-omics data. §.§.§ Ablation Study of Divide-and-Conquer Contrastive Learning To demonstrate the effectiveness of the divide-and-conquer contrastive learning method, we conduct two ablation experiments. The first one is pre-training without contrastive learning, and the second one replaces the divide-and-conquer contrastive learning with MoCo. Their performance on the cell annotation task is shown in Table <ref> (CellLM_w/o CL and CellLM_MoCo). Compared with the method that does not utilize contrastive learning (CellLM_w/o CL), CellLM demonstrates a substantial absolute improvement in macro F_1-score, reaching 5.4% and 4.6%. Furthermore, when compared to the method employing MoCo (CellLM_MoCo), CellLM achieves an absolute improvement of 4.8% and 1.1% in macro F_1-score, respectively. The results indicate that, firstly, the addition of contrastive learning can indeed improve cell representation learning using only a BERT-based structure. Secondly, the lossless end-to-end contrastive learning method, divide-and-conquer contrastive learning, can not only achieve a large training batch size but also solve the performance degradation caused by the asynchronous update of positive and negative sample encoders. Ablation experiments on other self-supervised tasks in pre-training are presented in Appendix <ref>. In addition, we also explore the interpretability of CellLM, which can be found in Appendix <ref>. 
§.§ Analysis: The Limit of Computing under Fixed GPU Memory Due to the utilization of Performer-based cell encoders, the GPU memory requirements during training are roughly proportional to the length of the scRNA-seq data and the mini-batch size. Therefore, with the fact of a fixed GPU memory limitation, the max scRNA-seq length and mini-batch size that can be employed are approximately inversely proportional. In our work, we use NVIDIA Tesla A100 40G and set the batch size for contrastive learning to 256 to train the model. If we use the end-to-end contrastive learning method, the max scRNA-seq length would only be 50, which is clearly insufficient since the number of expressed genes in most single-cell data ranges from 300 to 5000. In contrast, with the divide-and-conquer contrastive learning approach, we can split any batch size into mini-batches with a minimum length of 1. Under the aforementioned experiment settings, the max gene sequence length can be set to approximately 13,000, which effectively covers the majority of single-cell and cell line data, enabling the model to learn cell representations more effectively. Fig.<ref> provides a schematic illustration of this analysis. § CONCLUSION In this paper, we propose the divide-and-conquer contrastive learning method to decouple the batch size from the GPU memory size so as to solve the problem that the batch size is limited by GPU memory size when using contrastive learning to optimize cell representation. This approach allows training with arbitrarily large batch sizes on limited memory, which provides more flexible and extensive choices for modeling scRNA-seq data. In addition, we design self-supervised tasks to help the model better learn to understand features between cancer and normal cell. The SOTA results of CellLM in a range of downstream tasks confirm its effectiveness on cell representation. Furthermore, we firmly believe that the utility of divide-and-conquer contrastive learning extends far beyond its application in the cell representation scenario. Its potential can be harnessed to tackle similar challenges encountered in various other domains and contexts. § LIMITATIONS We focus solely on utilizing scRNA-seq data, yet there exist additional omics data such as scATAC-seq that can provide valuable insights. Furthermore, the current experiment exclusively focuses on normal and cancer cells, overlooking other cell types and diseases. However, we firmly believe that broadening the range of data types will significantly enhance the model's applicability across various downstream tasks. With this vision in mind, our future research endeavors will center around investigating the representation of multi-omics data and incorporating diverse cell types, enabling us to unlock new insights and possibilities. § ACKNOWLEDGEMENTS This work is jointly supported by the National Key R&D Program of China (No. 2022YFF1203002). unsrt figuresubsection tablesubsection § APPENDIX §.§ Proof of gradient equivalence Here we prove mathematically that the model parameter gradients obtained by this computation method are exactly the same as those obtained by direct end-to-end contrastive learning. Let the input vectors and their augmented version be denoted as x and x^+ respectively, where x,x^+∈ℝ^T× d_in, T is the batch size, and d_in is the input dimension. The model is denoted as f, with its parameter set represented by Ω. 
Thus, we have: h = f(x) h^+ = f(x^+) Next, we use the model outputs h,h^+∈ℝ^T× d_out to calculate the similarity matrix M={m_ij}, and compute the cross entropy loss ℒ by comparing it with the identity matrix I. The specifics are as follows: ∀ m_ij∈M, m_ij = h_i·h_j^+/‖h_i ‖_2 ·‖h_j^+ ‖_2 ℒ = ℒ(M, I) In end-to-end contrastive learning, the gradient computation process for each model parameter during backpropagation is as follows: ∀ω∈Ω,ω.grad = ∂ℒ/∂ω = ∂ℒ/∂h·∂h/∂ω + ∂ℒ/∂h^+·∂h^+/∂ω In our devide-and-conquer contrastive learning, we divide the large batch into S small batches with a length of t each, and the gradient computation is performed in S steps and accumulated. In the k-th step, only a small part of h, h^+ require gradients, denoted as h^(k), h^+(k)∈ℝ^t× d_out. That is, h, h^+ are divided into: h = [ h^(0); h^(1); ⋮; h^(S-1) ], h^+ = [ h^+(0); h^+(1); ⋮; h^+(S-1) ] The resulting gradient for ω in the k-th step is denoted as ω.grad^(k). Then, the gradient obtained in the k-th step is given by: ω.grad^(k) = (∂ℒ/∂ω)^(k) = ∂ℒ/∂h^(k)·∂h^(k)/∂ω + ∂ℒ/∂h^+(k)·∂h^+(k)/∂ω Using gradient accumulation, the total gradient obtained is given by ∑_k=1^Sω.grad^(k). Below, we will prove that it is the same as the gradient obtained by end-to-end training: ∵ h = [ h^(0); h^(1); ⋮; h^(S-1) ] ∴ ∂ℒ/∂h = [ ∂ℒ / ∂h^(0); ∂ℒ / ∂h^(1); ⋮; ∂ℒ / ∂h^(S-1) ] , ∂h/∂ω = [ ∂h^(0) / ∂ω; ∂h^(1) / ∂ω; ⋮; ∂h^(S-1) / ∂ω ] ∴ ∂ℒ/∂h·∂h/∂ω = ∑_k=1^S(∂ℒ/∂h^(k)·∂h^(k)/∂ω) Similarly, ∂ℒ/∂h^+·∂h^+/∂ω = ∑_k=1^S(∂ℒ/∂h^+(k)·∂h^+(k)/∂ω) ∴ ω.grad = ∑_k=1^S(∂ℒ/∂h^(k)·∂h^(k)/∂ω + ∂ℒ/∂h^+(k)·∂h^+(k)/∂ω) = ∑_k=1^Sω.grad^(k) Q.E.D. §.§ Additional Ablation Study In the main paper, we perform ablation experiments related to contrastive learning. Here we would like to perform two additional ablation experiments, one to illustrate the benefit of introducing cancer cells in our pre-training and adding the pre-training task of distinguishing normal cells from cancer cells; the other to demonstrate the boost from larger scale models and to illustrate the scalability of our model. * To demonstrate the effectiveness of the task of distinguishing between cancer and normal cells, we perform pre-training with this task removed. We test its performance on the cell line drug sensitivity prediction task, as shown in Table <ref> (CellLM_w/o CLS). Compared to the method without the cell classification task, CellLM demonstrates an absolute improvement of 1.4% and 1.5% in Pearson's correlation coefficients for cell line warm-start and cold-start, respectively, reflecting the improvement brought by our inclusion of the diseased cell pre-training task. * To test the performance improvement brought by the larger-scale model, we train two simplified versions of CellLM with reduced parameters (CellLM_25M and CellLM_8M) and evaluate their performance on the cell type annotation task. The results in Table <ref> show that the performance of CellLM improved with the increase in model size, which reflects the scalability of the model and provides the possibility of using more data to train industrial-level large models in the future. §.§ Interpretability Study Interpretability study is an important issue in deep learning. CellLM encodes each gene separately in scRNA-seq data, preserving a high level of interpretability at the gene level. Additionally, since CellLM uses the generalized attention mechanism from Performer during its inference process, the degree of attention paid to each gene during inference is visible. 
In the pre-training task of distinguishing between cancer cells and normal cells, we use the token for cell classification. By observing the attention maps generated during model inference, we can see the degree of attention paid to each gene by the model during classification. Fig.<ref> shows the top 20 genes that receive the most attention. Genes that are highly attended to when differentiating between cancer and normal cells likely reflect differences in gene expression levels between these two cell types. To illustrate this point, we select 20 key genes from each of 5,000 normal cells and 5,000 cancer cells from PanglaoDB panglaodb and CancerSCEM cancerscem, respectively, and perform two sets of clustering. The first set simply uses the top 20 expressed genes across all 10,000 cells, while the second set uses the top 20 genes with the highest attention. As depicted in Fig.<ref>, when colored by cell type, the second clustering set exhibits better separation between the two cell types (ARI: 0.203 versus 0.065), demonstrating that the genes receiving the highest attention more accurately reflect the differences between normal and cancer cells. Moreover, the three genes with the highest degree of attention are found to be B2M (beta-2-microglobulin), TMSB4X (thymosin beta 4 X-linked), and ACTB (actin beta), which have been shown in several biological studies to be closely associated with cancer, as shown in Table <ref>. In addition, the gene B2M, which has the highest attention level, is a well-known tumor suppressor gene castro2019elevated that has been widely used for cancer detection in clinical settings medlineplus. This also demonstrates the potential of CellLM to discover marker genes for specific problems. §.§ Datasets §.§.§ Cell Type Annotation. The annotation of single-cell types based on RNA gene expression profiles is a crucial application of scRNA-seq data because researchers typically do not know the specific type of cell being sequenced during the sequencing stage. The automatic annotation of cell types based on deep learning approaches could bring significant benefits to researchers in terms of convenience and accuracy. Zheng68k zheng68k a highly relevant and challenging dataset, consisting of 68,450 human peripheral blood mononuclear cells (PBMCs) with 11 highly related cell types. Zheng68k provides high-quality cell type annotations, making it an ideal benchmark for evaluating annotation approaches. However, the dataset poses significant challenges due to the large number of cell categories and the uneven distribution of samples between types. Table <ref> illustrates the categories included in Zheng68K and information on the number and proportion of each type. The original Baron dataset baron comprises droplet-based single-cell RNA sequencing (scRNA-seq) data obtained from over 12,000 individual pancreatic cells. These cells are derived from four human donors and two strains of mice. In our experiment, we specifically focus on the cells relevant to humans. Due to the limited number of human T cells in the dataset (only seven), we exclude T cells from the cell-type annotation task. As a result, the final dataset used in our cell type annotation task consists of 8,562 cells categorized into 13 different cell types. Table <ref> illustrates the categories included in Baron and information on the number and proportion of each type. §.§.§ Drug Sensitivity Prediction. 
Drug sensitivity prediction is a critical task in drug virtual screening, which involves using computational methods to evaluate the effectiveness of drugs. In complex diseases such as cancer, abnormal cells are often hidden among normal cells, and the lack of single-cell analysis can lead to low efficiency and high recurrence rates in drug therapy. Improving the representation of cells can improve the accuracy of drug sensitivity prediction. Cell line data which consists of many cells is often closer to the real treatment scenario than single-cell data. Therefore, we conduct drug sensitivity prediction experiments on two data levels in vitro: single-cell and cell line. Single-cell drug sensitivity prediction. At this stage, there is only experimental data available for a single drug on single-cell drug sensitivity due to various experimental difficulties and other factors. We conduct full data experiments on two datasets, human lung cancer cells (GSE149383) and human oral cancer cells (GSE117872). In addition, we also simulate few-shot scenarios to test whether the learned information in CellLM could help improve predictions when cell drug response data are scarce. Table <ref> shows the information of data used in single-cell drug sensitivity prediction. Cell line drug sensitivity prediction. The Cancer Cell Line Encyclopedia (CCLE) ccle provides gene expression data of thousand of human cancer cell lines, and the Cancer Drug Sensitivity Genomics (GDSC) gdsc provides experiment results of different drugs on human cancer cell lines (the most commonly used label is half-inhibitory concentration IC50, reflecting the ability of the drug to induce apoptosis). We test the transfer representation ability of CellLM from single-cell to cell lines on 106,405 pairs of drug sensitivity data, which consist of 555 cell lines and 223 drugs by integrating cell line gene expression data from CCLE and GDSC drug sensitivity data. §.§ Evaluation To provide a comprehensive overview, we will now outline the evaluation metrics employed for each task. Cell type annotation. For the cell type annotation task, we evaluate CellLM's performance using two datasets, Zheng68K and Baron, consisting of 11 classification and 13 classification tasks, respectively. To estimate the effectiveness of CellLM for multi-classification tasks, we employ three evaluation metrics: accuracy, macro F_1-score, and weighted F_1-score. Accuracy measures the closeness of the prediction to the ground truth, while macro F_1-score comprehensively assesses classification results without considering the importance of different categories. We also use weighted F_1-score to measure classification performance while accounting for the importance of different categories. These metrics are calculated based on true positive (TP), true negative (TN), false positive (FP), and false negative (FN) rates. Accuracy = TP+TN/TP+TN+FP+FN To calculate both macro F_1-score and weighted F_1-score, we need to compute Precision and Recall. These two key metrics are calculated using the following formulas: Precision = TP/TP+FP, Recall = TP/TP+FN Thus, we can compute both macro F_1-score and weighted F_1-score using the following formulas, N denotes the total number of cell types and n_i denotes the number of samples in the i-th class: macro F_1 = 1/N∑_i=1^NF_1^(i) weighted F_1 = 1/N∑_i=1^Nn_i*F_1^(i) where F_1^(i)=2*Precision^(i)*Recall^(i)/Precision^(i)+Recall^(i) Drug sensitivity prediction. 
The single-cell drug sensitivity prediction task is a binary classification task that aims to predict whether a given cell is sensitive to a particular drug. For this task, we evaluate the effectiveness of the model using the F_1-score, which is calculated as follows: F_1=2*Precision*Recall/Precision+Recall where the precision and recall values for the single-cell drug sensitivity prediction task are calculated in the same way as outlined above. The cell line drug sensitivity prediction task involves predicting the IC50 value in drug sensitivity experiments and is thus a regression task. To evaluate the model’s effectiveness, we use several metrics: Pearson’s correlation coefficient (ρ_Y,Ŷ), R^2, root mean squared error (RMSE), and mean absolute error (MAE). Pearson's correlation coefficient reflects the correlation between the predicted and true values, with values ranging from -1 to 1. Values greater than 0 indicate a positive correlation, with values closer to 1 indicating higher correlation. R^2 evaluates the goodness of fit of the model, with values between 0 and 1; higher values indicate better fit. RMSE measures the deviation between predicted and true values and is sensitive to outliers in the data; taking the root of RMSE reduces its sensitivity to dimensionality. MAE evaluates the actual magnitude of prediction errors. Smaller values (closer to 0) are better for both RMSE and MAE. Assuming that the ground truth is Y={y_1, y_2, ⋯ ,y_n} and the model's prediction is Ŷ={ŷ_1, ŷ_2, ⋯ ,ŷ_n}, then they are calculated as follows: ρ_Y,Ŷ = cov(Y,Ŷ)/σ_Yσ_Ŷ = E[(Y-μ_Y)(Ŷ-μ_Ŷ)]/σ_Yσ_Ŷ, whereσ_Y=√(1/n-1∑_i_1^n(Y_i-Y̅)^2) R^2 = 1-∑_i(y_i-ŷ_i)^2/∑_i(y_i-y̅)^2, wherey̅=1/n∑_i=1^ny_i RMSE = √(1/n∑_i=1^n(y_i-ŷ_i))^2) MAE = 1/n∑_i=1^n|y_i-ŷ_i| §.§ Experiment Configurations for Pertaining and the Downstream Tasks The configurations of the model, pre-training, and downstream tasks are shown in Table <ref>. Below, we will explain some important details. Pooling When using the single-cell gene expression profiles as input to the model, we only select genes with non-zero expression levels. However, when the model is used for downstream tasks, the model output will be restored to the complete gene sequence. The positions of genes with non-zero expression levels will be set to the model output. In contrast, the positions of genes with zero expression levels will be set to zero. Subsequently, we can use a simple convolutional neural network and a linear neural network to map the single-cell embeddings to the desired downstream task inputs. Pre-training During the MLM task and the classification task of normal cells and cancer cells, we map the model outputs at corresponding positions(masked genes or ) to category dimensions using a linear classification head for classification. In the contrastive learning task, we pool the model's output through a convolutional layer and a linear layer to obtain a 512-dimensional representation. Downstream Tasks Due to the limitations of the datasets, we only perform single-cell drug sensitivity prediction tasks for a single drug, labeling cells as sensitive or resistant to the drug. Essentially, this task is equivalent to binary classification. For the single-cell drug sensitivity prediction and cell type annotation tasks, we pool the model's output and pass it through several feed-forward neural networks serving as classification heads. For cell line drug sensitivity prediction, multiple drugs and cell lines are used. 
§.§ Experiment Configurations for Pre-training and the Downstream Tasks The configurations of the model, pre-training, and downstream tasks are shown in Table <ref>. Below, we explain some important details. Pooling. When using the single-cell gene expression profiles as input to the model, we only select genes with non-zero expression levels. However, when the model is used for downstream tasks, the model output is restored to the complete gene sequence: the positions of genes with non-zero expression levels are set to the model output, while the positions of genes with zero expression levels are set to zero. Subsequently, we use a simple convolutional neural network and a linear layer to map the single-cell embeddings to the desired downstream task inputs. Pre-training. During the MLM task and the classification task of normal cells versus cancer cells, we map the model outputs at the corresponding positions (masked genes or ) to the category dimension using a linear classification head. In the contrastive learning task, we pool the model's output through a convolutional layer and a linear layer to obtain a 512-dimensional representation. Downstream tasks. Due to the limitations of the datasets, we only perform single-cell drug sensitivity prediction for a single drug, labeling cells as sensitive or resistant to it; essentially, this task is equivalent to binary classification. For the single-cell drug sensitivity prediction and cell type annotation tasks, we pool the model's output and pass it through several feed-forward layers serving as classification heads. For cell line drug sensitivity prediction, multiple drugs and cell lines are used. We replicated the drug encoder of TGDRP TGSAPA, which encodes drugs in SMILES format into 256-dimensional embeddings. The drug embeddings, along with the 256-dimensional cell embeddings obtained by pooling the model's output, are concatenated and passed through several linear layers to obtain the predicted IC50 values for regression.
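A minimal PyTorch sketch of this final fusion-and-regression step is given below, assuming the 256-dimensional drug and cell embeddings have already been produced upstream; the hidden sizes and the module name are illustrative assumptions, and the TGDRP drug encoder itself is not reproduced.

```python
import torch
import torch.nn as nn

class DrugCellRegressor(nn.Module):
    """Hypothetical regression head: concat(drug_emb, cell_emb) -> predicted IC50."""
    def __init__(self, drug_dim=256, cell_dim=256, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(drug_dim + cell_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 1),               # scalar IC50 prediction
        )

    def forward(self, drug_emb, cell_emb):
        x = torch.cat([drug_emb, cell_emb], dim=-1)  # concatenate along the feature axis
        return self.mlp(x).squeeze(-1)

# usage with random stand-in embeddings
drug = torch.randn(8, 256)   # e.g. output of the SMILES-based drug encoder
cell = torch.randn(8, 256)   # e.g. pooled CellLM output for a cell line
ic50_pred = DrugCellRegressor()(drug, cell)
```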
http://arxiv.org/abs/2306.02578v1
20230605040959
Ashkin-Teller phase transition and multicritical behavior in a classical monomer-dimer model
[ "Satoshi Morita", "Hyun-Yong Lee", "Kedar Damle", "Naoki Kawashima" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Current address: Faculty of Science and Technology, Keio University [][email protected] Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan Department of Applied Physics, Graduate School, Korea University, Sejong 30019, Korea Division of Display and Semiconductor Physics, Korea University, Sejong 30019, Korea Interdisciplinary Program in E·ICT-Culture-Sports Convergence, Korea University, Sejong 30019, Korea Department of Theoretical Physics, Tata Institute of Fundamental Research, Mumbai 400 005, India Institute for Solid State Physics, The University of Tokyo, Kashiwa, Chiba 277-8581, Japan Trans-scale Quantum Science Institute, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan We use Monte Carlo simulations and tensor network methods to study a classical monomer-dimer model on the square lattice with a hole (monomer) fugacity z, an aligning dimer-dimer interaction u that favors columnar order, and an attractive dimer-dimer interaction v between two adjacent dimers that lie on the same principal axis of the lattice. The Monte Carlo simulations of finite size systems rely on our grand-canonical generalization of the dimer worm algorithm, while the tensor network computations are based on a uniform matrix product ansatz for the eigenvector of the row-to-row transfer matrix, which work directly in the thermodynamic limit. The phase diagram has nematic, columnar order and fluid phases, and a nonzero temperature multicritical point at which all three meet. For any fixed v/u < ∞, we argue that this multicritical point continues to be located at a nonzero hole fugacity z_ mc(v/u) > 0; our numerical results confirm this theoretical expectation, but find that z_ mc(v/u) → 0 very rapidly as v/u →∞. Our numerical results also confirm the theoretical expectation that the corresponding multicritical behavior is in the universality class of the four-state Potts multicritical point on critical line of the two dimensional Ashkin-Teller model. Ashkin-Teller phase transition and multicritical behavior in a classical monomer-dimer model Naoki Kawashima July 31, 2023 ============================================================================================ § INTRODUCTION Dimer models provide interesting examples of entropy-dominated physics <cit.>. On planar graphs in the fully-packed limit (i.e. with hole fugacity set to zero), they are exactly solvable by Pfaffian methods<cit.>. These methods allow for a detailed characterization of the critical power law correlations of such fully-packed dimer models on the square and honeycomb lattices <cit.>. This critical behavior also admits an interesting description in terms of a coarse-grained action for a fluctuating height field <cit.>. The analogous fully-packed dimer model in three dimensions as well as two-dimensional bilayer models are not exactly solvable even at full packing, nor are related models with an admixture of hard squares. Several such models have been studied using numerical simulations and coarse-grained effective field theory ideas <cit.>. Connections to the physics of quantum dimer models <cit.> and resonating valence bond wavefunctions <cit.> have also been explored <cit.>. Interactions and nonzero hole fugacity also preclude the possibility of an exact solution even in the simple square lattice case. 
Nevertheless, the phase diagram on the square lattice in the presence of nonzero hole fugacity z and an aligning interaction u that favors two parallel dimers on a square plaquette has been studied in detail using numerical simulations <cit.>. These studies reveal that the aligning interactions drive a transition to columnar order at low temperature T and fugacity z. In this system, the transition from this columnar ordered state to the dilute high temperature dimer fluid has a continuously varying correlation length exponent ν, although the anomalous exponent associated with the columnar order parameter remains fixed at η =1/4 as long as the transition remains second-order in nature <cit.>. For temperatures below a tricritical value, the transition turns first order <cit.>. In the regime with a continuously varying correlation length exponent ν, long distance properties are controlled by the physics of the Ashkin-Teller fixed line <cit.>, for which the value of ν serves as a convenient universal coordinate <cit.>. Indeed, this particular microscopic realization of Ashkin-Teller criticality is described by the portion of the Ashkin-Teller fixed line that starts at its Kosterlitz-Thouless endpoint (corresponding to the transition in the fully-packed dimer model, with ν formally equal to infinity) and continues on to the four-state Potts point (corresponding to the tricritical transition of this dimer model, with ν = 2/3) on this fixed line <cit.>. In related work <cit.>, Papanikolaou et al also studied the effect of an additional dimer interaction v that competes with the aligning interaction u and hole fugacity z on the square lattice. The additional interaction represents an attraction between two adjacent dimers on the same principal axis of the square lattice and favors nematic order. In such a nematic state, lattice translation symmetry is preserved, but the symmetry of rotations by π/2 is spontaneously broken. As a result, ⟨ (N_h -N_v)^2 ⟩∼ L^4 in the thermodynamic limit, where N_h is the number of horizontal dimers, N_v is the number of vertical dimers, and the angular brackets denote the equilibrium average. The presence of such a state at low enough temperature and small enough hole fugacity z was also rigorously established in more recent work <cit.>. Consequently, the phase diagram in the presence of both interactions u and v is rich, and supports three different phases at nonzero hole density: a dilute fluid phase, a nematic phase, and a columnar ordered phase. Previous work <cit.> characterized the phase diagram in the T-z plane in some detail. This analysis led to the following conclusion <cit.>: When both interactions are nonzero and compete with each other, the transition from the low z low T columnar solid to the fluid proceeds in two steps, an Ising transition from columnar order to nematic order, and a second Ising transition from nematic order to fluid. In this scenario <cit.> for the phase diagram in the z-T plane, there are thus two Ising lines emerging from the z=0 Kosterlitz-Thouless transition point when v and u compete with each other. In the v/u →∞ limit, the first of these pivots to coincide with the z=0 temperature axis of the z-T phase diagram, rendering the low temperature columnar order unstable to infinitesimal z. 
Having two Ising transition lines emanate from the Kosterlitz-Thouless transition point on the z=0 temperature axis throws up an interesting puzzle when considered from the point of view the coarse-grained field theory ideas used earlier in closely related contexts <cit.>. At issue is the fact that the Kosterlitz-Thouless point on the z=0 axis at T=T_ KT is expected to continue on to an Ashkin-Teller line T_ AT(z) as one turns on a small z and lowers the temperature slightly. This fits in with the fact that Kosterlitz-Thouless criticality is known to emerge as the limiting behavior at one end of the Ashkin-Teller line when ν→∞ as this end point is approached. Additionally, it is also well-known that if a line of Ashkin-Teller transitions bifurcates into two Ising lines at a multicritical point, this multicritical point is expected to have four-state Potts symmetry. As a result, one expects that the correlation length exponent tends to ν=2/3 as this point is approached along the Ashkin-Teller line <cit.>. Having two Ising transition lines emanate from the Kosterlitz-Thouless transition point on the z=0 temperature axis would violate both these expectations. Resolving this puzzle is our principal motivation for revisiting this phase diagram with a pair of complementary techniques, namely, tensor network (TN) computations and large-scale Monte Carlo (MC) simulations using our grand-canonical generalization of the dimer worm algorithm <cit.>. The tensor network computations use a matrix product operator representation of the row-to-row transfer matrix to obtain a variational uniform matrix product (uMPS) approximation to its top eigenvector (with largest eigenvalue) <cit.>. This computational method allow efficient scans of large swathes of the phase diagram as well as direct determination of the central charge and scaling dimensions at critical points. Since it work directly in the thermodynamic limit, the accuracy is only limited the systematic error associated with the finite internal bond dimension for the tensors used in the matrix product representation; this error can be rendered negligible by choosing a large enough bond dimension. In contrast, the Monte Carlo results only have statistical sampling errors, but include finite-size effects. In the next section, we introduce the definition of the monomer-dimer model. In Sec. <ref>, we explain the numerical methods used in this paper. Our numerical results are shown in Sec. <ref>. The last section is devoted to discussion and conclusions. § MONOMER-DIMER MODEL We consider a classical hard-core monomer-dimer model with two kinds of attractive dimer-dimer interactions on the square lattice. The classical Hamiltonian in the grand canonical ensemble is given as H = - ∑_r∑_α=x, y{ u n_α(r) n_α(r+e_β≠α) + v n_α(r) n_α(r+2 e_α)} - μ∑_r n_m(r). Here, n_α(r) denotes the dimer occupation number of a link between neighboring sites, r and r+e_α, and n_m(r) is the monomer occupation number at a site r. These occupation numbers take 0 or 1, and should satisfy n_x(r) + n_y(r) + n_x(r-e_x) + n_y(r-e_y) + n_m(r) = 1 because of the hard-core constraint. The u-term in the first term of Eq. (<ref>) is the interaction between two dimers on a plaquette. On the other hand, the v-term acts on two adjacent dimers aligning in the dimer direction (Fig. <ref>). We call the former the plaquette interaction and the latter the dimer-aligning interaction as well as Ref. <cit.>. We assume that u and v are non-negative, that is, both the interactions are attractive. 
The last term of Eq. (<ref>) is the chemical potential of monomers. The fugacity of a monomer at the temperature T is defined as z≡ e^βμ, where β=1/T is the inverse temperature. In this paper, we set u+v=1 as a unit of energy so that the perfectly columnar ordered state at full packing has a constant energy per site, e=-(u+v)/2. When the plaquette interaction u is sufficiently large, the columnar ordered phase appears. This phase spontaneously breaks the symmetry of lattice rotations about a site, as well as lattice translational symmetry along one principal axis. In contrast, in the nematic phase which is favored by the v-term, the system spontaneously choses to have macroscopically more dimers of one orientation over the other. Translational symmetries along both principal axes are preserved, but the system spontaneously breaks the symmetry of lattice rotations about a site. Typical configurations of three phases are shown in Fig. <ref>. These are generated by the MC simulations at v=0.9 and z=0.2, using the method described in the next section for three temperatures, T=0.65, 0.75, and 0.85, which correspond respectively to the columnar ordered, nematic and disordered fluid phases. From this depiction, it is clear that the nematic phase breaks the lattice rotational symmetry but does not break translational symmetry, while the columnar state breaks both lattice translation symmetry and rotational symmetry. § METHODS As mentioned in Introduction, our computational study of this monomer-dimer model uses complementary methods. One employs our grand-canonical generalization of the dimer worm algorithm to perform Monte Carlo simulations, while the other uses tensor network methods. Below, we summarize each in turn. §.§ Monte Carlo method The usual dimer worm algorithm <cit.> provides a rejection-free nonlocal update scheme for interacting dimer models at full-packing. Here, we build on ideas developed in Ref. Rakala_Damle to generalize this dimer worm algorithm and obtain an efficient grand-canonical algorithm for the monomer-dimer model at nonzero monomer fugacity. In the first step of our grand-canonical scheme, one chooses at random a site j_ init of the lattice. There are two possibilities at this first step: either the initially chosen site j_ init has a monomer on it, or it is covered by a dimer. Let us consider each in turn. If j_ init has a monomer on it, we have five options at our disposal: The first four options consist of placing a dimer connecting the initial site to one of its four neighbors. The fifth option is to exit without doing anything. Each of these possibilities is assigned a probability from a probability table. We will discuss the construction of this probability table in some detail below. For now we simply introduce some language that will subsequently be useful in describing the construction of this probability table: The initial site is our first “pivot” site π_0, which we have “entered” from the “entry” σ_0 = 0, i.e. from “outside the lattice” (Fig. <ref>). Aborting our attempted worm move at this step itself without doing anything corresponds to “exiting the pivot” π_0 via “exit” σ'_0 = 0. On the other hand, if we opt to place a dimer connecting the pivot π_0 to its k-th neighbor, this option corresponds to exiting the pivot via exit σ'_0 = k (so k can take on values from 1 to 4). If the chosen exit is σ'_0 ≠ 0 , we now move to the site corresponding to the chosen exit and continue the construction. 
Before we describe what is done next, we need to specify the procedure to be used if the initially chosen site j_ init has a dimer covering it. In this case, one walks to the other end of this dimer; the site covered by this other end becomes our first pivot π_0, which we have “entered” from the entry σ_0 corresponding to j_ init. Now, the choices available are again five in number: One can delete this dimer that connects the pivot site π_0 to the entrance site j_0. This introduces two monomers in the system and concludes the worm move. As before, this corresponds to “exiting” the first pivot π_0 via exit σ'_0 = 0. Or, one can pivot the dimer covering π_0 so that it now connects π_0 to its k-th neighbor. If one of these latter four options is chosen, we say the pivot π_0 is exited via exit number σ'_0 = k, and we move to the site corresponding to the chosen exit to proceed further as described below. At this stage of our worm construction, we are at the site corresponding to exit σ'_0 of the previous pivot, having arrived there because we chose to place a dimer connecting the previous pivot point π_0 to this exit site. If this site does not already have another dimer covering it, we have reached an allowed configuration and the worm move ends. On the other hand, if this exit site does have another dimer already covering it, this site becomes the current “overlap site” o_0. We now walk along this pre-existing dimer from o_0 to its other end. The site at this other end becomes our next pivot site π_1, which has been “entered” via entry number σ_1 that corresponds to the overlap site o_0. At this step, there are again five choices for σ'_1, the exit to be used to exit the current pivot site π_1. As before, exit σ'_1 =0 corresponds to deleting the dimer covering the current pivot site π_1. If this is chosen, the worm move ends. On the other hand, exits numbered σ'_1=1 through σ'_1=4 correspond to pivoting the dimer covering π_1 so that it now connects π_1 to the site corresponding to σ'_1. If this exit site does not already have a dimer covering it, we have reached an allowed configuration and the move ends. Otherwise, this exit site becomes the next overlap site o_1, and the procedure is repeated. It is easy to see that this worm construction yields a valid rejection-free algorithm if we choose the probability table P_σ→σ' for transition probabilities in a way that it satisfies local detailed balance at each step. This amounts to requiring that the probability obeys the constraint equations: ω_σ P_σ→σ' = ω_σ' P_σ' →σ. Here, P_σ→σ' is the conditional probability for exiting a pivot via exit σ' given that we have entered it via entrance σ, P_σ' →σ is the conditional probability for the reverse process, and the weights ω_σ and ω_σ' represent the Boltzmann weights of the configurations corresponding to the choices σ and σ' respectively. These Boltzmann weights are to be calculated ignoring the violation of the hard-core constraint on dimers in the configurations that arise during the worm construction. The simplest choice of solution is the heat-bath solution (sometimes called the Gibbs sampler) given as P_σ→σ' = ω_σ' / ∑_σ'ω_σ'. In practice, we use the iterative Metropolized Gibbs sampler to reduce the bounce process <cit.>, i.e. reduce the magnitude of the diagonal elements of the probability table. Note also that the computation of the weights ω_σ and ω_σ' is simplified by the fact that they only differ due to factors arising from the contribution of the immediate neighborhood of the pivot. 
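As an illustration of this construction, the following Python sketch builds the heat-bath table for the five exit options of a pivot from their (reduced) Boltzmann weights and samples one exit. The weight values in the usage line are placeholders; the Metropolized Gibbs variant used in practice only changes how the table is filled, not how it is used.

```python
import numpy as np

rng = np.random.default_rng()

def heat_bath_table(weights):
    """P(sigma -> sigma') = w_{sigma'} / sum_{sigma''} w_{sigma''}.
    Because the table does not depend on the entrance sigma, it obeys
    w_sigma P(sigma -> sigma') = w_{sigma'} P(sigma' -> sigma) automatically.
    `weights` holds the reduced weights of the 5 options:
    index 0 = exit/delete the dimer, 1..4 = point the dimer towards neighbor k."""
    w = np.asarray(weights, dtype=float)
    return w / w.sum()

def choose_exit(weights):
    """Sample the exit sigma' for the current pivot."""
    p = heat_bath_table(weights)
    return rng.choice(len(p), p=p)

exit_option = choose_exit([0.04, 1.0, 1.0, 2.3, 1.0])   # placeholder local weights
```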
Since the equation set is homogenous, we can cancel all common factors to define reduced weights that only depend on the local environment of the pivot, and use these in Eq. (<ref>). These reduced weights can be written as ω_σ = z^2δ_σ, 0 + e^β(un+vm)(1-δ_σ, 0) where n (m) denotes the number of nearest neighbor dimers parallel in the transverse (longitudinal) direction to the dimer that covers the pivot when the configuration corresponds to entrances/exits σ≠ 0. These numbers n, m ∈{0, 1, 2} can be calculated by checking the direction of dimers on 8 sites around r_h, that is, r_h±e_x, r_h±e_y, r_h± 2e_x, r_h± 2e_y. The number of valid configurations in these 8 sites is 65089, much smaller than 5^8=390625. The factor of z^2 in the first term of Eq. (<ref>) reflects the fact that configuration σ = 0 has one fewer dimer, i.e. two additional holes (monomers) in comparison with the configurations with σ≠ 0. Finally, we note for completeness that this worm construction and its detailed balance property generalizes straightforwardly to lattices with arbitrary coordination number. The “out-of-plane” entrance/exit in the general case is numbered 0, and the other entrances/exits are numbered from 1 to n_c, where n_c is the coordination number of the pivot site in question (n_c can be different for different sites, and there is thus no restriction of regularity for this algorithm to remain valid) Using this algorithm, we perform the MC simulations of L× L systems with periodic boundary conditions for system size L up to L=512. The number of the worm updates n_w used per Monte Carlo step is chosen for each set of control parameters to be such that n_w ⟨ l_w ⟩ = L^2, ⟨ l_w ⟩ is the mean number of sites visited during the construction of a single worm. With this convention defining a MC step, we ensure that we obtain at least 2 × 10^6 MC configurations of the system from which we can calculate equilibrium properties. §.§ Tensor network method The tensor network representation of our model is based on the singular value decomposition of the local Boltzmann weight on a bond. The partition function is rewritten as the contraction of the tensor network, Z = tTr⊗_i A. The tensor A is located on sites of the square lattice and has four indices representing links to the nearest neighbor sites. An element of A has the form A_xyx'y' = ∑_s=0^4 (X_r)_sx (X_l)_sx' (X_t)_sy (X_b)_sy' Y_s, where s denotes the local configuration at a site as shown in Fig. <ref>. The matrices X's are determined by the singular value decomposition of the local Boltzmann weight on a bond. The Boltzmann weight on a horizontal bond is represented as a 5× 5 matrix, W_h = [ 1 1 1 0 1; 0 0 0 1 0; 1 1 e^β u/2 0 1; 1 e^β v 1 0 1; 1 1 1 0 e^β u/2; ], where the row (column) index of W_h corresponds to a state at r (r+e_x), respectively. Elements with a value of zero indicate a configuration prohibited by the hard-core constraint. The singular value decomposition, W_h = U_h S_h V_h^T, defines X_r≡ U_h S_h^1/2 and X_l≡ V_h S_h^1/2. We note that U_h and V_h can be chosen to be real orthogonal 5× 5 matrices since W_h is a real square matrix. Similarly we obtain the Boltzmann weight on a vertical bond between r (row) and r+e_y (column) as W_v = [ 1 1 1 1 0; 1 e^β u/2 1 1 0; 0 0 0 0 1; 1 1 1 e^β u/2 0; 1 1 e^β v 1 0; ] = U_v S_v V_v^T, and we define X_t ≡ U_v S_v^1/2 and X_b ≡ V_v S_v^1/2. The chemical potential of a monomer acts as the external field and gives the corresponding on-site factor as Y_s = z δ_s, 0 + (1-δ_s, 0). 
The first term has the factor of z in contrast to Eq. (<ref>). The former corresponds to the Boltzmann weight for valid configurations of our monomer-dimer model, while tha latter is for extended configurations containing one doubly occupied site. The row-to-row transfer matrix with infinite width is represented as a uniform matrix product operator with a local tensor A. Using the variational uniform matrix product state algorithm (VUMPS) <cit.>, we calculate the eigenvector corresponding to the largest eigenvalue of the transfer matrix. The uniform matrix product state (uMPS) obtained in this way approximates this eigenvector with accuracy that is controlled by the bond dimension χ of the uMPS. We increase χ up to 128 to ensure sufficient accuracy. In practice, we assume that the uMPS has a 2× 2 unit-cell structure <cit.> as is appropriate for a description of the columnar ordered state. After calculating the horizontal uMPS in this way, we also calculate the vertical uMPS, which approximates the corresponding eigenvector of the column-to-column transfer matrix. A good initial guess for the vertical uMPS can be given by the fixed point tensor of the horizontal uMPS <cit.>. We have confirmed that the results of the horizontal and vertical uMPS agree with each other to machine precision. § OBSERVABLES AND INTERPRETATION We now summarize the definitions and physical significance of the various observables of interest to us in this problem, and indicate how they may be accessed in either of the computational methods we use. §.§ The order parameters and Binder ratios We detect columnar order using a complex order parameter constructed from the following local order parameter field defined at each site r=(r_x, r_y) as Ψ_col(r) ≡ (-1)^r_x{ n_x(r) - n_x(r-e_x) } + i (-1)^r_y{ n_y(r) - n_y(r-e_y) }. The corresponding order parameter, m_col≡1/N∑_rΨ_col(r), takes ± 1 or ± i when the state has the complete columnar order. Nematic order, which breaks the symmetry of π/2 rotations, can be detected by comparing the number of horizontal and vertical dimers. With this motivation, we define the local nematic order parameter field as Ψ_nem(r) ≡ n_x(r) + n_x(r-e_x) - n_y(r) - n_y(r-e_y), The corresponding order parameter, m_nem≡1/N∑_rΨ_nem(r) = 2/N∑_r{ n_x(r) - n_y(r)} , takes on values ± 1 both in the nematic and columnar states. The corresponding Binder ratios <cit.> are defined in the usual way: U_col≡< |m_col|^4 >/< |m_col|^2 >^2, U_nem≡< m_nem^4 >/< m_nem^2 >^2. As is well-known, the Binder ratios converge in the ordered phase to 1 as the denominator and numerator take on the same limiting value in the thermodynamic limit. On the other hand, in a phase without symmetry breaking, the limiting value depends on the nature of fluctuations of the order parameter. U_col in the nematic and U_nem in the disordered fluid phase are both expected to converge to 3 in the thermodynamic limit because the fluctuations of the corresponding order parameters obey a one-dimensional Gaussian distribution in these regimes. On the other hand, U_col→ 2 in the disordered fluid phase because the fluctuations of m_col obey a two-dimensional Gaussian distribution. We note that the conventional and Ising definition of the Binder parameter for the frustrated Ising model discussed in Ref. <cit.> correspond U_col and U_nem, respectively. 
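For reference, a short NumPy sketch of how m_col, m_nem and the Binder ratios can be accumulated from Monte Carlo configurations; the storage convention for the dimer occupations (two L×L arrays with periodic boundaries) is ours.

```python
import numpy as np

def order_parameters(nx, ny):
    """m_col and m_nem for one configuration.
    nx[rx, ry] (ny[rx, ry]) is the occupation of the link from r to r+e_x (r+e_y)."""
    L = nx.shape[0]
    rx, ry = np.meshgrid(np.arange(L), np.arange(L), indexing='ij')
    psi_col = ((-1.0) ** rx * (nx - np.roll(nx, 1, axis=0))
               + 1j * (-1.0) ** ry * (ny - np.roll(ny, 1, axis=1)))
    psi_nem = nx + np.roll(nx, 1, axis=0) - ny - np.roll(ny, 1, axis=1)
    return psi_col.mean(), psi_nem.mean()

def binder_ratio(m_samples):
    """U = <|m|^4> / <|m|^2>^2 over a set of Monte Carlo measurements of m."""
    m2 = np.abs(np.asarray(m_samples)) ** 2
    return np.mean(m2 ** 2) / np.mean(m2) ** 2
```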
Since the Binder ratio is a dimensionless quantity in the sense of the renormalization group, curves that represent the dependence of a Binder ratio on a control parameter (z or T) for systems of different sizes L are all expected to cross at the critical value of the control parameter. This allow us to locate the phase transitions involving loss of order in a convenient way. §.§ Correlation length and entanglement entropy The correlation length is obtained as ξ = - 2/ln (λ_1 / λ_0) where λ_0 (λ_1) is the (second) largest eigenvalue of the transfer matrix defined by the uMPS. We note that the uMPS is always normalized such that λ_0=1, and the factor 2 comes from the unit-cell size of the uMPS. Since we calculate the uMPS in both the horizontal and vertical directions, two kinds of correlation length exist in a phase that breaks the symmetry of lattice rotations. The “transverse” correlation length ξ_⊥ is measured along the direction perpendicular to the dimers, while the “longitudinal” correlation length ξ_∥ is in the direction parallel to the dimers. In the disordered fluid phase, both correlation lengths are the same as expected. In the nematic phase, a level crossing of λ_1 occurs, and ξ_⊥ takes a smaller value than ξ_∥ below a certain temperature. In the columnar ordered phase, we always have ξ_⊥ < ξ_∥. In other words, the correlation length scale along the dimers become larger than in the perpendicular direction. Thus, the transverse correlation length seems to be suitable for studying the phase transition between the columnar ordered and disordered fluid phases. The entanglement entropy is defined as S_EE = - ∑_iσ_i^2 lnσ_i^2, where σ_i denotes the singular value of the core matrix in the mixed canonical form of the uMPS. Since we calculate the horizontal and vertical uMPS with a 2× 2 unit-cell structure, S_EE may depend on direction and position in the unit cell. In the disordered and nematic phases, we find that the all S_EE are equal to each other. On the other hand, in the columnar ordered phase, we find that S_EE takes on two values, depending on whether the core matrix of the horizontal uMPS is on a dimer or between dimers. The former always yields the larger S_EE. In our analysis below, we use the largest entanglement entropy thus obtained. §.§ Connection with coarse-grained Ashkin-Teller description It is instructive to think in terms of a coarse-grained version ψ of our local complex columnar order parameter field Ψ_col, and write ψ ∝ (τ_1 + τ_2) + i(τ_1 - τ_2) Clearly, the corresponding coarse-grained version ϕ of the local nematic order parameter field Ψ_nem satisfies ϕ ∝ Re ( ψ^2) ∝ τ_1 τ_2 The τ in the above are two coarse-grained Ising fields. In the columnar ordered phase, both τ_1 and τ_2 are ordered; this correctly accounts for the four-fold symmetry breaking in the columnar ordered state. Nematic order corresponds to the product τ_1 τ_2 being ordered, without any long range order in the individual τ. From the symmetries of the original problem, we see that interchanging the τ is a symmetry of the theory. Thus, the natural description is in terms of a symmetric Ashkin-Teller theory with two Ising fields τ_1 and τ_2. This connection to the physics of the Ashkin-Teller model <cit.> yields a wealth of information. For instance, along a line of continuous transitions from the columnar ordered state to the disordered fluid state, we expect the critical behavior to controlled by the critical properties of the corresponding fixed line in the Ashkin-Teller model. 
Along this line, both the Ising fields τ_1 and τ_2 have a fixed anomalous dimension of η =1/4 <cit.>. Since the columnar order parameter is linear in the Ising fields τ_1/2, we expect it to also scale with an anomalous exponent η =1/4 all along the line of continuous transitions between columnar ordered and disordered fluid phases. Along the fixed line of the Ashkin-Teller theory, τ_1 τ_2 scales with an anomalous dimension η_2 that varies continuously <cit.> and is related to the continuously varying correlation length exponent by the Ashkin-Teller relation η_2 = 1 - 1/2 ν. Since the nematic order parameter ϕ∼τ_1 τ_2, we expect it to have an anomalous dimension η_2 <cit.> given by this relation all along the line of continuous transitions between columnar ordered and disordered fluid phases. In the Ashkin-Teller model, the point at which Ashkin-Teller line splits into two lines of Ising transitions is known to have the symmetries for the four-state Potts model <cit.> ; in the phase between these two Ising transition lines, τ_1 τ_2 is ordered although τ_1 and τ_2 remain individually disordered . The enhanced Potts symmetry at this multicritical point implies that τ_1, τ_2, and τ_1 τ_2 all have the same anomalous exponent. Thus η_2 = η = 1/4 at this point, and the Ashkin-Teller relation implies ν = 2/3. Given the correspondence made above, this implies that the nematic order parameter is expected to have an anomalous exponent of 1/4 at the multicritical point at which the Ising phase boundaries of the nematic phase meet the line of continuous transitions between columnar ordered and disordered fluid phases. From this perspective, it is clear that the value of ν (or equivalently η_2) serves as a universal coordinate for the line of continuous transitions from the columnar ordered phase to the disordered fluid phase. At full-packing, i.e. z= 0, the system has a description in terms of a coarse-grained Gaussian height action for a scalar height h, and the transition from the power-law ordered high temperature state to the low temperature state is expected to be governed by a Kosterlitz-Thouless transition at which the leading cosine nonlinearity cos(8 π h) becomes relevant. As a result, one expects ν→∞ as z → 0 along the line of continuous transitions between the columnar ordered state and the disordered fluid state. From this it is clear that the multicritical point at which the two Ising lines meet cannot be at z=0, since this multicritical point corresponds to a value of ν = 2/3. This theoretical perspective and the resulting expectations informs much of the data analysis we present in the next section. § NUMERICAL RESULTS Before getting into the details, it is useful to provide a summary of our results for representative slices through the phase diagram, as these slices clarify the overall picture and help answer the question raised in Introduction. §.§ Overview To this end, we first consider a fixed z=0.2 slice and display the computed two dimensional phase diagram in the T–v plane (with u=1-v). This is shown in Fig. <ref>. For v≤ 0.7, there is a direct temperature driven phase transition between the columnar ordered and disordered fluid phases. As v is increased further, this phase boundary splits slightly below v=0.8 into two transition lines and the nematic phase appears as an intermediate phase beyond this multicritical point at which three transition lines meet. We have also checked that a corresponding slice at somewhat larger z reduces the temperature scale of the transitions. 
The transition temperatures shown in Fig. <ref> is determined from the peak position of the correlation length estimated by the TN method. The result of the MC method agrees with it within errors that are smaller than the symbol sizes used. Next we consider slices with fixed v=0.9 and v=0.8 (with u=1-v) and display the computed two dimensional phase diagram in the T–z plane. This is shown in Fig. <ref>. The transition points are determined in the same way as Fig. <ref>. Since the fully-packed z=0 system is particularly challenging for TN computations, we do not extend our study all the way to z=0. Nevertheless, we are able to go to low enough z to demarcate the essential features of the phase diagram. At v=0.8, the low temperature nematic phase is quite narrow and disappears below the multicritical value of z which is close to z=0.1. On the other hand, for v=0.9, the nematic phase is very broad and seems to exist even at very low values of z. However, our detailed computations reveal that there is no nematic state below a multicritical threshold value of z which is close to z=0.002 (Fig. <ref>). The actual monomer densities associated with these multicritical points are extremely small: For v=0.8, the multicritical monomer density is about δ=0.005. At v=0.9, the multicritical monomer density is a much lower value of 5× 10^-5. This conclusion is contrary to that of Ref. <cit.>. However, it is entirely consistent with our understanding of the phase boundaries based on the coarse-grained effective field theory. Indeed, as we have already reviewed earlier, the multicritical point is expected to have rather different universal behavior from the transition at full packing since the former is expected to correspond to a η_2 =1/4 and the latter corresponds to η_2 = 1 (where η_2 is the anomalous exponent associated with the nematic order parameter). As a result, for v/u < ∞, there is no consistent scenario in which the multicritical point coincides with the transition at full packing. The resolution is of course that the multicritical value of z approaches z=0 extremely rapidly as v/u is increased, but does not reach z=0 at any finite v/u. In the rest of this section, we display our results for a few representative values of v and z, and provide a detailed account of the analysis that leads to these phase diagrams and this overall conclusion. §.§ Detailed analysis We use both tensor network (TN) and Monte Carlo (MC) methods to obtain the nematic and columnar order parameters, since these two complementary methods provide a nontrivial consistency check on each other. Figure <ref> shows the temperature dependence of the order parameters at v=0.6 and v=0.9 with z=0.2. At v=0.6, both the order parameters take on non-zero values below T=0.7714, signaling the onset of columnar order. On the other hand, at v=0.9, we find that a nematic phase exists between T_1=0.722 and T_2=0.790 , as is clear from the fact that the columnar order parameter vanishes but the nematic order parameter takes on a non-zero value. The transition temperatures are estimated by identifying the crossing points of the Binder ratios, as shown in Figure <ref> for the Binder ratios calculated by the MC simulations at v=0.9 and z=0.2. These crossing points are found to be consistent with the transition temperature estimated by the TN method. 
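A crossing point of this kind can be extracted, for example, by locating the zero of the difference of the Binder curves for two system sizes; a minimal sketch (linear interpolation, no error analysis) is:

```python
import numpy as np

def binder_crossing(T, U_small, U_large):
    """Estimate the crossing temperature of two Binder-ratio curves U(T)
    measured for a smaller and a larger system size on a common grid T."""
    T = np.asarray(T, dtype=float)
    d = np.asarray(U_small, dtype=float) - np.asarray(U_large, dtype=float)
    sign_change = np.where(np.diff(np.sign(d)) != 0)[0]
    if len(sign_change) == 0:
        raise ValueError("no crossing in the given temperature window")
    i = sign_change[0]
    # linear interpolation of the zero of d(T) between T[i] and T[i+1]
    return T[i] - d[i] * (T[i + 1] - T[i]) / (d[i + 1] - d[i])
```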
We obtain the scaling dimensions of various quantities at these transitions by performing a finite-size scaling (FSS) analysis, using the following FSS form for the columnar and nematic order parameters, ⟨ |m_a|^2 ⟩∼ L^-2x_a f((T-T_c) L^1/ν), where x_a denotes the scaling dimension of the corresponding operator Ψ_a (a=col, nem); these scaling dimensions are related to the anomalous exponents introduced earlier via x_col = η/2, x_nem = η_2/2. Likewise, the scaling dimension of the energy operator is related to the correlation length exponent introduced earlier: x_t = 2- 1/ν. We use the kernel method <cit.> to infer the critical exponents and the critical temperature and to estimate their confidence intervals. At the best-fit values of the scaling dimensions and the transition point, all data collapse reasonably well onto a single curve, as shown in Fig. <ref>. The scaling dimensions are plotted in Fig. <ref>. Below the multicritical point, i.e. along the line of continuous phase transitions between the columnar ordered and disordered fluid states, the scaling dimensions x_nem and x_t are seen to change continuously with v, while x_col remains constant at a value consistent with the theoretical expectation of x_col = 1/8. We have checked that the corresponding exponents η_2 and ν satisfy the Ashkin-Teller relation η_2 = 1-1/2ν, i.e. x_t/4 = x_nem, within our errors. On the other hand, we have x_t/4 ≠ x_nem for v≥ 0.8. This discontinuous change in the critical index ratio strongly suggests a change from a single transition to two separate transitions, i.e., the existence of the multicritical point. The expected η=η_2=1/4 at the multicritical point is actually realized at about v=0.7. Parenthetically, we note that we find a decoupled Ising point at about v=0.3 in the z=0.2 plane, where we have x_nem=0.25 and x_t=1.0. The scaling dimension of the energy operator can also be estimated from the FSS analysis of the Binder ratio, as shown in Fig. <ref>. Although the non-monotonic behavior of the Binder ratio for columnar order and the relatively closely spaced successive transitions make this difficult, we obtain almost the same results for x_t from this analysis as we do using the earlier analysis in terms of the order parameters. We emphasize that the non-monotonicity has to do with the differing character of the columnar order parameter fluctuations in the nematic and the disordered fluid phases, and the related presence of a proximate multicritical point. Similar non-monotonicity has been noted earlier in the J_1-J_2 Ising model <cit.>. In contrast to other examples of such behavior in frustrated Ising models, which are associated with a proximate weakly first-order transition, we do not find evidence of any first-order transitions for the values of v studied here. Although it is difficult to completely exclude the possibility of a weakly first-order phase transition around the multicritical point in our simulations, the weight of evidence suggests that the presence of a sizeable v term replaces the first-order transition found in Ref. <cit.> by an intermediate nematic phase flanked by two second-order Ising phase boundaries.
This is consistent with the fact that generalized four-state clock models, which serve as a discrete hard-spin analog of the coarse-grained description in terms of order parameter fields ψ and ϕ, are known for some parameter values to have such an intermediate phase flanked by two Ising lines that meet at a multicritical point with enhanced four-state Potts symmetry at the end of a line of Ashkin-Teller transitions <cit.>. The multicritical point at v=v_c(z) is thus expected to be in the four-state Potts universality class. It is difficult to determine the accurate location of the multicritical point because of finite-size and finite bond-dimension effects. To make matters worse, one also expects logarithmic corrections at the four-state Potts point. Our data, however, support the S_4 symmetry at the multicritical point. The order parameters satisfy m_nem < m_col below v_c, while m_nem > m_col for v>v_c (Fig. <ref>). Thus, we expect that m_col≃ m_nem at v=v_c, which indicates the emergent S_4 symmetry. By definition, the scaling dimension also appears in the corresponding critical two-point correlation function as C_a(r)∝ r^-2x_a. We consider the correlation function along a lattice axis because the uMPS can easily calculate it. The correlation function between the local quantities is defined as C_a(r) ≡1/N∑_ρ< Ψ_a(ρ) Ψ_a^*(ρ + re_α) >. In our TN simulation, ρ runs over sites in the 2× 2 unit cell, which corresponds to a value of N=4. For the correlation between monomers, the truncated correlation function, C_mono(r) ≡1/N∑_ρ{< n_m(ρ) n_m(ρ + re_α) > - < n_m(ρ) > < n_m(ρ + re_α) > }, is expected to scale as r^-2x_t. As shown in Fig. <ref>, the scaling dimensions extracted by linear fits to the correlation functions agree with the results obtained by the FSS analysis. The central charge is another important universal property of a critical point. According to conformal field theory, the correlation length ξ and the entanglement entropy S_EE are related by the Calabrese-Cardy formula at criticality: S_EE = c/6lnξ + const., where c denotes the central charge. One of the advantages of the TN method is that these two quantities can be calculated naturally. Fig. <ref> clearly shows that the values obtained from the TN method are consistent with the Calabrese-Cardy formula (<ref>). Based on the theoretical framework outlined in the previous section, the continuous phase transition between the columnar ordered and disordered fluid phases is expected to have a central charge of c=1. On the other hand, the continuous Ising phase boundaries of the nematic phase are expected to have a central charge of c=1/2. From the results displayed in Fig. <ref>, we see that our results do indeed conform to both these expectations. Above the multicritical point (v> v_c), there are two phase transitions, the columnar-to-nematic and the nematic-to-disordered-fluid transitions. Although the FSS result for x_t deviates from the expected value (Fig. <ref>), we believe that this is due to corrections to scaling and effects from the other nearby critical point. The TN results at v=0.9 show that the eighth power of the order parameters and the inverse of the correlation length are linear in the temperature near criticality (Fig. <ref>). This strongly indicates that these critical exponents satisfy β=1/8 and ν=1, which is consistent with Ising universality. Finally, we comment on the approach of the multicritical point to z=0 as v/u is increased.
This happens very rapidly, and simultaneously, the phase boundary between the columnar and nematic phases, shown by orange squares in Fig. <ref>, approaches the vertical temperature axis as v/u increases. This is clear from monitoring the relative values of m_col and m_nem as in our earlier discussion about the symmetry of the multicritical point. Eventually, in the limit v/u→∞, the columnar phase vanishes and only the phase boundary between the nematic and disordered phases remains. § DISCUSSIONS AND SUMMARY Our conclusions and the theoretical framework within which they are situated has already been discussed at length. Here, we confine ourselves to highlighting one aspect of the phase diagram that appears to be worth further study. This has to do with the rapidity with which the multicritical point (at which the two Ising transitions meet the Ashkin Teller line) moves towards z=0 as we increase v/u. The proximity to the full-packing limit raises the possibility that aspects of this could be understood by expanding about the full-packing limit in some way. It would be interesting to explore this in future work. A related question has to do with the extent of the nematic phase itself in the T–z phase diagram at large v/u. As noted in Ref. <cit.>, one expects that the low temperature phase at full-packing will be columnar ordered for any finite value of v/u, no matter how large. However, the extent of this phase in z decreases very rapidly with increasing v/u, until, at asymptotically large values of v/u, the columnar state only exists at full packing. Again, it would be interesting if some small z expansion method could yield a more quantitative characterization of this phenomenon, which is very challenging to study by numerical methods. The work of SM, HYL, and NK was supported by JSPS KAKENHI Grants No. JP19H01809 and No. JP20K03780. KD was supported at the Tata Institute of Fundamental Research by DAE, India and in part by a J. C. Bose Fellowship (JCB/2020/000047) of SERB, DST India, and by the Infosys-Chandrasekharan Random Geometry Center (TIFR). The inception of this work was made possible by a Visiting Professor appointment (KD) and associated research support at the Institute of Solid State Physics (ISSP), University of Tokyo. The computational component of this work has been done using the facilities of the Supercomputer Center, ISSP.
http://arxiv.org/abs/2306.06621v1
20230611084531
Continuum-discretized coupled-channel calculations for $^{6}$Li fusion reactions with closed channels
[ "Wendi Chen", "D. Y. Pang", "Hairui Guo", "Ye Tao", "Weili Sun", "Yangjun Ying" ]
nucl-th
[ "nucl-th" ]
^1School of Physics, Beihang University, Beijing 100191, People’s Republic of China ^2 Institute of Applied Physics and Computational Mathematics, Beijing 100094, People’s Republic of China Fusion reactions induced by the weakly bound nucleus ^6Li with targets ^28Si, ^64Ni, ^144Sm and ^209Bi at energies around the Coulomb barrier are investigated within a three-body model where ^6Li is described with an α + d cluster model. The total fusion (TF) cross sections are calculated with the continuum-discretized coupled-channel (CDCC) method and the complete fusion (CF) cross sections are extracted through the sum-rule model. The calculations demonstrate that (i) for the TF cross section calculations, the continuum states up to 40 MeV are found to be necessary, which corresponds to the inclusion of closed channels for light and medium mass targets, such as ^28Si, ^59Co and ^144Sm, (ii) the converged CDCC results for TF cross section at energies above the Coulomb barrier are almost the same as single channel results in which the continuum coupling effect is neglected, and (iii) the continuum coupling strongly influences partial wave fusion cross sections and the closed channels play a significant role in the improvement of the description of the CF cross sections at energies below the Coulomb barrier for the ^6Li+^28Si, ^59Co and ^144Sm systems. Continuum-discretized coupled-channel calculations for ^6Li fusion reactions with closed channels Wendi Chen^1, D.Y. Pang^1, Hairui Guo^2, Ye Tao^2, Weili Sun^2 and Yangjun Ying^2 July 31, 2023 ================================================================================================= § INTRODUCTION Reactions induced by weakly bound projectiles have been extensively investigated in the last few decades<cit.>. In these reactions, breakup is a very important reaction channel and has a strong coupling effect on elastic scattering, inelastic scattering, transfer and fusion channels. In particular, special attention has been given to the continuum coupling effects on fusion reactions both theoretically and experimentally <cit.>. Among the stable weakly bound nuclei, ^6Li has always been of interest, as it has only one bound state with a low separation energy of 1.47 MeV for breaking up into α and d fragments. So far, many experiments for ^6Li-induced fusion reactions have been reported, with targets varying from light nuclei, such as ^27Al <cit.> and ^28Si <cit.>, to heavy nuclei, such as ^197Au <cit.> and ^209Bi <cit.>. But the experimental data are still far from being fully understood. Owing to the low binding energy of the projectile, there is a high probability that the projectile breaks into two or more fragments. In such cases, two kinds of fusion processes in the collisions of weakly bound nuclei have emerged from previous investigations, namely the complete fusion (CF) and the incomplete fusion (ICF). CF occurs when the whole weakly bound projectile is captured by the target. ICF occurs when some fragments of the projectile are captured and others escape. The sum of CF and ICF amounts to the total fusion (TF). It is a great challenge to develop a realistic theory to describe the fusion process of weakly bound nuclei. Up to now, various theoretical approaches have been presented, among which the continuum discretized coupled-channel (CDCC) method has been the most popular one in making realistic predictions for the fusion reactions induced by weakly bound nuclei <cit.>, as it can effectively take the continuum coupling effect into account. 
For ^6Li, many researchers <cit.> have adopted the CDCC method to calculate its TF cross sections. However, it is hard to evaluate the contributions from CF and ICF processes to TF cross sections for ^6Li as its breakup fragments are both charged and their masses are comparable. Recently, a direct approach based on the CDCC method to evaluate the CF and ICF cross sections for ^6,7Li fusion reactions was developed by Lubian et al. <cit.>, in which the probabilities of CF and ICF processes were obtained by integrating the scattering wave functions with different matrix elements of coupled-channel equations. They obtained a reasonable agreement between theory and experiment for ^6,7Li fusion with heavy nuclei. On the other hand, Lei and Moro <cit.> obtained the CF cross sections for ^6,7Li+^209Bi indirectly by subtracting the cross sections of elastic breakup, nonelastic breakup and inelastic scattering from the total reaction cross section. Their results were in satisfactory agreement with experimental CF data, too. In addition, a different approach was developed by Parkar et al. <cit.> in which the CF, ICF and TF cross sections are obtained by three CDCC calculations with different short-range imaginary potentials. Their calculated results for ^6,7Li+^198Pt and ^209Bi were also in reasonable agreement with experimental data. These studies are all about 6Li fusion with heavy targets. Fusion reactions of 6Li with light and medium mass nuclei are still rare. In many cases, the convergence of CDCC calculations is not easy to be achieved when they are applied in evaluating the fusion reaction cross sections. To overcome this problem, Diaz-Torres et al. <cit.> neglected the imaginary parts of off-diagonal couplings and only kept the imaginary parts in diagonal couplings in their study of ^6Li fusion reactions. Similarly, Lubian et al. <cit.> neglected the imaginary parts of matrix elements between bound and breakup channels. In these works, the maximum energy of the continuum states, ε_max, for ^6Li was set to be 6.0-8.0 MeV. It is far lower than the threshold energy of the continuum states, ε_th=E_cm-ε_b, where E_cm is the incident energy in the centre of mass system and ε_b is the separation energy of α and d in the ground state ^6Li. It is well known that closed channels, whose channel energies are negative, play an important role in the low-energy reaction induced by weakly bound nuclei. Yahiro et al <cit.> have discussed the coupling effect of closed channels on deuteron elastic scattering, which is visible for d+^58Ni reaction at low incident energy. Ogata and Yoshida <cit.> have reexamined the calculations for deuteron elastic breakup cross sections on ^12C and ^13Be at low incident energies. They pointed out that closed channels are required for CDCC calculations to obtain good agreement with the result of Faddeev-Alt-Grassberger-Sandhas theory, in which the three-body problem is exactly solved. Very recently, we <cit.> presented a CDCC analysis for ^6Li+^59Co reactions at energies around the Coulomb barrier and found that the ε_max for ^6Li should be at least 50.0 MeV to obtain converged elastic breakup reaction cross section. Therefore, it is worthy of reexamining the fusion cross section calculations by increasing ε_max to check whether the continuum coupling effect is completely taken into account. Furthermore, as a semi-classical approach, the sum-rule model <cit.> is adopted to distinguish the CF and ICF processes. 
According to this model, CF and ICF occur at lower and higher angular momenta respectively. It is of interest to study the continuum coupling effect on ^6Li complete fusion with this model, as it can give a clear and simple dependence of CF cross sections on angular momenta and incident energy. Meanwhile, it can be applied to the ^6Li fusion reactions with different targets, enabling us to investigate the influence of targets on CF. In the present work, we study the coupling effect of continuum states on TF and CF cross sections for ^6Li fusion with ^28Si, ^64Ni, ^144Sm and ^209Bi targets at energies around the Coulomb barrier. Particular attention is given to the reexamination of CDCC calculations for ^6Li total fusion cross sections, where the numerical convergence problem should be solved by increasing ε_max. With the converged calculated results, the CF cross sections will be extracted by the sum-rule model to study the dependencies of CF process on breakup channels and targets. The coupling effects of open and closed channels will also be discussed. The paper is organized as follows. The formalism is given in Sec. <ref>. Reexamination of ^6Li CDCC calculations for total fusion is presented in Sec. <ref>. Sec. <ref> discusses the coupling effect of continuum states on TF and CF cross sections in detail. Finally, the summary and conclusion are given in Sec. <ref>. § THEORETICAL MODEL §.§ α+d cluster model for ^6Li ^6Li is described in α+d cluster model. Its internal wave function is written as ψ =𝒜{φ( α) [ φ _s( d ) ⊗χ _l( r⃗) ] _I^M} , where 𝒜 means the antisymmetrization of the nucleons. φ (α) and φ _s (d) are the intrinsic wave functions of α and d clusters respectively. r⃗ is the relative coordinate between two clusters. χ _l represents the α-d relative motive with angular momentum l, which couples with the spin of deuteron cluster, s, to form the total spin I and its projection M. In the present work, a simple version of the orthogonality condition model <cit.> is used to calculate the ^6Li internal wave function ψ. The effects of the antisymmetrization of nucleons are taken into account approximately by employing an effective α-d potential, V_α-d, and excluding the deepest bound state as the forbidden state. Therefore, ψ is calculated by solving a Schrodinger equation with V_α-d <cit.>, which is l-dependent. V_α-d are parameterized by the Woods-Saxon form <cit.>, including the central and spin–orbit potentials. Its parameters for l=0, 1 and 2 are listed in Table <ref>. V_α-d can well reproduce the binding energy of 1.47 MeV (l=0, s=1 and I^π=1^+) as well as the 3^+, 2^+ and 1^+ resonance states in D-wave continuum. The calculated resonance energies and widths are shown in Table <ref>, compared with the experimental data <cit.>. V_α-d also describes the low-energy α-d scattering phase shifts well, shown in Fig <ref> as a function of the relative α-d energy in centre of mass system ε. The spacial wave functions of the d and α clusters are assumed to be (1s)^2 and (1s)^4 harmonic oscillator shell model wave functions with different oscillator constants β_d and β_α respectively (β =mω /ħ), and expressed as φ( d ) =N_dexp[ -β _d/2∑_i∈ d( r⃗_i-R⃗_d ) ^2] , φ( α) =N_αexp[ -β _α/2∑_i∈α( r⃗_i-R⃗_α) ^2] where β_d=0.390 fm^-2 and β_α=0.375 fm^-2. r⃗_⃗i⃗ is the coordinate of particle i in ^6Li relative to the centre of mass of ^6Li. R⃗_d and R⃗_α represent the centre of mass for the corresponding clusters. N_d and N_α are the corresponding normalized factors. 
The calculated value of root-mean-square matter radius and that of charge radius are both 2.54 fm, which agree with the experimental value 2.54±0.03 fm <cit.> (the measured values for these two radii are the same). The charge form factor of elastic electron scattering by ^6Li is also calculated, as shown in Fig. <ref>. Reasonable agreement is obtained with the experimental data <cit.>. §.§ The CDCC method and the sum-rule model Detailed CDCC formalism can be found in Refs. <cit.>. In this method, a finite number of discretized and square-integrable states are adopted to represent the continuum states of the projectile. Hence, the total wave function with total angular momentum J and parity π can be expressed as ^Jπ =∑_β _β ^Jπψ _β where β represents the reaction channel. ψ _β is the internal wave function of ^6Li in β channel. The wave functions for bound and discretized states are expressed on the same footing. _β ^Jπ is the relative motion wave function between the ^6Li in β channel and target. _β ^Jπ is determined by the coupled-channel equations [ T+E_β-U_ββ ^Jπ] _β ^Jπ=-∑_β ^ 'βU_ββ ^ ' ^Jπ _β ^ ' ^Jπ, where E_β is the channel energy. T denotes the kinetic energy of ^6Li-target relative motion. The coupling matrix element U_ββ ^ ' ^Jπ is calculated as U_ββ ^ ' ^Jπ=< ψ _β|U_α -T+U_d-T| ψ _β ^ '>, where U_α -T and U_d-T are the optical potentials for α and d with the target respectively. The imaginary parts of the optical potentials, W_α -T and W_d -T, are responsible for the absorption of ^6Li by the target. Therefore, the coupling matrix elements corresponding to the absorption is given by W_ββ ^ '^Jπ=< ψ _β|W_α -T+W_d-T| ψ _β ^ '>. The total fusion cross section is then obtained as σ _TF=∑_J=0^J_maxσ _J, where σ _J=K/E∑_πββ ^'< _β^Jπ|W_ββ ^'^Jπ| _β ^'^Jπ>. K is the projectile-target relative wave number in the incident channel. It should be emphasized that the wave functions of all channels, including open and closed channels, are required in the calculations for σ _J with Eq. (<ref>). This method is equivalent to the computing approach with S matrix <cit.>, in which only the S matrices of open channels are used. According to the sum-rule model, the complete fusion cross section can be extracted directly as σ _CF=∑_J=0^J_cσ _J, where J_c is the cut-off angular momentum (see Sec. <ref> for details). § CONVERGENCE OF TOTAL FUSION CROSS SECTION CALCULATIONS Optical potential with a short-range imaginary part can be applied to calculate the fusion cross section, which is equivalent to the use of an incoming boundary condition inside the Coulomb barrier <cit.>. In principle, the calculated fusion cross section should be independent of the imaginary part of the optical potential. In this work, the São Paulo potential version 2 (SPP2) <cit.> is adopted for the real parts of the nuclear interactions between the α and d clusters with the target. SPP2 potential is calculated with the double folding method, which requires the nucleon distributions of projectile and target. We adopt the theoretical nuclear distribution embedded in the code for the target, which is calculated by an axially-symmetric self-consistent Dirac-Hartree-Bogoliubov mean field approach <cit.>. The nucleon distributions for α and d clusters are calculated with the wave function in Eq. (<ref>). For the Coulomb potentials, the radius factor r_C for two clusters with the target are both set to be 1.5 fm. The short-range imaginary parts of the d-target and α-target optical potentials are parameterized in the Woods-Saxon form. 
Their radius factor r_W and diffuseness parameter a_W are set to be 0.8 fm and 0.1 fm respectively so that the imaginary parts are inside the Coulomb barrier completely. The depths of the two imaginary parts W_0 are set to be the same, varying from 20.0 MeV to 80.0 MeV to examine the convergence of total fusion cross section calculation. Following our previous study <cit.>, the internal Hamiltonian of ^6Li is diagonalized by the regularized Lagrange–Laguerre mesh method <cit.>. This way of discretizing the continuum states is called the pseudo-state method, which diagonalizes the Hamiltonian by square integrable basis functions and can generate the bound and pseudo-states together. The pseudo-states are used to represent the continuum states. The basis functions are defined as f_i( r ) =( -1 ) ^i/√(hx_i)L_N( r/h )/r-hx_ire^-r/2h,i=1,2,...,N, where L_N is the Laguerre polynomial of degree N. x_i corresponds to the zeros of L_N, that is L_N(x_i)=0, i=1,2,...,N. h is a scaling parameter, adopted to the typical size of the system. For more details one can see Refs. <cit.>. In this paper, the number of basis function N and the scaling parameter h are set to be 35 and 0.5 fm respectively. Many tests have been performed to ensure that the calculated fusion cross sections are insensitive to the parameters N and h. Fig. <ref> shows the energy spectrum of ^6Li bound and pseudo-states up to D-wave continuum and 40.0 MeV. We firstly perform calculations for ^6Li+^28Si, ^64Ni, ^144Sm and ^209Bi systems at incident energy in center of mass system E_cm=V_B, where V_B is the Coulomb barrier measured in the references. V_B=6.87, 12.41, 25.15 and 30.40 MeV for ^6Li+^28Si <cit.>, ^64Ni <cit.>, ^144Sm <cit.> and ^209Bi <cit.> systems respectively. l=0, 1 and 2 states are taken into CDCC calculations. The calculations are made with the maximum continuum energy ε_max from 0 to 40 MeV for all partial waves with the depth of the imaginary potential being 20, 50 and 80 MeV, respectively. The results are shown in Fig. 4. Form these results, we can get the following important information: * The maximum continuum energy set in the calculations ε_max should be large enough to ensure that the σ_TF do not depend on the choice of the imaginary potential depth. On the other hand, when ε _max is set large enough, the resulting σ_TF values do converge to a value which is independent of the depths of the imaginary potentials; * For ^6Li+^28Si, ^64Ni and ^144Sm systems, the inclusion of closed channels is necessary for the convergence of the total fusion cross sections. * For ^6Li+^209Bi system, the σ_TF calculated with ε_max=ε_th only differs from that calculated with ε_max=40.0 MeV by 0.3% when W_0=50 MeV. The coupling effect of closed channels is weak for ^6Li total fusion with such a heavy nucleus. Practically, ε_max=40 MeV seems to be sufficient for all the four reaction systems. In this case, the changes in the total fusion cross sections are less than 2% when the imaginary potential depth changes from 20 to 80 MeV. This ε_max is well above the threshold energy of the continuum state ε_th=E_c.m.-1.47 MeV and the closed channels are well included in calculations. For a further understanding of how the cut-off of continuum state energy and the imaginary part of optical potentials influence the fusion cross section calculations, we compare the radial relative wave function for ^6Li+^64Ni system at E_c.m.=V_B with different ε_max and W_0. 
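Before turning to the radial wave functions, note that the regularised Lagrange-Laguerre basis defined above is straightforward to construct numerically. The sketch below (with the N=35 and h=0.5 fm quoted in the text) evaluates f_i(r) from the zeros of L_N and checks that the functions are approximately normalised; it is an illustration only, not the code used for the calculations.

```python
import numpy as np
from scipy.special import roots_laguerre, eval_laguerre
from scipy.integrate import trapezoid

N, h = 35, 0.5                      # basis size and scaling parameter (fm), as quoted above
x, _ = roots_laguerre(N)            # zeros x_i of the Laguerre polynomial L_N

def f_basis(i, r):
    """Regularised Lagrange-Laguerre function f_i(r) of the equation above (i = 1..N)."""
    xi = x[i - 1]
    return ((-1) ** i / np.sqrt(h * xi)
            * eval_laguerre(N, r / h) / (r - h * xi)
            * r * np.exp(-r / (2.0 * h)))

# The functions peak near r = h*x_i and are approximately orthonormal;
# a brute-force check of the norm of f_3 on a fine grid:
r = np.linspace(1e-4, 80.0, 400001)
print(trapezoid(f_basis(3, r) ** 2, r))    # should come out close to 1
```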
The radial relative wave function is defined as u^J_β(R)=RΨ^J_β(R), where R is the relative distance between the centres of mass of ^6Li and the target. To specify the wave functions, the orbital angular momenta of the projectile-target relative motion in the incoming and outgoing channels are required, namely L and L'. The radial wave functions at J^π=1^+, 4^- and 10^+ are calculated for comparison. The partial wave fusion cross section σ_J reaches a maximum at J=4, which is also the cut-off angular momentum for this reaction (see Sec. <ref> for more details). In the present work, we study the influence of ε_max and W_0 on the wave functions of elastic scattering, 3^+ resonance breakup and closed channels. The 3^+ resonance state of ^6Li is represented by the pseudo-state in the ^3D_3 partial wave with eigenenergy 0.7111 MeV. For the closed-channel wave functions, the u^J_β of the pseudo-state in the ^3S_1 partial wave with eigenenergy 11.5108 MeV are calculated. As the wave functions are used to calculate fusion cross sections as shown in Eq. (<ref>), we focus on the square of the absolute value of the wave functions, | u_β^J |^2. With a fixed W_0=50 MeV, the wave functions are calculated with ε_max=0, 10, 20, 30 and 40 MeV. Fig. <ref> shows | u_β^J |^2 of elastic scattering. L=L'=0, 5 and 10 for J^π=1^+, 4^- and 10^+ respectively. It is observed that | u_β^J |^2 converges well with ε_max=10 MeV, which coincides with the generally adopted ε_max for obtaining converged ^6Li elastic scattering angular distributions. The | u_β^J |^2 for 3^+ resonance breakup are plotted in Fig. <ref> for comparison. L=0, 5 and 10, and L'=2, 7 and 8 for J^π=1^+, 4^- and 10^+ respectively. Different from the elastic scattering, the convergence of | u_β^J |^2 for 3^+ resonance breakup requires ε_max as high as 40 MeV, especially in the inner region (R < 15 fm). In Fig. <ref>, the | u_β^J |^2 of the closed channels are shown. Although the wave functions are not completely converged with ε_max=40 MeV, we found that the small differences between the closed-channel wave functions calculated with ε_max=30 and 40 MeV have little effect on the fusion cross sections. In Eq. (<ref>), the partial wave fusion cross sections are calculated by an integration method. We define F^Jπ(R)=K/E∑_ββ^'Ψ_β^Jπ*( R )W_ββ'^Jπ( R )Ψ_β'^Jπ( R ). Therefore σ_J=∑_π∫F^Jπ(R) dR. One point worth emphasizing is that F^Jπ depends on J, π and L. The summation over L is necessary for a projectile with non-zero spin in the calculation of σ_J, but we omit it in Eq. (<ref>) for simplicity. As the range of W_ββ^'^Jπ is in general no more than 15 fm, it can be deduced that F is sensitive to the wave functions in the inner region, and the convergence of F also requires a high ε_max. The F^Jπ at J^π=1^+, 4^- and 10^+ are plotted in Fig. <ref> with L=0, 5 and 10 respectively. Convergence within a deviation of less than 2% is reached with ε_max=40 MeV. The F^Jπ calculated with ε_max=0 and 40 MeV have the same order of magnitude at J^π=1^+ and 4^-, but the former becomes about 10 times smaller than the latter at J^π=10^+. We multiply the F^Jπ calculated with ε_max=0 at J^π=10^+ by 10 for easier viewing. A more detailed discussion of the influence of ε_max on σ_J will be given in Sec. <ref>. On the other hand, the maximum of F^Jπ is located at R=3.4 fm when ε_max=0 and moves outwards with increasing ε_max. The peak of the converged F^Jπ at J^π=10^+ is even located outside the Coulomb barrier radius R_B=9.1 fm <cit.>. 
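A schematic of how the integrand F^Jπ(R) and the corresponding partial-wave cross section can be assembled from channel wave functions and the imaginary couplings is given below. The arrays and numbers are invented for illustration, and the overall normalisation and sign follow whatever convention is adopted for W_ββ'^Jπ; this is not the authors' code.

```python
import numpy as np
from scipy.integrate import trapezoid

def partial_wave_fusion(R, psi, W, K, E):
    """F(R) = (K/E) sum_{b,b'} Psi_b^*(R) W_{bb'}(R) Psi_b'(R), integrated over R.
    psi: array (n_channels, n_R) of radial channel wave functions;
    W:   array (n_channels, n_channels, n_R) of imaginary couplings."""
    F = (K / E) * np.einsum('br,bcr,cr->r', np.conj(psi), W, psi).real
    return trapezoid(F, R), F

# toy example: two channels and a short-ranged absorptive coupling around R ~ 3 fm
R = np.linspace(0.1, 20.0, 400)
psi = np.array([np.sin(0.5 * R), 0.3 * np.sin(0.4 * R)])
W = 5.0 * np.exp(-((R - 3.0) / 1.0) ** 2) * np.ones((2, 2, R.size))   # magnitude only
sigma_J_like, F = partial_wave_fusion(R, psi, W, K=1.0, E=12.0)
print(sigma_J_like)
```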
The above comparisons indicate that the continuum coupling effect not only increases the partial wave fusion cross sections at relatively higher partial waves but also fully changes the fusion mechanism. Another worth-considering issue is that the total fusion cross section is independent of the imaginary part of optical potentials when ε_max is large enough. With a fixed ε_max=40 MeV, we investigate how the u^J_β changes as the potential depth of the imaginary part varies. Fig. <ref> shows the | u_β^J |^2 calculated with W_0=20, 50 and 80 MeV at J^π=1^+. As W_0 increases from 20 to 80 MeV, the | u_β^J |^2 of elastic scattering have hardly any changes, while the | u_β^J |^2 of 3^+ resonance breakup and closed channel vary visibly and their values are negatively correlated with W_0 in the very inner regions R < 4 and 7 fm respectively. For example, the | u_β^J |^2 of closed channel calculated with W_0=20, 50 and 80 MeV all reach a peak at R=3.75 fm and their values are 0.0013, 0.0005 and 0.0003 respectively. Comparisons at J^π=4^- and 10^+ are presented in Figs. <ref> and <ref>. The | u_β^J |^2 of elastic scattering is stable against W_0. The value of | u_β^J |^2 of 3^+ resonance breakup is still negatively correlated with W_0 at J^π=4^- in the region R < 4 fm, but it becomes independent of W_0 at J^π=10^+. However, for the | u_β^J |^2 of closed channel, the negative correlation with W_0 is kept at both J^π=4^- and 10^+. Fig. <ref> shows the integrands F^Jπ calculated with W_0=20, 50 and 80 MeV at J^π=1^+, 4^- and 10^+. The F^Jπ calculated with W_0=20 MeV is flatter than those calculated with W_0=50 and 80 MeV. Fortunately, although the three kinds of F do not coincide perfectly, their integrations are almost the same, which will not affect the subsequent analysis. Furthermore, an overall comparison of partial wave fusion cross sections calculated with different W_0 is presented in Fig. <ref>. σ_J at each J is indeed independent of W_0 when ε_max=40 MeV. In Fig. <ref>, an example of the convergence of TF cross sections calculations is shown for the ^6Li+^64Ni system at energies around the Coulomb barrier. In these calculations, W_0=50.0 MeV. The CDCC calculations with ε_max=40.0 and 45.0 MeV are almost the same when l=0, 1 and 2 states are included. The inclusion of the states with higher l has an invisible effect on TF cross sections. Therefore, the convergence of TF cross section calculations in the present work is ensured. § CALCULATED RESULTS AND ANALYSIS In this section, we study the continuum coupling effect on total and complete fusion. The CDCC calculations are performed with W_0=50.0 MeV and l=0, 1 and 2 continuum states. Four types of calculations are performed: (1) No continuum states are included, i.e., single channel calculations; (2) Continuum states below ε_max=6.0 MeV are included, which include the resonance states; (3) Continuum states below ε_max=ε_th are included, closed channels are ignored; (4) Continuum states below ε_max=40.0 MeV are included. It should be emphasised that the second type of calculation is not made for ^6Li+^28Si system at some low incident energies to avoid the inclusion of closed channels. The fourth type of calculation is not performed for ^6Li+^209Bi system at a few high energies, where ε_th is larger than 40.0 MeV. Although the second and third types of calculations can not provide converged TF cross sections, they can be applied to analyse the coupling effect of the continuum states in different energy regions. 
In the present work, we mainly focus on the coupling effect from the continuum states above the resonance energy regions (ε≥ 6.0 MeV). The coupling effects of open and closed channels can be distinguished by comparing the results of the third and fourth types of calculations. §.§ Continuum coupling effect on total fusion Figures <ref> and <ref> show the calculated TF cross sections for ^6Li+^28Si, ^64Ni, ^144Sm and ^209Bi systems, and their comparison with the experimental data <cit.>. The two figures are shown in logarithmic and linear scale, convenient to see the TF cross sections at energies below and above the Coulomb barrier respectively. In general, the agreement between the converged CDCC results and the measured value is reasonably good in the sub-barrier region, but the converged CDCC results overestimate the experimental data at energies above the Coulomb barrier for ^64Ni, ^144Sm and ^209Bi targets. This overestimation may be resulted by the following reasons: (1) The adopted nuclear potentials for α and d with targets (SPP2) are the systematic research results for various elastic scattering experimental data and it is expected that there are some disagreements for specific reaction systems. (2) In the present work, only the ^6Li→α+d channel is taken into account. However, there are other important reaction channels, such as n-transfer, which can influence the TF cross sections. (3) Rath et al. <cit.> measured the incomplete fusion cross section for ^6Li+^144Sm by summing the detected cross sections for d- and α-capture, in which not all possible evaporation residue channels were included. Therefore, the values of the ICF and TF(=CF+ICF) can be considered as the lower limit. Similarly, Dasgupta et al. <cit.> measured the fusion cross sections for ^6Li+^209Bi system by detecting α particles emitted by the evaporation residues of the compound nuclei. However, they missed the contribution from ^209Po, which could not be measured due to its long half-life (about 200 yr). According to the statistical model estimation <cit.>, the contribution from ^209Po was excepted to be significant at the high energies. For all the four collision systems, it can be seen in Figs. <ref> and <ref> that the results of the CDCC calculations become close to the single channel results in the above-barrier region by varying ε_max from 6.0 MeV to 40.0 MeV. In the sub-barrier region, the calculated TF cross sections are enhanced by increasing ε_max. To determine the continuum coupling effect on TF cross section, we define Γ _i=σ _TF( i )/σ _TF( 1 )-1,i=2,3,4. The σ_TF(i) (i=1,2,3,4) denotes the TF cross section obtained by the i-th type of calculation as listed in the beginning of Sec. <ref>. Fig. <ref> presents the Γ_i-values (i=2, 3 and 4) for ^6Li+^28Si, ^64Ni, ^144Sm and ^209Bi systems. Compared with the TF cross sections obtained without continuum states coupling, there is a 10-15% suppression at energies above the Coulomb barrier for these reaction systems when ε_max=6.0 MeV (See the results of Γ_2). However, this suppression is much reduced for ^6Li+^28Si and eliminated for ^6Li+^64Ni, ^144Sm and ^209Bi when the converged result is obtained with ε_max=40.0 MeV (See the results of Γ_4). In the sub-barrier region, the theoretical TF cross sections are further enhanced by increasing ε_max. A detailed discussion is given below. 
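The ratio Γ_i defined above is simple bookkeeping; a minimal helper, with invented cross sections, might look as follows.

```python
def coupling_effect(sigma_TF):
    """Gamma_i = sigma_TF(i)/sigma_TF(1) - 1, with index 1 the single-channel
    calculation (list order follows the four types of calculations defined above)."""
    base = sigma_TF[0]
    return [s / base - 1.0 for s in sigma_TF[1:]]

# toy cross sections (mb) for calculation types 1-4 at one energy
print(coupling_effect([700.0, 615.0, 640.0, 698.0]))
```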
For the ^6Li+^28Si system, with the ε_max increasing from 6.0 MeV to ε_th, the suppression remains nearly unchanged in the region 1.0 ≤ E_cm / V_B ≤ 1.8 and is moderately reduced in 1.8 ≤ E_cm / V_B ≤ 2.5 (See the results of Γ_3). Only when the closed channels are taken into CDCC calculations can the suppression effect be negligible. In higher incident energies, the TF cross sections obtained with ε_max=ε_th and 40.0 MeV are almost the same, indicating that the coupling effect of closed channels can be ignored in this high energy region. In the sub-barrier region, a much stronger enhancement is provided by the calculation with ε_max=40.0 MeV, where the coupling effect of closed channels is completely taken into account. For the medium mass targets ^64Ni and ^144Sm, the calculated results obtained with ε_max=ε_th are slightly smaller than the single channel results in a very narrow region around E_cm. But the converged CDCC calculation, which includes the closed channels, has only an enhanced effect on σ_TF at energies below the Coulomb barrier. When E_cm > 1.1 V_B, the values obtained with ε_max=ε_th and 40.0 MeV are almost the same as the single channel results. In the energy region E_cm < 0.9 V_B, the CDCC calculations with ε_max=ε_th and 40.0 MeV provide a similar enhancement effect on TF cross section, which is stronger than that given by the calculation with ε_max=6.0 MeV. As the Coulomb barrier for ^6Li+^209Bi system is quite high, the results obtained with ε_max=40.0 MeV are almost the same as those obtained with ε_max=ε_th. Their results are approximately equal to the results obtained without continuum states coupling at energies above the Coulomb barrier. In the sub-barrier region, the TF cross sections are further enhanced by taking higher energy continuum states (ε≥ 6.0 MeV) into CDCC calculations. The above results seem to suggest that the coupling effects from low-energy continuum states (0≤ε≤ 6.0 MeV) and higher-energy continuum states (6.0 ≤ε≤ 40.0 MeV) add destructively for the total fusion cross sections at energies above the Coulomb barrier. In the end, the converged CDCC results provide nearly the same total fusion reactions as single-channel calculations. This is an interesting observation. §.§ Continuum coupling effect on complete fusion Based on the converged TF cross sections and the experimental data for complete fusion <cit.>, we apply the sum-rule model to extract the cut-off angular momentum J_c and CF cross section σ_CF, as shown in Eq. (<ref>). In the early researches <cit.>, the sum-rule model was applied to the reactions at incident energies well above the Coulomb barrier. In these cases, CF and ICF processes can be well separated by a critical angular momentum, J_crit, which is nearly independent of the incident energy. Recently, Mukeru et al. combined this model with CDCC method to study the fusion of weakly bound nuclei at energies around the Coulomb barrier, such as ^8Li+^208Pb <cit.> and ^9Be+^124Sn, ^144Sm and ^208Pb <cit.>. The angular momentum to separate the CF and ICF processes, J_c, was found to be incident-energy-dependent. Following the method of Mukeru et al. <cit.>, J_c can be linked to the critical angular momentum J_crit with an analytical expression J_c={ 1-exp[ -C_0( E_cm/V_B-C_1 ) ] } J_crit. J_crit is calculated with the formalism in Ref. <cit.>. C_0 and C_1 are obtained by fitting J_c. Their values are listed in Table <ref>. The extracted J_c and the fitting curves are plotted in Fig. <ref>. 
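A minimal sketch of this fitting step is given below: C_0 and C_1 are adjusted to reproduce a set of extracted cut-off angular momenta with scipy.optimize.curve_fit, after which σ_CF follows from the sum rule. The J_crit value and all data points are illustrative placeholders, not those of the table or figures.

```python
import numpy as np
from scipy.optimize import curve_fit

J_crit = 8.0        # critical angular momentum from the cited formalism (assumed value)

def Jc_model(x, C0, C1):
    # J_c(E) = {1 - exp[-C0 (E_cm/V_B - C1)]} J_crit ; negative values are set to 0
    return (1.0 - np.exp(-C0 * (x - C1))) * J_crit

# illustrative "extracted" cut-off angular momenta versus E_cm/V_B
x_data  = np.array([0.90, 1.00, 1.10, 1.30, 1.60])
Jc_data = np.array([1.0,  3.0,  4.5,  6.0,  7.0])
(C0, C1), _ = curve_fit(Jc_model, x_data, Jc_data, p0=[2.0, 0.85])
print(f"C0 = {C0:.2f}, C1 = {C1:.2f}")

# complete fusion then follows from the sum rule, sigma_CF = sum_{J <= J_c} sigma_J
sigma_J = np.array([4.0, 11.0, 17.0, 20.0, 18.0, 12.0, 5.0, 1.0])   # toy partial waves (mb)
J_c = int(round(max(Jc_model(1.1, C0, C1), 0.0)))
print("sigma_CF =", sigma_J[:J_c + 1].sum(), "of sigma_TF =", sigma_J.sum())
```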
The fitted value is set to 0 when it is negative. As the energy decreases, it can be seen obviously that the fitting curve for ^6Li+^28Si returns to zero at E_cm/V_B ≈0.6 while those for other systems return to 0 at E_cm/V_B ≈0.8. As there is no available CF data for ^6Li+^28Si system in the sub-barrier energy region, this difference is expected to be explained with further experimental and theoretical studies. The calculated CF cross sections are compared with experimental data in Fig. <ref>. Good agreement is obtained between theory and experiment. To analyse the continuum coupling effect on CF process, we pay close attention to the partial wave fusion cross sections σ_J in the partial waves below the cut-off angular momentum J_c, at energies below, near, above and well above the Coulomb barrier. In addition, it is practical to apply J_c for the single channel calculation to extract the cross section σ_CF^1 ch, which can be regarded as the CF cross section without continuum coupling. It can be compared with σ_CF to determine the continuum effect on complete fusion quantitatively. Fig. <ref> shows the partial wave fusion cross sections σ_J for ^6Li+^28Si system at E_cm=5.0, 7.0, 9.0 and 12.0 MeV. At the first energy point which is below the Coulomb barrier, the σ_J of the single-channel calculation and the CDCC calculation with ε_max=ε_th at J≤ J_c are almost the same. It suggests that in this case, the continuum states of open channels have little coupling effect on complete fusion, although the 3^+ and 2^+ resonance states have been taken into account. Only when the closed channels are taken into calculations will the σ_J in lower angular momenta be significantly enhanced. At E_cm=9.0 and 12.0 MeV, the converged σ_J in J ≤ J_c is slightly larger than those calculated with ε_max=6.0 MeV and ε_th. But they are all visibly smaller than the single channel results in J_c-2 ≤ J ≤ J_c. Because of the continuum coupling, the σ_CF in the above-barrier region is smaller than σ_CF^1 ch about 5-20%, which is consistent with the suppression factor of 15% given by Mandira and Lubian <cit.>. The partial wave fusion cross sections for ^6Li+^64Ni system are presented in Fig. <ref>. There is a noticeable issue shown in Fig. <ref>(a). At the energy (11.0 MeV) below the Coulomb barrier, the CDCC calculated σ_J in J ≤ J_c decreases firstly when ε_max increases from 6.0 MeV to ε_th and it then increases by including closed channels. Eventually, the converged σ_J is slightly larger than the single channel result in J ≤ J_c. As the σ_J in J ≤ J_c obtained with the single channel calculation and the CDCC calculation with ε_max=6.0 MeV are almost the same, it can be concluded that the enhancement on σ_CF mainly comes from the continuum states above 6.0 MeV and the closed channels have a fundamental coupling effect. At E_cm=12.0, 14.0 and 17.0 MeV, the σ_J in J ≤ J_c obtained with ε_max=40.0 MeV are moderately larger than the results of other two kinds of CDCC calculations, while they are all smaller than the single channel result in J_c-2 ≤ J ≤ J_c observably. By comparing σ_CF^1 ch and σ_CF, a 13-18% suppression factor is given at energies above the Coulomb barrier, in good agreement with the result of 13±7% in Ref. <cit.>. In Fig. <ref>, the partial wave fusion cross sections σ_J for ^6Li+^144Sm system are plotted at E_cm=22.0, 25.0, 27.0 and 30.0 MeV. As ε_max increases from 6.0 MeV to 40.0 MeV, the σ_J in J ≤ J_c is slightly enlarged for all four energy points. 
The σ_CF is larger than the σ_CF^1 ch at E_cm=22.0, while it becomes smaller about 5-25% at other three energies because of the reduction of σ_J in J_c-2 ≤ J ≤ J_c. It is reasonably consistent with the suppression of 32±5% in Ref. <cit.>. The partial wave fusion cross sections σ_J for ^6Li+^209Bi are presented in Fig. <ref> at E_cm=28.0, 30.0, 32.0 and 35.0 MeV. The results obtained with ε_max=ε_th and 40.0 MeV are almost the same in all partial waves, suggesting that the closed channels have hardly any coupling effect on this reaction system. On the other hand, the σ_J in J ≤ J_c is visibly enlarged at E_cm=28.0 and 30.0 MeV by increasing ε_max from 6.0 MeV to ε_th, showing the obvious coupling effect of the continuum states above the resonance energy region. With the same improvement of ε_max, there is a slight enhancement on σ_J in J ≤ J_c at E_cm=32.0 and 35.0 MeV. Similarly with other reaction systems, σ_CF is higher than σ_CF^1ch at E_cm=28.0 MeV but becomes lower about 10-35% at higher energies, which agrees with the suppression of 36±3% provided by Dasgupta et al. <cit.>. In addition, for all four reaction systems, we found that: (1) At energies below the Coulomb barrier, the σ_J obtained by the converged CDCC calculation is larger than that obtained by single channel calculation at each J. Based on the sum-rule model, it indicates that continuum coupling increases both the complete fusion and incomplete fusion cross sections in the sub-barrier energy region. (2) At energies above the Coulomb barrier, the single channel calculated σ_J is larger than that obtained by the converged CDCC calculation in angular momentum region around J_c but it becomes smaller at relatively higher J. Finally, the single channel calculation and the converged CDCC calculation provide close values for the total fusion cross section, as shown in Fig. <ref>. It can be deduced that continuum coupling reduces and increases the probabilities of CF and ICF processes respectively in the above-barrier energy region. As our present work overestimates the TF cross section at energies above the Coulomb barrier, we do not further investigate the continuum coupling effect on incomplete fusion in this paper. A comprehensive study of the continuum coupling effect on ^6Li fusion will be done in the future with a good description of CF, ICF and TF processes. § SUMMARY AND CONCLUSION CDCC calculations have been presented for the fusion reactions of weakly bound projectile ^6Li with ^28Si, ^64Ni, ^144Sm and ^209Bi targets. The inclusion of continuum states up to 40 MeV was found necessary for the convergence of the total fusion cross sections, which means that the inclusion of closed channels in CDCC calculations is necessary for ^6Li fusion with light and medium mass targets, such as ^28Si, ^64Ni and ^144Sm, at incident energies around the Coulomb barriers. Reasonable agreement between the calculated TF cross section and experimental data is obtained. At energies above the Coulomb barrier, it is found that the continuum coupling effects induced by low-energy (0≤ε≤ 6.0 MeV) and higher-energy (6.0 ≤ε≤ 40.0 MeV) continuum states are different for TF cross sections: the former reduces the TF cross sections by around 10-15%, but the latter increases the TF cross sections. Therefore the final converged results are nearly the same as the results of single channel calculations, in which no continuum coupling effects were taken into account. 
In the sub-barrier region, the calculated TF cross section is further enhanced by including the higher-energy continuum states. The sum-rule model has been adopted to extract the complete fusion cross section as well as the cut-off angular momentum J_c. The coupling effect of continuum states cannot be ignored for the complete fusion process, especially at energies below the Coulomb barrier. In particular, the enhancement of the complete fusion cross section in the sub-barrier region is dominated by closed channels for the ^6Li+^28Si, ^64Ni and ^144Sm systems. At energies above the Coulomb barrier, the CDCC-calculated complete fusion cross section can be slightly increased by taking the higher-energy continuum states into account, while the converged result is still smaller than the single-channel result. In general, the extracted suppression factors for complete fusion in the above-barrier region are consistent with previous studies of these reaction systems. It is found that the suppression mainly occurs in the angular momentum region J_c-2 ≤ J ≤ J_c, independently of the fusion system. We believe that this independence calls for further full-quantum investigations. Finally, as the coupling effect of the higher-energy continuum states (including both open and closed channels) is significant for ^6Li-induced fusion reactions, it is of interest to study their effect on fusion reactions induced by other weakly bound nuclei, such as ^7Li and ^9Be. Related research is in progress. This work was financially supported by the National Natural Science Foundation of China (Grant No. U2067205).
http://arxiv.org/abs/2306.12216v1
20230621121829
The Jost function and Siegert pseudostates from R-matrix calculations at complex wavenumbers
[ "Paul Vaandrager", "Jérémy Dohet-Eraly", "Jean-Marc Sparenberg" ]
quant-ph
[ "quant-ph", "nucl-th" ]
The single-channel Jost function is calculated with the computational R-matrix on a Lagrange-Jacobi mesh, in order to study its behaviour at complex wavenumbers. Three potentials derived from supersymmetric transformations are used to test the accuracy of the method. Each of these potentials, with s-wave or p-wave bound, resonance or virtual states, has a simple analytical expression for the Jost function, which is compared with the calculated Jost function. Siegert states and Siegert pseudostates are determined by finding the zeros of the calculated Jost function. Poles of the exact Jost function are not present in the calculated Jost function due to the truncation of the potential in the R-matrix method. Instead, Siegert pseudostates arise in the vicinity of the missing poles. § INTRODUCTION The Jost function is a fundamental concept in non-relativistic quantum scattering theory <cit.>. It contains all the information required for studying a scattering system. In particular, the asymptotic behaviour of a wave function can be given in terms of the Jost function and the scattering matrix (S-matrix) can then be defined as a ratio of Jost functions in a simple way. The Jost function thus has simpler analytic properties compared to those of the S-matrix. Scattering observables such as the phase shift or scattering cross-section can be calculated from the Jost function, and its zeros at specific complex wavenumbers, k, correspond exactly with simple poles of the S-matrix. Quantum states with such wavenumbers are bound, virtual or resonance states, known as Siegert states collectively <cit.>. In most numerical calculations, zeros of the Jost function appear which do not correspond with Siegert states only. In particular, zeros of the Jost function occur for calculations where the interaction potential of a quantum system is truncated at some radius sufficiently far from the interaction region, where the interaction potential is approximately zero. Such zeros of the Jost function correspond with the so-called Siegert pseudostates, which satisfy specific boundary conditions at the radius where the potential is truncated <cit.>. Apart from zeros, it is known that the Jost function may have poles for certain k with Re(k)=0 and Im(k)<0. This corresponds with zeros of the S-matrix <cit.>. Interaction potentials with explicit, analytical expressions for the Jost functions can be derived, where such poles are present. However, these poles generally do not appear for numerical calculations of the Jost function, since the interaction potential is truncated in most numerical methods of calculating the Jost function or S-matrix. See Refs. <cit.>, for example, where no poles are present in the derived analytic structure of the Jost function, on the assumption that the potential is zero at a large radius. Zeros of the calculated Jost function with wavenumbers in the vicinity of an exact Jost-function pole do arise, instead of a pole <cit.>. 
These are specific Siegert pseudostates which effectively replace the Jost-function pole. In this study, numerical calculations are performed using the R-matrix formalism, where the configuration space is divided into two regions separated by the channel radius, a. This serves as the cut-off radius where the potential is truncated. In the internal region, where r≤ a, the wave function is expanded on a square-integrable basis. In the external region where r ≥ a, it is approximated by its asymptotic behaviour. It has been shown that the computational R-matrix on a Lagrange mesh can be used to determine phase shifts and cross-sections accurately, as well as bound state energies and resonance parameters, for various potentials <cit.>. The method is extended here to calculate the single-channel Jost function at complex k. Short-ranged model potentials with known resonances and bound or virtual states, and with simple analytical expressions for the corresponding Jost functions for ℓ=0 and ℓ=1, can be constructed from supersymmetric transformations <cit.>. These potentials are then used to test the accuracy of the calculated Jost functions determined from the computational R-matrix. The behaviour of the calculated Jost functions on the entire complex k-plane is explored. In particular, it is used to calculate the Siegert states and Siegert pseudostates for each of the model potentials. Various procedures for finding Siegert states and Siegert pseudostates with the computational R-matrix exist, but there is little comparison in the accuracy of the methods. The results obtained by finding the zeros of the Jost function are compared with the method from Refs. <cit.>, which can only be applied to scattering problems where ℓ =0. In the next section, the Jost function, Siegert states and Siegert pseudostates are discussed. Section <ref> includes an overview of the R-matrix on a Lagrange mesh and gives the method of calculating the Jost function from the R-matrix. The test potentials are discussed in Section <ref>. In the results section, the calculated Jost functions are compared with the exact Jost functions. Calculated wavenumbers of the Siegert states and Siegert pseudostates are given and compared with ones obtained by other methods, where possible. The conclusion follows. § THE JOST FUNCTION, SIEGERT STATES AND SIEGERT PSEUDOSTATES §.§ The Jost function and Siegert states The radial Schrödinger equation is given by H_ℓ u_ℓ = E u_ℓ with the two-body Hamiltonian given by: H_ℓ = T_0 + T_ℓ + V_S(r) + V_C(r) , where V_S(r) is the short-ranged radial potential, which is finite at the origin and which goes to zero faster than r^-2 as r→∞. The Coulomb potential is given by V_C(r) = e^2 Z_1 Z_2/r and the kinetic operators by T_0 = -ħ^2/2μd^2/dr^2 T_ℓ = ħ^2/2μℓ(ℓ+1)/r^2 , where μ is the reduced mass of the two-body system. The square of the channel momentum and Sommerfeld parameter are given as follows, respectively: k^2 = 2μ E/ħ^2η = μ e^2 Z_1 Z_2/kħ^2. The regular solution, ϕ_ℓ(k,r), of eq. (<ref>) is defined by its behaviour at the origin, but the convention for its normalisation differs in some texts. Here it is chosen to behave exactly like the regular Coulomb function F_ℓ(kr,η) <cit.>: ϕ_ℓ(k,r) F_ℓ(kr,η) C_ℓ(η) (kr)^ℓ+1, where the Coulomb barrier factor is given by C_ℓ(η) = 2^ℓ e^-πη/2/Γ(2ℓ + 2)|Γ(ℓ+1 ± iη)|. When there are no Coulomb interactions, η = 0 and C_ℓ(0) = 1/(2ℓ+1)!!. The normalisation of the regular solution used here then becomes identical to that used in Ref. 
<cit.>, which is chosen to be the same for Coulomb- and non-Coulomb interactions. The single-channel Jost function is defined as the energy-dependent amplitudes of the incoming and outgoing spherical waves of the regular solution to the radial wave-equation, at the limit far from the interaction region <cit.>. The asymptotic behaviour of the regular solution is given as follows <cit.>: ϕ_ℓ(k,r) i/2[ f_ℓ(k) I_ℓ(kr,η) - f_ℓ(-k) O_ℓ(kr,η) ], where f_ℓ(k) is the Jost function. The functions I_ℓ(kr,η) and O_ℓ(kr,η) correspond with the incoming and outgoing spherical waves, respectively, and are given in terms of the regular and irregular Coulomb functions as follows: I_ℓ(kr,η) = G_ℓ(kr,η) - iF_ℓ(kr,η) O_ℓ(kr,η) = G_ℓ(kr,η) + iF_ℓ(kr,η). They have the following useful symmetry relation, which can be proven from the properties of the Coulomb functions given in Ref. <cit.> or, in greater detail, in Ref. <cit.>: I^*_ℓ(-kr,-η) = (-1)^ℓ e^-πη O_ℓ (kr,η). The scattering matrix, S_ℓ(k), is defined in terms of the Jost function by the following <cit.>: S_ℓ(k) = f_ℓ(-k)/f_ℓ(k). The Jost function has the symmetry property <cit.>: f_ℓ(-k) = f_ℓ^*(k^*). Siegert states are defined as complex energy eigenstates of eq. (<ref>). The boundary conditions of the regular solution for the Siegert states are given by eq. (<ref>) at the origin, and by the following relation for large r <cit.>: d/drϕ(k,r →∞) = ik ϕ(k,r →∞). Using eq. (<ref>) for the regular solution where r →∞, it can be deduced that the condition only holds for states with wavenumber, k, where the Jost function, f_ℓ(k), is zero, which implies a purely outgoing wave function. This is the case for bound, resonance and virtual states <cit.>, and of course corresponds with poles of the S-matrix by eq. (<ref>). Bound states occur for Re(k) = 0, Im(k)>0; virtual states for Re(k) = 0, Im(k) < 0; resonance states for Re(k) > 0, Im(k) < 0 and mirror-resonances symmetrical to physical resonances for Re(k)<0, Im(k) < 0 <cit.>. The poles of the S-matrix that occur for zeros of the Jost function, f_ℓ(k), are the “true’’ poles of the S-matrix and are the Siegert states. However, poles of the S-matrix also correspond with poles of f_ℓ(-k), the numerator in eq. (<ref>). These are known as the false poles of the S-matrix <cit.>, and do not have clear physical meaning like true S-matrix poles. See Ref. <cit.> for further details. It is important to note that the Jost function as defined in Ref. <cit.> and most other texts, is normalised by the behaviour of eq. (<ref>) at the origin. Different normalisations result in a Jost function differing by an energy-dependent factor. This factor has no impact on the S-matrix, since it cancels out in eq. (<ref>) for real scattering energies, where phase shifts or cross-sections are calculated. Or, when the zeros of the Jost function are determined to locate Siegert states, the energy-dependent factor (which is finite) does not affect the search for zeros. See Ref. <cit.> for further details. §.§ Truncation of the potential and Siegert pseudostates In most numerical calculations, at some r=a the interaction potential is truncated: V( r≥ a) =0. Consequently, the Siegert boundary condition for the regular solution is used instead of eq. (<ref>): dϕ(k,a)/dr = ik ϕ(k,a). This approximation still gives accurate results when calculating physical quantities of interest, in particular the Siegert states and phase shifts, provided a is chosen large enough. 
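The definition above can be made concrete for ℓ=0 and η=0 (taking ħ=μ=1, as in the test potentials considered later): integrating the regular solution of a truncated short-range potential out to r=a and matching it to (i/2)[f(k)e^{-ikr} - f(-k)e^{ikr}] yields f(±k) and hence S_0(k). The Gaussian potential in the sketch below is a hypothetical example, not one of the potentials studied in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

def V(r):
    # hypothetical short-range potential (not one of the test potentials below)
    return -2.0 * np.exp(-r**2)

def jost_pair(k, a=10.0):
    """Integrate the l = 0 regular solution (phi ~ kr at the origin) to r = a and
    match it to (i/2)[f(k) exp(-ikr) - f(-k) exp(+ikr)]; returns (f(k), f(-k))."""
    rhs = lambda r, y: [y[1], 2.0 * (V(r) - 0.5 * k**2) * y[0]]   # hbar = mu = 1
    sol = solve_ivp(rhs, (1e-6, a), [k * 1e-6, k], rtol=1e-10, atol=1e-12)
    u, up = sol.y[:, -1]
    M = np.array([[np.exp(-1j * k * a),           np.exp(1j * k * a)],
                  [-1j * k * np.exp(-1j * k * a), 1j * k * np.exp(1j * k * a)]])
    A, B = np.linalg.solve(M, [u, up])
    return -2j * A, 2j * B            # f(k) = -2i A,  f(-k) = 2i B

k = 0.7
fk, fmk = jost_pair(k)
print("f(k) =", fk, "  S_0(k) =", fmk / fk, "  |S_0| =", abs(fmk / fk))   # |S_0| ~ 1
```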
A consequence of its implementation is the occurrence of the so-called Siegert pseudostates, which are also complex energy eigenvalues of (<ref>), but do not have the same strong physical meaning as the Siegert states. The Siegert pseudostates can occur for positive or negative Re(k) and Im(k) < 0, and thus resemble wide resonances. See Refs. <cit.> for a detailed discussion. The Siegert pseudostates are, furthermore, related to exact Jost-function poles. The Jost function is mostly entire in k, but it may have poles in a certain region of the complex k-plane, specifically along the negative imaginary k-axis <cit.>. However, the analytic structure of the Jost function for scattering involving short-ranged and Coulomb interactions is explicitly given in Refs. <cit.>, and possible poles on the imaginary axis of k between -i∞ and 0 are not accounted for. This is because the Jost function is only analytic on the whole k-plane for truncated potentials, apart from further possible poles at k=0. With the computational R-matrix method, the potential is also truncated at some r=a, which should result in a Jost function which is analytic in k. All the potentials used in this study have exact Jost-function poles at certain negative imaginary values of k. However, the calculated Jost function has a finite value dependent on the choice of a at the poles of the exact Jost function. The poles of the exact Jost function manifest in the calculated Jost function by the behaviour of the Siegert pseudostates, which are distributed around the actual pole, as will be seen in Section <ref>. Since poles of the calculated Jost function (and so f(-k)) do not occur and are replaced by Siegert pseudostates, false poles of the calculated S-matrix do not occur either, but the corresponding Siegert pseudostates must appear as zeros of the S-matrix. From eq. (<ref>), it is clear that each Jost-function zero is an S-matrix pole and each Jost-function pole is an S-matrix zero. Furthermore, for each S-matrix pole at a specific k, an S-matrix zero will occur at -k. When considering Siegert states, Siegert pseudostates and false poles, this makes the analytic structure of the S-matrix much more complicated than the analytic structure of Jost function, which is why it is preferred to study the Jost function. § THE R-MATRIX METHOD §.§ R-matrix formalism A comprehensive derivation and explanation of the R-matrix method is given in Refs. <cit.>, for example. It must firstly be noted that there are, in fact, two approaches in determining the R-matrix for a scattering problem: by fitting the R-matrix to experimental data, known as the phenomenological R-matrix method, or by calculating the R-matrix from a model potential, known as the computational R-matrix method. We restrict ourselves to determine the single-channel Jost function from the computational R-matrix. In the R-matrix method, the configuration space is divided into an internal region and an external region, with a boundary at the channel radius, r=a, beyond which the potential is approximately zero: in other words, the potential is truncated. In the internal region with r ≤ a, the wave function is expanded on some finite basis of N linearly independent, square-integrable basis functions: u_ℓ(k,r) = ∑_i = 1^N c_i φ_i(r), 0 ≤ r ≤ a, which disappear at r=0 and satisfy arbitrary boundary conditions at r=a. 
In the space of the basis functions, the Hamiltonian operator, H_ℓ, is not Hermitian nor does it admit a complete set of eigenvectors, since the basis functions have arbitrary boundary conditions at r=a. To overcome this, the Bloch operator is introduced, defined by: ℒ(B) = ħ^2/2 μδ(r-a) ( d/dr - B/r), where B is known as the boundary parameter, which can be chosen arbitrarily. It can be shown that the operator, [H_ℓ+ℒ(B) ], is Hermitian for real B in the space of the basis functions <cit.>. This implies that it does admit a complete set of eigenvectors, and is thus diagonalisable. However, there are instances where B is chosen to be complex, in which case the operator, [H_ℓ+ℒ(B) ], can no longer be Hermitian. It can, however, be shown that it still admits a complete set of N eigenvectors, v_n ℓ, with eigenvalues, E_nℓ for any complex B, in the space of the basis functions. It is still diagonalisable, which is important later. Because of the Bloch operator, and at the limit of infinite basis, the eigenfunctions of [H_ℓ+ℒ(B) ] have a fixed logarithmic derivative at the boundary r=a, the value of which is proportional to B. This is not the case for a finite basis, for which non-converged eigenfunctions may display different boundary behaviours. It might thus be tempting to enforce the boundary condition by choosing basis functions, φ_i(r) that satisfy it. This turns out to be a bad practice leading to numerical inaccuracies <cit.>, because physical wave functions at arbitrary energies have varying boundary conditions which are better covered by a basis without fixed logarithmic derivative at the boundary. Using the Bloch operator, eq. (<ref>) can be written as follows: [ H_ℓ- E + ℒ(B) ] u_ℓ(k,r) = ℒ(B) u_ℓ(k,r), This is a linear differential equation where the Green's function, G_ℓ(r,r') of the differential operator, [H_ℓ-E+ℒ(B) ], must satisfy the following relation, per definition: [H_ℓ-E+ℒ(B) ] G_ℓ(r,r') = δ(r-r'). The Green's function can be approximated in the internal region by an expansion over a finite set of the basis functions: G_ℓ(r,r') = ∑_i,j=1^N φ_i(r) [C(E,B)^-1]_ijφ_j(r') 0 ≤ r,r' ≤ a and with the matrix elements of C given by: C_ij(E,B) = ∫_0^a φ_i(r) [ H_ℓ - E + ℒ(B) ] φ_j(r) dr . The expression for the Green's function would be exact, if it were expanded over a complete basis. The wave function can be determined with the Green's function <cit.>: u_ℓ(k,r) = ∫_0^∞ G_ℓ(r,r') ℒ(B) u_ℓ(r') dr'. Using eq. (<ref>) as an operator of the variable r' in eq. (<ref>), the following is obtained: u_ℓ(k,r) = [ d u_ℓ(k,a)/dr - B/a u_ℓ(k,a) ] ħ^2/2 μ G_ℓ(r,a) = [ d u_ℓ(k,a)/dr - B/a u_ℓ(k,a) ] ħ^2/2 μ∑_i,j=1^N φ_i(r) [C(E,B)^-1]_ijφ_j(a), 0 ≤ r ≤ a . This expression can be simplified by introducing the R-matrix, which is defined at the boundary, r = a, as follows: u_ℓ(k,a) = [ a du_ℓ(k,a)/dr - B u_ℓ(k,a) ] R_ℓ(E,B). Comparison with eq. (<ref>) gives the relation between the R-matrix and the Green's function: R_ℓ(E,B) = ħ^2/2 μ a G_ℓ(a,a). By eq. (<ref>), the R-matrix is then given by: R_ℓ(E,B) = ħ^2/2 μ a∑_i,j=1^N φ_i(a) [C(E,B)^-1]_ijφ_j(a). Since the operator, [H_ℓ+ℒ(B) ], is diagonalisable and admits a complete set of eigenvectors for any B in the space of the basis functions, the inverse of matrix C(E,B) can be expanded in the finite spectral decomposition of N eigenvalues: [ C(E,B)^-1]_ij = ∑_n=1^N v_nℓv_nℓ^T/E_nℓ-E. 
The following familiar expression for the R-matrix is then obtained: R_ℓ (E,B) = ∑_n=1^N γ_nℓγ_nℓ^T/E_nℓ-E, with γ_nℓ = √(ħ^2/2 μ a)∑_i=1^N v_nℓ,iφ_i(a), where v_nℓ,i is the i^th component of v_nℓ. The R-matrix can then be calculated in two ways: firstly, by finding the inverse of C(E,B) of eq. (<ref>) at a specific E and using the result in eq. (<ref>). Or, secondly, by finding the eigenvalues, E_nℓ, and eigenvectors, v_nℓ, of C(0,B) and using eq. (<ref>). The second method is preferred in this study, as the R-matrix for all complex E (and thus k) is obtained by calculating eigenvalues and -vectors only once. Both methods were, however, attempted and give identical numerical accuracy, with insignificant difference in computational time. The R-matrix is entire in E, apart from the N simple poles at E=E_nℓ. For real B, these poles will be real, and for complex B, these poles will also be complex. Using eq. (<ref>), the wave function in the interior is given in terms of the R-matrix and the wave function at the boundary: u_ℓ(k,r) = u_ℓ(k,a)/R_ℓ(E,B)ħ^2/2 μ a∑_i,j=1^N φ_i(r) [C(E,B)^-1]_ijφ_j(a), 0 ≤ r ≤ a . The normalisation of the wave function in the interior will be fixed by the normalisation of the wave function at the boundary, which will be approximated by the asymptotic behaviour of the wave function <cit.>: u_ℓ(k,r) I_ℓ(kr,η) - S_ℓ(k) O_ℓ(kr,η). The wave function and its derivative at the boundary are then given by: u_ℓ(k,a) = I_ℓ(ka,η) - S_ℓ(k) O_ℓ(ka,η), d/dr u_ℓ(k,a) = d/dr I_ℓ(ka,η) - S_ℓ(k) d/dr O_ℓ(ka,η). Using eqs. (<ref>) and (<ref>) in eq. (<ref>) gives the following approximate expression for the S-matrix in terms of the R-matrix <cit.>: S_ℓ(k) = I_ℓ(ka,η)/O_ℓ(ka,η)1- [ L^*_ℓ(k)-B ] R_ℓ(E,B) /1- [ L_ℓ(k)-B ] R_ℓ(E,B) , where L_ℓ(k) is the logarithmic derivative at the channel radius a, defined by: L_ℓ(k) = a/O_ℓ(ka,η)dO_ℓ(ka,η)/dr . Having established eq. (<ref>) for the S-matrix, eq. (<ref>) can now be used to fix the absolute normalisation of the wave function in the interior region by imposing its continuity at the boundary. Substituting it into eq. (<ref>) with eq. (<ref>) for the S-matrix, the following approximate expression for the wave function in the internal region in terms of the R-matrix is obtained: u_ℓ(k,r) = -ikħ^2/μ O_ℓ(ka,η)1/1- [ L_ℓ(k)-B ] R_ℓ(E,B) ∑_i,j=1^N φ_i(r) [C(E,B)^-1]_ijφ_j(a), 0 ≤ r ≤ a . It should be stressed that this normalization does not guarantee the continuity of the derivative of the wave function at the boundary, which is only reached at the infinite basis limit. As shown in Ref. <cit.>, the wave function and the S-matrix are, remarkably, independent of the choice of boundary parameter, B. A choice of B=0 is usual in the R-matrix theory, or another real value which results in a real R-matrix in the usual sense. However, a complex choice of B = L_ℓ(k) is particularly useful. It results in a complex extension of the usual R-matrix: see Refs. <cit.> for example. The following simple S-matrix is obtained if B = L_ℓ(k): S_ℓ(k) = I_ℓ(ka,η)/O_ℓ(ka,η){ 1- [ L^*_ℓ(k)-L_ℓ(k) ] R_ℓ[E,L_ℓ(k)] } . In this case, the complex R-matrix poles correspond exactly with the S-matrix poles, since the factor I_ℓ(ka,η)/O_ℓ(ka,η) is finite for all k. It must be stressed that the S-matrix poles are the same for any choice of B, but the R-matrix poles depend on the choice of B. 
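For ℓ=0, η=0 and B=0 the chain R(E) → S(k) reduces to S_0 = e^{-2ika}(1+ikaR)/(1-ikaR). The sketch below obtains R(E) by direct integration of the internal solution, rather than from a Lagrange basis, purely to illustrate the matching relations; the Gaussian potential and the units ħ=μ=1 are assumptions of the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def V(r):
    return -2.0 * np.exp(-r**2)       # hypothetical short-range potential, hbar = mu = 1

def r_matrix(E, a=8.0):
    # R(E, B=0) = u(a) / [a u'(a)] from the internal solution (l = 0);
    # a Lagrange-basis expansion would give the same number through the C matrix.
    rhs = lambda r, y: [y[1], 2.0 * (V(r) - E) * y[0]]
    sol = solve_ivp(rhs, (1e-6, a), [1e-6, 1.0], rtol=1e-10, atol=1e-12)
    u, up = sol.y[:, -1]
    return u / (a * up)

def s_matrix(k, a=8.0):
    R = r_matrix(0.5 * k**2, a)
    L = 1j * k * a                    # L_0(k) = a O'/O with O = exp(ikr), eta = 0
    return np.exp(-2j * k * a) * (1.0 - np.conj(L) * R) / (1.0 - L * R)

k = 0.6
S = s_matrix(k)
print("S_0 =", S, "  |S_0| =", abs(S), "  delta_0 =", 0.5 * np.angle(S))
```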
§.§ The R-matrix and Jost function The Jost function calculated with the R-matrix method is based on the approximation of the wave function expanded over an appropriate finite basis, u_ℓ(k,r) given by eq. (<ref>). This wave function and the regular solution, ϕ_ℓ(k,r), can only differ by an energy-dependent factor proportional to the Jost function. When considering the asymptotic behaviour of ϕ_ℓ(k,r) and u_ℓ(k,r), given in eq. (<ref>) and eq. (<ref>) respectively, also using eq. (<ref>) for the S-matrix, the following relation is obtained, which must hold for all r: ϕ_ℓ(k,r) = i/2f_ℓ(k) u_ℓ(k,r). By the behaviour of the regular solution near the origin, eq. (<ref>), the Jost function can then be determined with: f_ℓ(k) = -2i lim_r → 0F_ℓ(kr,η)/u_ℓ(k,r) = -2i lim_r → 0C_ℓ(η) (kr)^ℓ+1/u_ℓ(k,r). Using eq. (<ref>), the approximation for u_ℓ(k,r) in the internal region, results in the following expression for the Jost function: f_ℓ(k) = C_ℓ(η) 2 k^ℓμ/ħ^2 O_ℓ(ka,η) [1-(L_ℓ(k)-B)R_ℓ(E,B) ] 1/χ(B), with χ(B) = ∑_i,j=1^N [ lim_r→ 0φ_i(r)/r^ℓ+1] [ C(E,B)^-1]_ijφ_j(a). To accurately determine the Jost function, calculations need to be performed with an appropriate set of basis functions, which will determine the exact behaviour of the limit in eq. (<ref>), so that it exists. This choice of basis function is discussed in the next section. Using eq. (<ref>), the following expression for f_ℓ(-k) is obtained: f_ℓ(-k) = C_ℓ(η) 2 k^ℓμ/ħ^2 I_ℓ(ka,η) [1-(L^*_ℓ(k)-B)R_ℓ(E,B) ] 1/χ(B). Substituting eqs. (<ref>) and (<ref>) into eq. (<ref>) recovers eq. (<ref>), as expected. For the same choice of B, the poles of the R-matrix and of the function, χ(B) must be the same, since both are functions of the inverse of C(E,B). By the spectral decomposition given in eq. (<ref>), the poles of the R-matrix and of χ(B) are simply the eigenvalues, E_nℓ of C(0,B). Furthermore, the Jost function is completely independent of the choice of B, like the S-matrix. This is not obvious from eq. (<ref>). It can be proven using eq. (B3) in Appendix B of Ref. <cit.>, but it also follows from the fact that the wave function used in eq. (<ref>) is independent of B <cit.>. For simplicity, all calculations of the Jost function are performed with the simplest choice of B=0: f_ℓ(k) = C_ℓ(η) 2 k^ℓμ/ħ^2 O_ℓ(ka,η) [1-L_ℓ(k)R_ℓ(E,0) ] 1/χ(0). For a choice of B = L_ℓ(k), the Jost function no longer depends on the R-matrix, and it can be calculated with: f_ℓ(k) = C_ℓ(η) 2 k^ℓμ/ħ^2 O_ℓ(ka,η) 1/χ[L_ℓ(k)] . In this expression, the zeros of the Jost function are only due to the poles of χ[L_ℓ(k)], which are the same as the complex R-matrix poles for B = L_ℓ(k). These are, of course, the k or E that correspond with Siegert states and Siegert pseudostates. Eq. (<ref>) is less useful for practical calculations, since χ[L_ℓ(k)] is numerically more difficult to calculate than χ(0). §.§ R-matrix on a shifted Lagrange-Jacobi mesh Performing R-matrix calculations on a Lagrange mesh involves calculating the integral in eq. (<ref>) using Gauss quadrature. This is an approximation where the definite integral over the function is replaced by a weighted sum of function values at specific points, r_i in the interval 0 to a. Comprehensive details on calculating the R-matrix using the Lagrange-Jacobi mesh can be found in Refs. <cit.>. Only the main equations are given here. The N mesh points, r_i, where i=1,2,...,N over the interval [0,a] are obtained by finding the N zeros of the shifted Jacobi polynomial: P^(0,β)_N ( 2r_i/a - 1 ) = 0. 
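In practice the mesh points can be taken directly from a Gauss-Jacobi routine; a minimal sketch (the call is SciPy's roots_jacobi, the parameter choices are arbitrary):

```python
import numpy as np
from scipy.special import roots_jacobi

def jacobi_mesh(N, a, ell):
    """Mesh points r_i in (0, a): zeros of P^(0, beta)_N(2r/a - 1) with beta = 2*ell,
    taken from SciPy's Gauss-Jacobi nodes on (-1, 1)."""
    x, _ = roots_jacobi(N, 0.0, 2.0 * ell)
    return a * (x + 1.0) / 2.0

print(jacobi_mesh(5, 5.0, 1))     # five p-wave mesh points on (0, 5)
```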
The corresponding N shifted Lagrange-Jacobi functions regularised by r/a are defined by: φ_i(r) = (-1)^N-i√(a-r_i/a r_i)P^(0,β)_N (2r/a-1 ) /r-r_i a^-β/2 r^β/2+1, with φ_i(r_j) = λ_i δ_ij where λ_i are the weights associated with the Gauss quadrature. For β = 2ℓ, the basis functions have the correct behaviour at the origin. Furthermore, the limit in eq. (<ref>) exists, and is given as follows: lim_r→ 0φ_i(r)/r^ℓ+1 = (-1)^i+1√(a-r_i/ar_i^3)(N+2ℓ)!/N!(2ℓ)!1/a^ℓ. Using the Gauss-Jacobi quadrature associated with the mesh, the basis functions are orthonormal: ∫_0^a φ_i(r) φ_j(r) dr Gauss=δ_ij. When the Gauss approximation is also used for the potentials terms, the matrix elements of C(E,B) are given explicitly as follows <cit.>: C_ij(E,B) = T^0_ij +ℒ_ij(B)+ [ ħ^2/2μℓ(ℓ+1)/r_i^2 +V_S(r_i)+V_C(r_i) -E ]δ_ij, where the Bloch operator matrix elements, ℒ_ij(B), are given by: ℒ_ij(B) = ℒ_ij(0) - ħ^2/2μB/aφ_i(a) φ_j(a). The matrix elements T_ij^0 +ℒ_ij(0) are given in Ref. <cit.>. They read explicitly, for i=j, T^0_ij+ℒ_ij(0)= 1/24r_i(a-r_i)[ 4(2N+β+1)^2-3β^2+8-β^2-4/r_ia -20/a-r_ia ], and, for i j, T^0_ij+ℒ_ij(0)= (-1)^i-j+1/2 √(r_i r_j(a-r_i)(a-r_j)) ×[ N(N+β + 1)+β/2+1+ a r_i+a r_j -2r_i r_j/(r_i-r_j)^2 -a/a-r_i -a/a-r_j]. For ℓ=0, the shifted Lagrange-Jacobi mesh becomes exactly the shifted Lagrange-Legendre mesh <cit.>. The Lagrange-Legendre mesh was used in Ref. <cit.>, where it was shown that using the Gauss approximation does not affect the accuracy of the R-matrix method. §.§ R-matrix methods of calculating Siegert states and pseudostates As shown in Ref. <cit.>, the method of Refs. <cit.> and the R-matrix formalism correspond exactly for B=L_ℓ(k) defined in eq. (<ref>). The Siegert condition, eq. (<ref>), is met with such a choice of B. As required, a purely outgoing wave function for r ≥ a is obtained. The Siegert state energies and Siegert pseudostate energies are the eigenvalues, E_nℓ, of the matrix, C(0,B), of eq. (<ref>) with B=L_ℓ(k_nℓ). Calculating these eigenvalues is not as simple as for a choice of B=0, since E_nℓ is embedded in L_ℓ(k_nℓ) via k_nℓ^2 = 2μ E_nℓ/ħ^2. For ℓ = 0 and η = 0, the external logarithmic derivative is given simply by L_0 = ika. Using eq. (<ref>) and eq. (<ref>), the following eigenvalue equation is constructed: [ C_ij(0,0) - i ħ^2/2μ k_n0φ_i(a) φ_j(a) -ħ^2/2 μ k_n0^2 δ_ij]v_n0 = 0. Solving this for all k_n0 will give the Siegert states and Siegert pseudostates. This quadratic matrix eigenvalue equation can be written as an easily-solvable generalized eigenvalue problem of double the size <cit.>. This method, that we refer to as the Siegert method, is used in Ref. <cit.> for the Bargmann potential. For ℓ>0 and η≠ 0, however, such an algebraic method does not exist. Ref. <cit.> suggests using an iterative method, where initially B=0. Using eq. (<ref>), one of the resulting eigenvalues of C(0,0) for the first iteration, E^(1)_nℓ, is then used in B=L_ℓ(k_nℓ^(1)) with k_nℓ^(j)=±√(2μ E_nℓ^j/ħ^2) for j=1,2,.... The relevant eigenvalue, E^(2)_nℓ of C(0,L_ℓ(k_nℓ^1)) is then used in the next iteration. The calculation is repeated until the eigenvalue converges for some j. This simple scheme is in fact a standard method for calculating bound states with the R-matrix formalism and can be used in calculating resonance parameter as well <cit.>. It will be referred to as the iterative R-matrix method. Since the Siegert condition, eq. 
(<ref>), holds for states where the Jost function is zero, an alternative method for finding the Siegert states and Sigert pseudostates, which we will call the Jost method, simply involves finding the zeros of the calculated Jost function for B=0. The Siegert state and Siegert pseudostate energies for the other two methods are eigenvalues of C(0,L_ℓ(k)), with B=L_ℓ(k). By the spectral decomposition, eq. (<ref>), these energies correspond with poles of C(E,L_ℓ(k))^-1. By eq. (<ref>), these energies or corresponding k result in zeros of the Jost function for B=L_ℓ(k), since the poles of C (E,L_ℓ(k))^-1 lead to poles of χ(L_ℓ(k)). However, the Jost function is independent of B and it will have the same zeros for the choice of B=0. The Jost method is thus equivalent to the other two methods, and it can be used for any ℓ and η, unlike the Siegert method. Finding poles of the S-matrix is, of course, equivalent to finding zeros of the Jost function. Finding poles of the S-matrix is done in Ref. <cit.>, for example, to locate Siegert states. § TEST POTENTIALS Three single-channel potentials with known, exact expressions for the Jost functions will be considered, to test the accuracy of eq. (<ref>). For simplicity, all three potentials will not include Coulomb interactions (η=0), and ħ = μ = 1. §.§ Potential 1 where ℓ=0 with bound or virtual state The first potential that will be used is the Eckart potential <cit.>, which falls under a class of potentials derived by Bargmann <cit.> and is thus often referred to as the Bargmann potential instead. It is derived from supersymmetric transformations in Ref. <cit.>. In its simplified form, it is given by: V(r) = -4 κ_0^2 β_V e^-2 κ_0 r/[1+β_Ve^-2 κ_0 r]^2, β_V=κ_0+κ_1/κ_0-κ_1>1, where κ_0 and κ_1 are real quantities, with κ_0>0. In order for the potential not to have a singularity, it is required that κ_0 > κ_1. The value of the potential at the origin is: V(0) = 2 (κ_1^2-κ_0^2)<0. The corresponding Jost function is given by: f_0(k) = k-iκ_1/k+iκ_0. A bound or virtual state occurs for k=iκ_1 (a zero of the Jost function), with κ_1>0 for a bound state and κ_1<0 for a virtual state. A Jost-function pole occurs at k=-iκ_0, where it is required that κ_0 > 0. This potential is identical to the one used in Ref. <cit.>, with κ_0 = b and κ_1 = -c. The exact expression for the phase shift for this potential is given by eq. (4.16) of Ref. <cit.>. §.§ Potential 2 where ℓ=0 with resonance For the second potential, we will use eqs.(4.47)-(4.51) of Ref. <cit.>. There was an error in the signs of ζ_i, which has been corrected here: V(r) = (κ_0^2-κ_1^2) [κ_1^2sinh^2(κ_0 r+ζ_0) -κ_0^2sinh^2(κ_1 r+ζ_1 )]/[κ_1sinh(κ_0 r+ζ_0)cosh(κ_1 r+ ζ_1 )- κ_0sinh(κ_1 r+ζ_1)cosh(κ_0 r+ ζ_0)]^2, where the real positive constants, ζ_i are defined by eq. (4.46) of Ref. <cit.>, where a further error in the first expression has been corrected: ζ_i = arctanhκ_iα + arctanhκ_iα^* = arctanh 2 α_R κ_iκ_i^2 + |α|^2. The parameters κ_0 and κ_1 are real and positive while α = α_R + i α_I is complex and α_R and α_I are real. Assuming κ_1 > κ_0, the potential is not singular if κ_0 tanhζ_1 > κ_1 tanhζ_0. The value of the potential at the origin cannot be given in terms of the parameters with such a simple expression as for potential 1. The corresponding exact Jost function reads: f_0(k) = (k+α)(k+α^*)/(k+κ_0)(k+κ_1) . There is thus a resonance at k=-iα (with a mirror-resonance at k=-iα^*) and two Jost-function poles at k=-iκ_0 and k=-iκ_1. 
The exact expression for the phase shift for this potential is given by eq. (4.51) of Ref. <cit.>. §.§ Potential 3 where ℓ=1 with resonance This potential is not given in Ref. <cit.> and will therefore be discussed in more detail. It is an extension of the second potential, but with ℓ=1. It is also derived from supersymmetric transformations and has the same expression for the exact Jost function as potential 2. The phase shifts for both potentials are consequently also the same. The potential is given by: V(r) = -2 ^2/ r^2lnW[(1+1/α r) e^-α r, (1+1/α^* r) e^-α^* r, . . cosh(κ_1 r)-sinh(κ_1 r)/κ_1 r, cosh(κ_2 r)-sinh(κ_2 r)/κ_2 r] = - ^2/ r^2lnW[r, e^-α r, e^-α^* r, sinh(κ_0 r), sinh(κ_1 r)] -1/r^2 = - ^2/ r^2lnW[r + 2α_R/|α| ^2, sinh(κ_0 r -ζ_0), sinh(κ_1 r -ζ_1)] -1/r^2. Note that the centrifugal term, T_1=1/r^2, is incorporated in the Wronskian, W(r), in eqs. (<ref>) and (<ref>) and is thus subtracted in both these expressions. Furthermore, the exponentials in the Wronskian allows one to reduce the order between eq. (<ref>) and eq. (<ref>). For this potential to be short-ranged (except for the centrifugal term) and for the corresponding phase shift to satisfy the corresponding effective-range expansion, the parameters have to satisfy the additional condition: 0 = 1/κ_0+1/κ_1-1/α-1/α^* = 1/κ_0+1/κ_1-2α_R/|α| ^2. At small r, numerical instabilities leading to quasi singularities occur, when using any of the three equivalent expressions, eqs. (<ref>)-(<ref>). In eq. (<ref>), this is due to denominators in various terms containing a factor r. In eqs. (<ref>) and (<ref>), it is due to the centrifugal term being embedded in the Wronskian. However, it is known that the potential is finite at r=0 if the condition, eq. (<ref>) is met. A possible way of dealing with this numerical instability, which is applied in this study, is to use eq. (<ref>) and first to apply the sum-of-angles relation for sinh: sinh(κ_i r -ζ_i) = sinh(κ_i r) cosh(ζ_i) - cosh(κ_i r) sinh(ζ_i), and then to approximate the hyperbolic trigonometric functions with the first m terms of their Taylor expansions: sinh(κ_i r) ≈∑_n=0^m-1(κ_i r)^2n+1/(2n+1)!cosh(κ_i r) ≈∑_n=0^m-1(κ_i r)^2n/(2n)!. This approximation for the potential was used for r<0.05, with m=6. In particular, the value of the potential at the origin can be determined, but there is no simple expression for V(0). § RESULTS §.§ Potential 1 The ℓ=0 potential with bound state, eq. (<ref>), is used with parameters κ_0=2 and κ_1=1, which corresponds exactly with the Bargmann potential used in Ref. <cit.>. Since the Lagrange-Jacobi mesh with ℓ=0 reduces to the Lagrange-Legendre mesh, the phase shifts reported in Ref. <cit.> could be reproduced for a=5 and N=25. In our Jost-function calculations, the same channel radius of a=5 is chosen, since the potential is approximately zero for r ≥ 5. For a reference value of k=0.5, the number of mesh points, N is steadily increased until values for the calculated Jost function converge. This occurs at N=40, with an accuracy of 8 digits. For different k, convergence of the calculated Jost function occurs for different N, but for simplicity all calculated values are given for the same choice of a=5 and N=40, unless otherwise stated. Figure <ref> shows the exact Jost function, eq. (<ref>) as well as the Jost function calculated from the R-matrix using eq. (<ref>), in the complex k-plane. The most striking feature is the line of Siegert pseudostates that clearly appear around k=-2i in the plot of the calculated Jost function. 
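The small-r regularisation described above is straightforward to implement. The following Python fragment is an illustrative sketch (not the code used for the calculations in this paper): it expands sinh(κ_i r − ζ_i) with the sum-of-angles relation, truncates the series in κ_i r after m terms, and also checks the parameter condition of eq. (<ref>).

import math

def sinh_series(x, m=6):
    # first m terms of the Taylor series of sinh(x)
    return sum(x**(2 * n + 1) / math.factorial(2 * n + 1) for n in range(m))

def cosh_series(x, m=6):
    # first m terms of the Taylor series of cosh(x)
    return sum(x**(2 * n) / math.factorial(2 * n) for n in range(m))

def sinh_shifted_small_r(kappa, r, zeta, m=6):
    # sinh(kappa*r - zeta) = sinh(kappa*r)*cosh(zeta) - cosh(kappa*r)*sinh(zeta),
    # with the kappa*r factors replaced by truncated series (used in the text for r < 0.05, m = 6)
    x = kappa * r
    return sinh_series(x, m) * math.cosh(zeta) - cosh_series(x, m) * math.sinh(zeta)

def check_potential3_parameters(kappa0, kappa1, alpha, tol=1e-10):
    # short-range condition for potential 3: 1/kappa0 + 1/kappa1 = 2*Re(alpha)/|alpha|^2
    return abs(1.0 / kappa0 + 1.0 / kappa1 - 2.0 * alpha.real / abs(alpha)**2) < tol

For the parameter set used in the calculations below (κ_0=1.5, κ_1=9.75, α=0.1+0.5i), check_potential3_parameters returns True, i.e. the condition of eq. (<ref>) is satisfied.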
The pseudostates, which have regular spacing between them, replace the exact Jost-function pole at k=-2i, which does not appear in the calculated Jost function. Figure <ref> shows the corresponding exact and calculated S-matrix using eq. (<ref>), where the false S-matrix pole at k=2i appears in the exact plot and the more complicated analytic structure is clearly seen. For increasing a, the number of Siegert pseudostates in an interval of Re(k) increases, and the spacing between states decreases. The Siegert pseudostates are thus strongly dependent on a. For increasing N, the accuracy of the Siegert pseudostates increases, until some point of convergence in the values for a number of digits is reached. In the vicinity of the pseudostates, specifically below k=-2i, the accuracy of the calculated Jost function is no longer reliable, which is also clear from table <ref>. Table <ref> shows specific values of the calculated Jost function. Only values with Re(k) > 0 are given, since the calculated and exact Jost functions are symmetrical around the real k-axis. If | k | is large, the wave function becomes strongly oscillating and inaccuracies are to be expected for any numerical calculation. Values of k with -5 ≤Re(k) ≤ 5 and -5 ≤Im(k) ≤ 5 are thus chosen. Apart from the Jost-function pole and Siegert pseudostates, the calculated Jost-function values are independent of a, up to an increasing accuracy with increasing a, provided that N is big enough to reach convergence. The calculated Jost-function value at the exact Jost-function pole of k=-2i is dependent on the choice of a. For a larger choice of a, the calculated value is larger, as shown in table <ref>. The increase is approximately linear with increasing a. The calculated Jost function at k=1+i and k=1-2i are also given in table <ref>, for reference. The first value is independent of a, up to an increasing accuracy with increasing a. The second value, which is in the vicinity of the pseudostates, fluctuates for increasing a, since the pseudostates also shift for increasing a. The value of N must be increased with increasing a in order for the calculated values to converge. For a≥ 6, the functions I_ℓ(ka,η) and O_ℓ(ka,η) oscillate strongly with respect to k. For k=-2i in particular, the functions grow and shrink exponentially with respect to increasing a. This causes numerical difficulties, which can partly be addressed by using the asymptotic approximations for I_ℓ(ka,η) and O_ℓ(ka,η) given in Ref. <cit.>, for example. There are still exponentially increasing factors in eq. (<ref>), however, which is why convergence in the Jost-function value at k=1-2i and k=-2i is no longer reached even for a very large number of meshpoints, N. The Jost-function zero, which is a Siegert state corresponding to a bound state in this case, appears at k=i for the calculated Jost function. Other than the pseudostates, it is independent of a, up to an increasing accuracy with increasing a, provided that a is again chosen large enough for the potential to be approximately zero for r ≥ a. Finding the S-matrix poles or calculating the Jost-function zeros to locate the bound state gives exactly the same result, as expected, with an accuracy of 8 significant figures. The zeros of the Jost function were determined using a Newton-Raphson iterative method with various arbitrary starting values of k. Poles of the S-matrix were similarly determined using a Newton-Raphson method to find zeros of 1/S_ℓ(k). 
However, the calculated S-matrix has a more complicated structure than the calculated Jost function, as seen in figure <ref>. When using the Newton-Raphson method to find the zeros of the inverse S-matrix, starting values need to be chosen much closer to the actual zeros in order for the method to converge. Finding the Jost-function zeros proved to be easier, as the Newton-Raphson method converged quickly for most starting values. Apart from the bound state, which is the Siegert state, the Siegert pseudostates were also calculated by finding the zeros of the calculated Jost function. Table <ref> gives a few of the Siegert pseudostates obtained by solving the eigenvalue eq. (<ref>) using the algebraic method of Ref. <cit.>, with the results from finding the zeros of the calculated Jost function, eq. (<ref>). Both methods give similar accuracy when compared to the Lagrange-Legendre mesh calculations of Ref. <cit.>, where a=5 and N=25. However, accuracy is better for the choice of a=5 and N=40 in the current work, due to the larger number of mesh points. The values of Ref. <cit.> given in table <ref> were calculated by solving eq. (49) of that paper. This equation is obtained from the approximate behaviour of the wave functions of the Siegert pseudostates for the Bargmann potential truncated at r=a, and is only applicable to situations where ℓ=0. The simple iterative R-matrix procedure where B=L_ℓ(k), described in Section <ref>, was also attempted to calculate the Siegert pseudostates, without success. A virtual state can be created with potential 1 if κ_1<0. In particular, the values (κ_0,κ_1)=(2,-0.5) and (κ_0,κ_1)=(2,-1) have been tested. The Siegert method, determining eigenvalues, k_nℓ, of eq. (<ref>), gives almost identical results to finding the zeros of the Jost function using eq. (<ref>). These values are within an accuracy of around 5 digits. The simple R-matrix iterative method setting B=L_ℓ(k) also gives good results, but convergences only occurs after a large number of iterations. In conclusion, finding the zeros of the Jost function consistently gives readily converging results with good accuracy for both Siegert states (bound and virtual) and Siegert pseudostates. §.§ Potential 2 Parameters for potential 1 were chosen to correspond with the Bargmann potential used in Ref. <cit.>, so that results could easily be compared. No other study (to our knowledge) has used potential 2 and 3 in calculations. The parameters, α, κ_0 and κ_1, are chosen to be the same for both potential 2 with ℓ=0 and potential 3 with ℓ=1. This means they will have exactly the same Jost function, S-matrix, resonance, and Jost-function poles. This simplifies the comparison of results. The parameter, α, is firstly chosen to result in a relatively narrow resonance occurring at an experimentally attainable energy. The main restriction is then condition (<ref>) of potential 3. For the resonance to have a positive real part of the energy, α_R < α_I. Also, κ_0, κ_1, α_R and α_I must be positive. Furthermore, it is known that the calculated Jost function is inaccurate in the vicinity of an exact Jost-function pole, from the results of potential 1. If α_R is chosen too close to κ_0 and κ_1, we will not be able to accurately locate the resonance at k=-iα with the R-matrix calculations. A choice of κ_0 = 1.5, κ_1= 0.75 and α = 0.2+0.4i satisfies all the requirements, for example, and we were able to calculate the Jost function from the R-matrix for such a choice. 
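The complex root search itself is elementary and can be illustrated with a short sketch (again in Python and purely illustrative, not the code used here): a Newton-Raphson iteration in the complex k-plane with a finite-difference derivative, applied for definiteness to the exact Jost function of potential 1 as a stand-in for the R-matrix-calculated one.

def newton_complex(f, k0, tol=1e-10, max_iter=100, h=1e-6):
    # Newton-Raphson iteration for a zero of the complex analytic function f(k)
    k = complex(k0)
    for _ in range(max_iter):
        fk = f(k)
        dfk = (f(k + h) - f(k - h)) / (2.0 * h)   # finite-difference derivative
        step = fk / dfk
        k -= step
        if abs(step) < tol:
            return k
    raise RuntimeError("Newton-Raphson did not converge")

# stand-in for the calculated Jost function: the exact f_0(k) of potential 1
def f_exact(k, kappa0=2.0, kappa1=1.0):
    return (k - 1j * kappa1) / (k + 1j * kappa0)

print(newton_complex(f_exact, k0=0.5 + 0.5j))   # converges to ~1j, the bound-state Siegert state

In the actual calculations, f_exact would be replaced by the Jost function evaluated from the R-matrix through eq. (<ref>), so that the returned roots are the Siegert states and Siegert pseudostates discussed above.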
However, better accuracy for the resonance calculation was obtained with larger values of κ_1, so that the Jost-function poles are further from the resonance. Due to the condition (<ref>), a larger κ_1 requires an adjustment in the choice of α as well. The results for potential 2, eq. (<ref>), for κ_0=1.5, κ_1=9.75 and α=0.1+0.5i are reported here. The potential has a value of V(0)=-97.7925 at the origin for this choice. Figure <ref> shows plots of both potential 2 and potential 3 for the same parameters. The Jost-function values at various complex k are given in table <ref>, where the calculated Jost-function values were obtained with a=5 and N=70. Figure <ref> shows the exact Jost function with the calculated Jost function for potential 2, where Siegert pseudostates around the first pole with k=-1.5i are again present for the calculated Jost function. The calculated Jost-function values near the Siegert pseudostates are, once again, not accurate and the Jost-function pole is once again absent in the calculated Jost function. The calculated Jost function also becomes very unstable for values of Im(k) lower than the first pole at k=-iκ_0, the further one moves from the pole along the negative Im(k) axis. The plots of Figure <ref> are thus scaled so that the second pole with k=-9.75i is not visible, due to the instability in the calculated Jost function in this region. There is agreement with the Siegert pseudostates calculated from finding the eigenvalues of eq. (<ref>) and by finding zeros of eq. (<ref>). The first few Siegert pseudostates are given in table <ref>. The characteristic line of pseudostates that appear around the exact Jost-function pole for potential 1, appears around the first pole of potential 2. However, neither the Jost method nor the Siegert method of finding the eigenvalues of eq. (<ref>) gives any distinct line of pseudostates around the second exact Jost-function pole for potential 2. No pseudostates appear around the second pole for parameters where the poles are closer together and nearer the origin, either, such as α = 0.2+0.4i, κ_0 = 1.5 and κ_1= 0.75. This is because the calculated Jost function is not accurate below the first pole, in the k-plane. The exact resonance is located at k=-iα. As can be seen in the first value of table <ref>, very similar results with an accuracy of 5 digits was obtained for a=5 and N=70 with the Siegert method, where the eigenvalues of (<ref>) are calculated, and by finding the zeros of the calculated Jost function, eq. (<ref>). Similar accuracy was obtained with the iterative R-matrix method, letting B=L_ℓ(k). There is once again a strong dependence of the calculated Jost function on the channel radius, a, at an exact Jost-function pole. The calculated Jost-function value becomes larger as a is increased, but, as with potential 1, the value does not converge for larger a due to the inaccuracy around the nearby pseudostates. §.§ Potential 3 The ℓ=1 potential with resonance, eq. (<ref>), is used with the same parameters as potential 2: κ_0=1.5, κ_1=9.75 and α=0.1+0.5i, giving V(0)=-32.5975. The potential is plotted in Figure <ref>, with potential 2. The results for the Jost functions at various complex k are given in table <ref>. For these results, a=5 and N=50 in the R-matrix calculations. The plot of the exact Jost function compared with the calculated Jost function for potential 3 is almost identical to that of potential 2, given in Figure <ref>. This is to be expected, since a=5 for the calculations with both potentials. 
For this reason, the plot of the Jost functions for potential 3 is not shown. As with potential 1 and 2, the calculated Jost-function values near the Siegert pseudostates are not accurate and the exact Jost-function pole is absent in the calculated Jost function. As before, there is a strong dependence of the calculated Jost function on the channel radius, a, at the exact Jost-function pole. For increasing a, the calculated Jost-function value at the exact Jost-function pole becomes larger and pseudostates are closer together. The Siegert pseudostates cannot be determined from finding the eigenvalues of eq. (<ref>), since ℓ 0 in this case. They could be determined by finding zeros of the calculated Jost function, and the first few are given in table <ref>. The exact resonance is also given by k=-iα. An accuracy of 5 digits was obtained for a=5 and N=50 by finding the zeros of the calculated Jost function (the value also appears in table <ref>), with similar accuracy from the iterative R-matrix method. In fact, this accuracy is obtained with both methods for N=50, which is an improvement on the situation for potential 2, where a larger number of mesh points is necessary. This is because potential 2 has a large negative value at the origin, requiring a larger mesh to reflect the sharply-changing behaviour of the potential accurately. There is a small but significant difference in the Siegert pseudostates for potential 2 and 3, as can be seen when comparing the values of tables <ref> and <ref>. It is known that the Siegert pseudostates depend on a, which is chosen to be the same for both potentials. The choice of N does not significantly affect the Siegert pseudostates, provided it is large enough. The difference is thus due to the ℓ dependence. § CONCLUSION A method for determining the Jost function from R-matrix calculations on a Lagrange-Jacobi mesh for any partial wave is developed. The focus of the study is to analyse the behaviour of this calculated Jost function in the complex k-plane, specifically its relation to the Siegert states and Siegert pseudostates. The accuracy of the calculated Jost function is, firstly, a reflection of the reliability of the R-matrix method since inaccuracies in the calculated Jost function are due to inaccuracies in the R-matrix. The zeros of the calculated Jost function correspond with the Siegert states and the Siegert pseudostates. Poles on the negative imaginary axis of k that may exist for the exact Jost function, do not appear in the calculated Jost function. Poles nearest to the origin are replaced by a distinct horizontal line of equally-spaced Siegert pseudostates that occur around the point where the pole should be. The calculated Jost function is reliable for all k (provided |k| is not too large) apart from intervals around the Siegert pseudostates, specifically below the position of an exact Jost-function pole. A further important part of this study is to accurately determine the values of k that correspond with Siegert states (bound, virtual or resonance states) and Siegert pseudostates. Three methods are used. The first method, proposed here, is by finding the zeros of the Jost function calculated with the R-matrix on a Lagrange-Jacobi mesh. This method is simple to apply and gives excellent results for all the test potentials, and there is no restriction on its application. Finding the zeros of the calculated Jost function is identical to finding the poles of the S-matrix from R-matrix calculations. 
The S-matrix can accurately be determined with the R-matrix on various meshes, and in practical calculations the poles can be determined in various ways. However, it is found that the Newton-Raphson procedure used to find Jost-function zeros in this study is more robust in finding Siegert states and Siegert pseudostates than trying to locate S-matrix poles. The second method is the Siegert method of Ref. <cit.>, which is only applicable to s-wave scattering but gives good results where it can be applied. The third method is the simple iterative R-matrix scheme proposed in Ref. <cit.>. When it converges, which is not always guaranteed, it gives very accurate results. Siegert pseudostates could not be determined with this method. The reason for this lack of convergence of the Siegert pseudostates may be because they are mathematically similar to wide resonances, which are numerically difficult to locate. § ACKNOWLEDGEMENTS This work has received funding from the F.R.S.-FNRS under Grant No. 4.45.10.08. as well as from a bilateral grant of the South African National Research Foundation (NRF) and F.R.S.-FNRS, Grant PINT-BILAT-M R.M004.19 “Multi-channel quantum resonances”. We also extend our gratitude to Nicholas Trofimenkoff who contacted one of the authors to point out errors in Ref. <cit.>, which have been corrected here. 99 tocchapterBibliography Taylor J.R. Taylor, Scattering Theory: The Quantum Theory of Nonrelativistic Collisions (Dover, New York, 2000). Rakityansky2022 S.A. Rakityansky, Jost Functions in Quantum Mechanics: A Unified Approach to Scattering, Bound, and Resonant State Problems, (Springer, Berlin, 2022). Tolstikhin1997 O.I. Tolstikhin, V.N. Ostrovsky and H. Nakamura, “Siegert Pseudo-States as a Universal Tool: Resonances, S Matrix, Green Function”, Phys. Rev. Lett. 79, 2026 (1997). Tolstikhin1998 O.I. Tolstikhin, V.N. Ostrovsky and H. Nakamura, “Siegert pseudostate formulation of scattering theory: One-channel case”, Phys. Rev. A 58, 2077 (1998). Rakityansky2013 S.A. Rakityansky and N. Elander, “Analytic structure of the multichannel Jost matrix for potentials with Coulombic tails”, J. Math. Phys. 54, 122112 (2013) Baye2002 D. Baye, J. Goldbeter and J-M. Sparenberg, “Equivalence of the Siegert-pseudostate and Lagrange-mesh R-matrix methods”, Phys. Rev. A 65, 052710 (2002). Baye1998 D. Baye, M. Hesse, J-M. Sparenberg and M. Vincke, “Analysis of the R-matrix method on Lagrange meshes”, J. Phys. B 31, 3439 (1998). Baye2015 D. Baye, “The Lagrange-mesh method”, Phys. Rep. 565, 1 (2015). Descouvemont2010 P. Descouvemont and D. Baye, “The R-matrix theory”, Rep. Prog. Phys. 73, 036301 (2010). Baye2014 D. Baye, J-M. Sparenberg, A.M. Pupasov-Maksimov and B.F. Samsonov, “Single- and coupled-channel radial inverse scattering with supersymmetric transformations”, J. Phys. A 47, 243001 (2014). NIST F.W.J. Olver et al., NIST Handbook of Mathematical Functions (Cambridge University Press, Cambridge, 2010). Rakityansky1996 S.A. Sofianos and S.A. Rakityansky, “Exact method for locating potential resonances and Regge trajectories”, J. Phys. A 30, 3725 (1997). Gaspard2018 D. Gaspard, “Connection formulas between Coulomb wave functions”, J. Math. Phys. 59, 112104 (2018). Burke P.G. Burke, R-Matrix Theory of Atomic Collisions (Springer, Berlin, 2011). Schneider1981 B.I. Schneider, “Direct calculation of resonance energies and widths using an R-matrix approach”, Phys. Rev. A 24, 1 (1981). Descouvemont1990 P. Descouvemont and M. 
Vincke, “Iterative method for resonance properties in the R-matrix theory”, Phys. Rev. A 42, 3835 (1990). Ducru2022 P. Ducru and V. Sobes, “Definite complete invariant parametrization of R-matrix theory”, Phys. Rev. C 105, 024601 (2022). Eckart1930 C. Eckart, “The Penetration of a Potential Barrier by Electrons”, Phys. Rev. 35, 1303 (1930). Bargmann1949 V. Bargmann, “Remarks on the Determination of a Central Field of Force from the Elastic Scattering Phase Shifts”, Phys. Rev. 75, 301 (1949).
http://arxiv.org/abs/2306.10996v1
20230619150512
The correlations of stellar tidal disruption rates with properties of massive black holes and their host galaxies
[ "Yunfeng Chen", "Qingjuan Yu", "Youjun Lu" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA" ]
0000-0001-5393-9853]Yunfeng Chen School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China National Astronomical Observatories, Chinese Academy of Sciences, Beijing, 100101, China; [email protected] Kavli Institute for Astronomy and Astrophysics, and School of Physics, Peking University, Beijing, 100871, China; [email protected] 0000-0002-1745-8064]Qingjuan Yu Kavli Institute for Astronomy and Astrophysics, and School of Physics, Peking University, Beijing, 100871, China; [email protected] 0000-0002-1310-4664]Youjun Lu National Astronomical Observatories, Chinese Academy of Sciences, Beijing, 100101, China; [email protected] School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China Qingjuan Yu Stars can be either disrupted as tidal disruption events (TDEs) or swallowed as a whole by massive black holes (MBHs) at galactic centers when they approach sufficiently close to these MBHs. In this work, we investigate the correlations of such stellar consumption rates with both the MBH mass M and the inner slope of the host galaxy mass density distribution α. We introduce a simplified analytical power-law model with a power-law stellar mass density distribution surrounding MBHs and separate the contributions of two-body relaxation and stellar orbital precession for the stellar orbital angular momentum evolution in nonspherical galaxy potentials. The stellar consumption rates derived from this simplified model can be well consistent with the numerical results obtained with a more realistic treatment of stellar distributions and dynamics around MBHs, providing an efficient way to estimate TDE rates. The origin of the correlations of stellar consumption rates with M and α are explained by the dependence of this analytical model on those MBH/host galaxy properties and by the separation of the stellar angular momentum evolution mechanisms. We propose that the strong positive correlation between the rates of stellar consumption due to two-body relaxation and α provides one interpretation for the overrepresentation of TDEs found in some rare E+A/poststarburst galaxies. We find high TDE rates for giant stars, up to those for solar-type stars. The understanding of the origin of the correlations of the stellar consumption rates will be necessary for obtaining the demographics of MBHs and their host galaxies via TDEs. § INTRODUCTION Tidal disruption event (TDE) happens when a star travels too close to a massive black hole (MBH) so that the tidal force exerted on the star by the MBH surpasses the star's self-gravity, causing the star to be ripped apart and disrupted, accompanied by luminous flares due to subsequent accretion of the stripped stellar material <cit.>. TDEs may frequently occur at the centers of galaxies, as observations have shown the ubiquitous existence of MBHs (with mass ∼10^6–10^10) at galactic nuclei (e.g., ). These TDEs provide rich electromagnetic signals, which help to study the relativistic effects, accretion physics, formation of radio jets and interior structure of torn stars, etc. <cit.>. TDEs illuminate those dormant MBHs, and the event rates in individual galaxies depend on the MBH properties (e.g., , hereafter MT99; ). Therefore, they can serve as powerful probes of the MBH demographics, such as the MBH mass, spin, binarity, and the occupation fraction of MBHs in galaxies <cit.>. 
Depending on the event rate, the consumption of intruding stars may provide an important channel for the growth of MBHs in the centers of galaxies <cit.> and even has significant effects on the MBH spin evolution <cit.>, especially for those relatively less massive ones. TDEs are first discovered as energetic transients in the soft X-ray band by archival searches of the ROSAT All-Sky Survey data <cit.> and later found via searches by multi-wavelength (e.g., UV, optical, X-ray, and gamma-ray) observations <cit.>. Over the past decade, statistically significant samples of TDEs have begun to accumulate, largely owing to the rapid advancement of the optical time domain surveys, such as PTF <cit.>, ASAS-SN <cit.>, Pan-STARRS <cit.>, and ZTF <cit.>. For example, a recent census by <cit.> listed 56 TDE candidates, among which around two thirds are discovered by the wide-field optical time domain surveys. Based on different samples of TDE candidates, the TDE event rate is estimated to be ∼10^-5–10^-4 (e.g., ). Some recent studies on the TDE host galaxies reveal intriguingly that TDEs prefer those rare E+A/poststarburst galaxies <cit.>. After removing the accountable selection effects (e.g., MBH mass, redshift completeness, strong active galactic nucleus presence, bulge colors and surface brightness), the remaining factor of the overrepresentation of TDEs in those rare subclass of galaxies is ∼25–48 <cit.>. The underlying mechanism(s) responsible for this preference is still unclear. Some proposed explanations include binary MBH formation due to a recent galaxy merger, stellar orbital perturbations induced by what has also caused the starburst in the past, a unique population of stars such as an evolved population of A giants which are susceptible for disruptions, and high central stellar density distributions or concentrations (e.g., see ). To properly interpret these observations and thereby distinguish different dynamical mechanisms occurring in the galactic nuclei to produce TDEs, it is necessary to have a thorough understanding of these mechanisms, especially on the dependence of those theoretically predicted rates on the properties of the central MBHs and their host galaxies. In a spherical stellar system composed of a central MBH and surrounding stars, the tidal radius for a given type of stars and the MBH event horizon determine the size of the loss cone in the phase space of the stellar energy and angular momentum, in the sense that a star at a given energy can be consumed (either tidally disrupted or swallowed as a whole) by the MBH when its angular momentum falls below a critical value. The rate of stellar consumptions is determined by the refilling rate of low-angular-momentum stars into the loss cone, since stars initially in the loss cone will be quickly consumed (e.g., within an orbital period). The two-body relaxation process of stars can set a lower limit for the refilling rate of low-angular-momentum stars into the loss cone. In realistic stellar systems, some other mechanisms may also work or even play a dominant role in refilling the loss cone, and therefore enhance the stellar consumption rates. Possible mechanisms include the resonant relaxation <cit.>, massive perturbers which may accelerate the two-body relaxation process <cit.>, binary MBHs or recoiled MBHs <cit.>, as well as the stellar orbital precession within nonspherical galaxy gravitational potentials (MT99; ). 
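For orientation, the competition described above between the tidal radius and the horizon scale can be made explicit with a minimal numerical sketch (illustrative only; it adopts the conventions used later in this paper, r_t=(f M/m_⋆)^{1/3}R_⋆ with f=1 for a solar-type star and a capture radius of 2r_swl=8GM/c^2), which returns the MBH mass above which such a star is swallowed as a whole rather than tidally disrupted:

import numpy as np

G, c = 6.674e-8, 2.998e10            # cgs units
Msun, Rsun = 1.989e33, 6.957e10

def tidal_radius(M_bh, m_star=Msun, R_star=Rsun, f=1.0):
    # r_t = (f*M_bh/m_star)**(1/3) * R_star
    return (f * M_bh / m_star)**(1.0 / 3.0) * R_star

def capture_radius(M_bh):
    # 2*r_swl with r_swl = 4*G*M_bh/c**2
    return 8.0 * G * M_bh / c**2

# separatrix mass: the largest MBH mass for which r_t still exceeds the capture radius
masses = np.logspace(5, 9, 2000) * Msun
separatrix = masses[tidal_radius(masses) > capture_radius(masses)][-1]
print(separatrix / Msun)   # of order 10^7 Msun under these assumptions

This separatrix is the reason why observable TDE flares are restricted to the lower MBH masses discussed below, while more massive MBHs consume low-angular-momentum stars without producing a disruption flare.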
In an earlier work (, hereafter CYL20), a statistical study is conducted on the cosmic distributions of stellar tidal disruptions by MBHs at galactic centers, due to the combined effects of two-body relaxation and orbital precession of stars in triaxial galaxy potentials; and the statistical results reveal the correlations of the stellar consumption rate with both the MBH mass M and the inner slope of the galaxy surface brightness profile γ. A negative correlation between the stellar consumption rate (per galaxy) with the MBH mass (e.g., ) is found to exist only for M≲ 10^7, and the correlation becomes positive for M≳ 10^7. At a given MBH mass M, the stellar consumption rate is higher in galaxies with larger γ (steeper inner surface brightness profile) if M≲ 10^7, but insensitive to γ for MBHs with larger masses. In triaxial galaxy potentials, the phase space of the loss cone described for spherical galaxies can be replaced by the phase space of a general loss region to incorporate the stars that can precess onto low angular momentum orbits. The dichotomic trends of the stellar consumption rate at different MBH mass ranges can be explained by that the dominant fluxes of stellar consumption have different origins of the stellar low-angular-momentum orbits. A further quantitative understanding of the origin of these correlations is of great importance as it provides the key to distinguish the dominant mechanism(s) of TDEs under different circumstances. Apart from that, the current and future TDE observations shed new light on the demographic studies of the MBH population, especially the mass function and the occupation fraction of MBHs at the low-mass end (e.g., ); the correlations of the stellar consumption with the MBH/galaxy properties cannot be ignored in a proper interpretation of the observational results, and should be accompanied with a quantitative understanding of the origin. To understand the origin of these correlations, we employ an analytical model considering power-law stellar distribution under the Keplerian potential of the central MBH. We first verify that this simplified model provides a fairly good approximation when evaluating the event rate of stellar consumption due to the two mechanisms, i.e., two-body relaxation and stellar orbital precession in nonspherical potentials. Then we identify the dominant factor(s) responsible for the slope and the scatter of each correlation. The paper is organized as follows. In Section <ref>, we first briefly describe the galaxy sample used in this study. Then with the sample galaxies, we show the correlations of the stellar consumption rates with the MBH mass and the galaxy inner stellar distribution, as obtained in CYL20. In Section <ref>, we construct the analytical model and obtain the approximated expression for the stellar consumption rate due to either mechanism and inspect the contribution from different terms in the approximated expressions to these correlations. In Section <ref>, we explore the possibility of using such correlations to explain the observed overrepresentation of TDEs in those rare E+A/poststarburst galaxies. Based on the quantitative correlations constructed from the analytical model, we also generalize the discussion from the consumption rates of solar-type stars to those of other different types of stars. 
§ GALAXY SAMPLE AND RATE CORRELATIONS §.§ Galaxy sample In this study, we adopt the two observational samples of early-type galaxies given by <cit.> and <cit.> to investigate the correlation between the stellar consumption rate by the central MBH and either the MBH mass or the host galaxy inner stellar distribution. For both samples, high-spatial-resolution observations of the galaxy surface brightness by HST are available, which are crucial for TDE studies since stars from the inner region of the host galaxy (e.g., inside the influential radius of the central MBH) contribute significantly to the stellar consumption (MT99). Note that these two galaxy samples have also been adopted by CYL20 to study the cosmic distributions of the stellar tidal disruptions and by <cit.> to study the properties of the cosmic population of binary MBHs as well as their gravitational wave emissions. For those early-type galaxies, their surface brightness profiles I(R) can be well described by the Nuker law <cit.>. The best-fitting Nuker-law parameters for the sample galaxies can be found in each source paper. When calculating the stellar disruption/consumption rate, the mass-to-light ratio M/L of the galaxy is also required, since M/L and the surface brightness profile together determine the mass density distribution of the galaxy if it is spherically distributed. For galaxies in <cit.>, we obtain their M/L following Section 2.1 of <cit.>, i.e., their Equations (4)-(5). For galaxies in <cit.>, we note that their r-band M/L can be found in <cit.>, but their Nuker-law parameters were fitted using the V-band observations. In this study, we assume that the V-band M/L for these galaxies are the same as their r-band values. We estimate the MBH masses M for the sample galaxies based on empirical scaling relations. For galaxies in <cit.>, we follow Section 2.1 of <cit.>, i.e., adopting the M–σ relation from <cit.> when σ is available in <cit.>, while adopting the M–L_V relation from <cit.> otherwise. For galaxies in <cit.>, we adopt the M–σ relation from <cit.>, with σ taken from <cit.>. When studying the stellar consumption due to loss-region draining, we assume that each sample galaxy has a triaxial shape characterized by p_ρ=0.9 and q_ρ=0.8, where p_ρ and q_ρ represent the medium-to-major and minor-to-major axis ratios of the galaxy mass density distribution.[We adopt this shape (p_ρ=0.9, q_ρ=0.8) as a representative of generic triaxial stellar systems. The corresponding triaxiality parameter is T_ρ=(1-p_ρ^2)/(1-q_ρ^2)≃ 0.53. As seen from Figure 7 of <cit.>, the loss region (i.e., the stellar reservoir for the refilling of the loss cone in triaxial systems) changes abruptly when T_ρ is very close to 0/1 or q_ρ is very close to 1, while it changes little with other T_ρ and q_ρ values. Therefore, the results obtained in this study will hold even when the host galaxy has a different shape, as long as the shape is not very close to perfectly spherical or axisymmetric ones.] §.§ Stellar consumption rate correlations In a spherical stellar system, the rate of stellar consumption by the central MBH due to two-body relaxation can be evaluated by solving the Fokker-Planck equation and the solution can be found in MT99 (see also ), i.e., ()d = 4π^2 f̅()P()μ̅()J^2()d/ln R_0^-1(). Here f̅() is the “isotropized” distribution function (see Eq. 
21 of MT99), P()≡ P(,J^2=0) is the orbital period of a test particle with “specific binding energy” (hereafter abbreviated as energy) and zero “specific angular momentum” J (hereafter abbreviated as angular momentum), μ̅() is the orbital-averaged diffusion coefficient of the dimensionless angular momentum R()≡ J^2()/J^2() in the limit R→ 0, with J() denoting the angular momentum of a star on a circular orbit with energy , R_0() marks the value of R() at which the distribution function falls to zero. R_0() is given by R_0() = R()× { exp(-q) q ≥ 1 exp(-0.186q-0.824√(q)) q<1 , . where R()≡ J^2()/J^2() represents the relative size of the loss cone at energy in the phase space, with J denoting the loss-cone angular momentum and q()≡ P()μ̅()/ R() representing the ratio of the change of R() for radial orbits during an orbital period to R() (MT99). The stellar consumption rate due to loss-region draining in a nonspherical potential at a sufficiently long consumption time T can be approximated by ()d = 4π^2 f() J^2() exp[-T/P()J^2()/J^2()]d, where in triaxial galaxies J() corresponds to the characteristic angular momentum at energy below which stars can precess into the loss cone, and it is determined by the nonspherical shape of the host galaxy (see Eq. 9 in CYL20 and Eq. 52 in MT99). Due to the draining, stars initially inside the loss region are gradually depleted, and stars initially outside the loss region can be diffused into it due to two-body relaxation. As shown in CYL20, in triaxial galaxies, the loss-region refilling rate can be obtained by generalizing the analysis done for spherical systems, i.e., by replacing J in Equations (<ref>)–(<ref>) with the angular momentum J characterizing the size of the loss region. When the stars initially in the loss region are depleted significantly at T≫ P()J^2()/J^2(), the loss-region refilling rate due to two-body relaxation can be larger than the loss-region draining rate given by Equation (<ref>) and thus dominate the stellar consumption rate. When the loss-region refilling rate due to two-body relaxation dominates the stellar consumption rate, compared with the rate obtained through two-body relaxation in spherical systems, the correction of the stellar consumption rates for triaxial galaxies can be mainly due to the change of the logarithm term of ln R_0^-1() in Equations (<ref>)–(<ref>); and compared to the estimates for spherical systems, the average stellar consumption rates in triaxial galaxies are increased roughly by a similar factor of f^ tri∼3–5 for low-mass MBHs (with masses ranging from M∼ 10^5–10^7M_⊙; see Figure 3 or 4 in CYL20). This rate increasing factor is almost independent of the MBH mass for low-mass MBHs (when T≳ 1; see Fig. 5 of CYL20). That is, the consumption rate–MBH mass correlation obtained by only considering two-body relaxation in spherical systems is similar to that by considering loss-region draining and refilling, except for a normalization difference by a factor of f^ tri∼ 3-5 (see Fig. 3 of CYL20). Thus, below for the purpose of this work on discussing the changing tendencies of the stellar consumption rates with properties of MBHs and host galaxies and for simplicity, we use Equations (<ref>)–(<ref>) obtained due to the two-body relaxation for spherical systems to analyze parts of the origin of the correlations. 
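For concreteness, the piecewise form of Equation (<ref>) translates directly into code. The following minimal sketch (in Python; illustrative only, with names chosen here rather than taken from the original calculation) evaluates the ratio R_0/R at a given q, where q is the orbit-averaged angular-momentum diffusion per orbital period measured in units of the loss-cone (or, in triaxial systems, loss-region) size:

import numpy as np

def R0_over_Rlc(q):
    # Equation (<ref>): exp(-q) for q >= 1, exp(-0.186*q - 0.824*sqrt(q)) for q < 1
    q = np.asarray(q, dtype=float)
    return np.where(q >= 1.0, np.exp(-q), np.exp(-0.186 * q - 0.824 * np.sqrt(q)))

# ln(1/R_0) then enters the denominator of the loss-cone flux of Equation (<ref>);
# e.g., with an illustrative loss-cone size of 1e-5:
q = np.array([0.01, 0.1, 1.0, 10.0])
print(np.log(1.0 / (1e-5 * R0_over_Rlc(q))))

The same function applies when the loss cone is replaced by the loss region of a triaxial galaxy, with q and the angular momenta redefined accordingly as described above.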
The origin of the correlations of the stellar consumption rates with properties of MBHs and host galaxies can be analyzed through the functions of = ∫()d or = ∫()d, where is the total rate of stellar consumption due to two-body relaxation obtained by assuming that galaxies are spherical, and is the total rate of stellar consumption due to the draining of the loss region obtained by assuming that galaxies are non-spherical. By taking into account that at some , () can become subdominant compared to the corresponding loss region refilling rate due to two-body relaxation, the stellar consumption rate (obtained by the integration over ) in a triaxial galaxy, ∼∫max[f^ tri(),()]d, is expected to be larger than max(f^ tri,), as well as being lower than the sum of f^ tri+. Thus the stellar consumption rate is expected to be ∼ if ≫ f^ tri, and ∼ f^ tri if ≪ f^ tri; As to be seen below (e.g., Figs. <ref>-<ref>), both of those two cases cover a significantly large parameter space of host galaxy properties, so that the functions of and can be used to analyze the origin of the correlations of the stellar consumption rates. We estimate those stellar consumption rates of and for each sample galaxy according to Equations (<ref>)–(<ref>). We refer to CYL20 for some detailed procedures to calculate the relevant quantities in these equations. Figure <ref> shows the dependence of the stellar consumption rate due to two-body relaxation on the MBH mass M (left panel) and on the inner slope of the galaxy stellar number/mass density distribution α (right panel), respectively. The stellar consumption rate of each galaxy is evaluated based on Equation (<ref>). As seen from this figure, has a negative correlation with M, and a positive one with α. We conduct linear fittings to the log-log M relation (left panel) and the log-α relation (right panel), respectively. The best-fitting relations are shown by the red dashed line in each panel, with the corresponding slope and intercept labeled in the panel. Similarly, in Figure <ref>, we show the dependence of the stellar consumption rate due to loss-region draining in nonspherical potentials on the mass of the central MBH M and on the inner slope of the galaxy stellar number/mass density distribution α, where we set T=10. In contrast to the case of two-body relaxation, has a positive correlation with M and a mildly negative correlation with α. The best-fitting relations are also shown by the red dashed lines in both panels. As seen from Figures <ref> and <ref>, we have at M≲ 10^7 and at M≳ 10^7. Correspondingly, we apply the correlations of and to interpret the correlation tendencies of the stellar consumption rates in those low-mass and high-mass ranges of MBHs, respectively. Note that the fits shown in Figures <ref> and <ref> cover the whole MBH mass range, as it is a clear way to show the effects of the same mechanism through a large mass range. If the fit is limited to only a small mass range of M≲ 10^7, specifically the correlation between and M could appear quite mild or may not be negative due to the small mass range and large rate scatters. In general, the other correlation tendencies are found not to be affected significantly even if limiting the fit mass range to either M≲ 10^7 or M≳ 10^7. In this work, for connecting the stellar consumption rates with the underlying mechanisms and for simplicity, we use the fit results of the whole mass range below. 
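The linear fits quoted above (and collected in Table <ref> at the beginning of the next section) are ordinary least-squares regressions performed in logarithmic space. A schematic Python sketch is given below for illustration; the data arrays are placeholders standing in for the per-galaxy rates, MBH masses, and inner slopes computed for the sample, not actual sample values.

import numpy as np

def fit_loglog(x, y):
    # least-squares fit of log10(y) = b + k*log10(x); returns (k, b)
    k, b = np.polyfit(np.log10(x), np.log10(y), 1)
    return k, b

def fit_semilog(x, y):
    # least-squares fit of log10(y) = b + k*x (used for the rate-alpha relations)
    k, b = np.polyfit(x, np.log10(y), 1)
    return k, b

# placeholder arrays (illustrative numbers only):
M_bh_8  = np.array([0.01, 0.03, 0.1, 0.5, 2.0])        # MBH mass in units of 1e8 Msun
alpha   = np.array([1.2, 1.5, 1.8, 1.6, 1.9])          # inner density slopes
rate_2b = np.array([3e-4, 2e-4, 1e-4, 8e-5, 5e-5])     # consumption rates in 1/yr

k_M, b_M = fit_loglog(M_bh_8, rate_2b)
k_a, b_a = fit_semilog(alpha, rate_2b)
print(k_M, b_M, k_a, b_a)

Applying the same two fitting functions to the actual per-galaxy quantities yields the slopes and intercepts collected in Table <ref> and discussed throughout the remainder of this section.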
§ THE POWER-LAW STELLAR DISTRIBUTION MODEL

Table <ref>: Best-fit parameters and their 1σ uncertainties for the different correlations shown in Figures <ref>–<ref>, <ref>, and <ref>. The first pair of columns gives fits of the form log_10 Y = b_M + k_M log_10 M_BH,8, and the second pair fits of the form log_10 Y = b_α + k_α α.

Y | b_M | k_M | b_α | k_α
rate due to two-body relaxation | -4.64(0.04) | -0.28(0.04) | -6.61(0.18) | 1.37(0.12)
σ^3/M | -0.40(0.04) | -0.47(0.04) | -2.35(0.20) | 1.37(0.13)
ζ(α) | -0.11(0.01) | -0.06(0.01) | -1.02(0.01) | 0.63(0.00)
ϕ(ω) | -0.72(0.02) | 0.26(0.02) | 0.57(0.11) | -0.90(0.07)
rate due to loss-region draining | -3.54(0.03) | 0.90(0.03) | -2.67(0.31) | -0.64(0.21)
χ(α) | 0.26(0.01) | 0.16(0.01) | 1.18(0.04) | -0.64(0.03)
κ(M,σ) | -0.37(0.04) | 0.69(0.04) | -0.71(0.29) | 0.20(0.19)
[R/0.1]^(2α-1)/5 | -0.18(0.01) | 0.01(0.01) | 0.12(0.02) | -0.21(0.02)

The explanation of the correlations shown in Figures <ref> and <ref> obtained by using Equations (<ref>)-(<ref>) is not explicit, since many of the terms in these equations are calculated numerically for individual galaxies. In the following subsections, we employ a power-law model to obtain approximate expressions for the stellar consumption rates due to both mechanisms. By saying “power-law model” we mean that the stellar number/mass density distribution of the galaxy can be described by a single power law and only the Keplerian potential of the central MBH is considered. We investigate the relative contributions of different terms in this simplified model to the stellar consumption rate and thus identify the dominant factors that lead to these two correlations. As a counterpart, we define the “full model” as the one evaluating the stellar consumption rates through Equations (<ref>)–(<ref>). Below, we describe the details of the power-law model. In the simplified model, we assume that the distribution of stars around the central MBH follows a single power law, i.e., n_∗(r)∝ r^-α, where n_∗(r) is the spatial number density of stars at radius r and α is the power-law index. We define the influential radius of the MBH r to be the radius within which the enclosed stellar mass equals the MBH mass, i.e., M≡ M_∗(r≤ r)= M. We make the power-law assumption based on the finding that stellar diffusion in the energy space gradually drives the distribution of stars in a system containing a central MBH towards a power-law cusp <cit.>. In practice, we adopt the Nuker law <cit.> as a generic description of the surface brightness profiles of the galaxies. The break radii of realistic galaxies R are typically much larger than the influential radii of the MBHs r, ensuring that the power-law density distribution is a fairly good approximation for stars inside r. Therefore, the spatial number density of stars surrounding the MBH can be formulated as n_∗(r) = (3-α)N/4π r^3 (r/r)^-α, where N is the total number of stars enclosed in r. The total number of stars enclosed in any given radius r is N_∗(≤ r)= N(r/r)^3-α. If we assume identical mass of stars, we also have M_∗(≤ r)= M(r/r)^3-α, with M_∗(≤ r) being the total stellar mass enclosed in radius r. Throughout this paper, we make the approximation that α=γ+1, where γ is the inner slope of the Nuker-law surface brightness profile. It turns out to be a good approximation for sample galaxies with γ ≳ 0.2. For galaxies with γ ≲ 0.2, the approximation tends to overestimate the inner slope of the host galaxy stellar number/mass density profile. However, the main conclusions obtained in this paper are not changed by the small number of such galaxies in the sample. In the power-law model, we ignore the self-gravity of the stellar system and assume that the potential is mainly contributed by the central MBH.
We make this assumption based on the fact that the loss-cone consumption rate due to either mechanism peaks around the influential radius of the central MBH if we convert energy to radius based on the radial energy profile for stars in circular orbits, i.e., (r). For example, for most of the sample galaxies, () peaks around r within a factor of 2, while () peaks around r within a factor of 3 when the consumption time is fixed to T=10. Adopting a different T does not significantly affect our results. Under the Keplerian potential of the central MBH, we evaluate the ergodic distribution function based on the Eddington's formula. The resulting differential energy distribution of stars N()d (i.e., the number of stars in the stellar system with energy in the range of →+d) is expressed by N()d = ξ(α)N(/)^α-3 dln, where ξ(α) = √(π)/8(3-α)2^3-α Γ(α+1)/Γ(α-1/2), and ≡σ^2/2 where σ^2≡ GM/r. §.§ Two-body relaxation mechanism We first consider the stellar consumption due to the two-body relaxation by assuming the galaxy gravitational potential are spherical. When the Keplerian potential of the central MBH dominates, we have P(,J^2) ≃ P() ≃ 2π GM/(2)^3/2; and the differential energy distribution can be expressed as N()≃ 4π^2 f()P()J^2() (i.e., see Eq. 5 of MT99). In this case, Equation (<ref>) can be approximated as ()d ≃N()μ̅() d/ln R_0^-1() ≃N()/t()d/ln R_0^-1(), where in the latter expression we have used the inverse of the relaxation timescale 1/t() to approximate the orbital-averaged diffusion coefficient of the angular momentum, μ̅(). The relaxation timescale t() can be expressed as (see and Eq. 1 of ) t() ≃ηQ^2P()/N_∗(≤ r)ln Q ≃2πη GM Q^2/Nσ^3ln Q (/)^3/2-α, where η≃ 1/8, Q≡ M/⟨ m_⋆⟩≃ N with ⟨ m_⋆⟩ representing the mean mass of stars, and P() is the orbital period of a circular orbit at energy . To get the rightmost expression in Equation (<ref>), we assume the dominance of the Keplerian potential of the central MBH and use r/r≃ (/)^-1. Substituting Equations (<ref>) and (<ref>) into Equation (<ref>), we have () ≃ξ(α)σ^3ln Q/2πη GMln R_0^-1() (/)^2α-9/2. If we omit the variance of (), which is a mildly decreasing function of in the range , then () is a decreasing function of as long as α<9/4, as seen from Equation (<ref>). If the inner stellar distribution is core-like, i.e., α is small, () decreases relatively sharply with increasing , whereas if the inner stellar distribution is cusp-like with large α, () decreases relatively mildly with increasing . In real galaxies, () peaks at ≡ω where ω≃ 1. For <, the self-gravity from the stellar system cannot be neglected and Equation (<ref>) does not work any more. As a matter of fact, () is an increasing function of at ≤, instead of a decreasing function as described by Equation (<ref>) for >. Therefore, the stellar consumption rate can be divided into two parts separated by . For the inner part with ≥, the rate can be approximated by integrating () over the range ≥ using Equation (<ref>). For those real galaxies in our sample, we find that the fraction of the stellar consumption rate contributed by the inner part (i.e., with ≥) spans 0.1∼ 0.8, and this fraction tends to be larger for a galaxy with a larger α. Therefore, to compensate for the fraction of the rate contributed by those stars with ≤ which are omitted in the integration, we introduce a fudge factor by integrating () over the range ≥= ω using Equation (<ref>). 
The resulting stellar consumption rate in the power-law model is given by ≃ ∫_ω^∞() dln ≃ζ(α)σ^3(ω)^2α-9/2ln Q/2πη GMln R_0^-1(ω) = ζ(α) ϕ(ω) σ^3/Mln Q/2π Gη, where ζ(α) and ϕ(ω) are defined as ζ(α) = 2/9-4α·ξ(α), and ϕ(ω) = (ω)^2α-9/2/(ω). We find that =1/2 works well empirically. Therefore, we set =1/2 in the following analysis if not stated specifically. We compare the stellar consumption rates calculated from Equations (<ref>)–(<ref>) and those obtained approximately from the power-law model, as shown in Figure <ref>. Apparently, the rates estimated by using the simple power-law model are well consistent with those calculated from the full model (see Eqs. <ref>–<ref> and ), which verifies the effectiveness of the simple power-law model in obtaining the correct rate and suggests the validity of using this simple model to inspect the correlations between with both M and α. According to Equation (<ref>), both the correlation between and M and that between and α are controlled by three terms, i.e., , ζ(α), and ϕ(ω). Figure <ref> shows the dependence of the above three terms on the MBH mass M (left panels) and the inner slope of the host galaxy mass density distribution α (right panels), separately. As seen from this figure, σ^3M^-1 negatively correlates with M, ϕ(ω) positively correlates with M, while the term ζ(α) only weakly correlate with M; both σ^3 M^-1 and ζ(α) positively correlate with α, while ϕ(ω) negatively correlates with α, To describe the relative contribution of each term to the -M relation and the -α relation quantitatively, we conduct a linear least square fitting to the data points shown in each panel of Figure <ref>. The best fit is shown by the red dashed line in each panel and its form is also labeled there. The best-fit parameters with their 1σ uncertainties are also listed in Table <ref>. We identify the main factors that lead to the -M relation and the -α relation quantitatively based on the fitting results (see Tab. <ref> and legends in Figs. <ref> and <ref>). * For the -M relation, the best fit gives a slope of -0.28 (left panel of Fig. <ref>). The best fits shown in the left panels of Figure <ref> (from top to bottom) have slopes of -0.47, -0.06, and 0.26, respectively. The sum of these three slopes is -0.27, which can approximately account for that found in the left panel of Figure <ref>. Among the three terms, dominates the total slope. The contribution from ϕ(ω) can only cancel about half the amount of , and ζ(α) contributes the least to the total slope. Apart from the slope, the data points in the top left panel of Figure <ref> have comparable scatters as those in the left panel of Figure <ref>, and are substantially larger than those in the middle left or the bottom left panel of Figure <ref>. Therefore, the intrinsic scatter of the -M relation is also dominated by term , i.e., by the intrinsic scatter of the correlation between and M, which is in turn determined by the scatter of the correlation between σ and M. * For the -α relation, the best fit gives a slope of 1.37 (right panel of Fig. <ref>). The best fits shown in the right panels of Figure <ref> (from top to bottom) give slopes of 1.37, 0.63, and -0.90, respectively. The sum of these three slopes is 1.10, which is close to that found for the -α relation (right panel of Fig. <ref>), again suggests that the correlation between and α can be explained approximately by the combined effect of these three terms. Among the three terms, dominates the contribution to the total slope. 
While ζ(α) and ϕ(ω) contribute to the total slope in close amounts but different signs so as to be cancelled much. As seen also from the right panels of Figure <ref>, the scatters among the data points in the top panel dominate over those in the middle or bottom panels. Therefore, we can conclude that both the slope and the scatter of the correlation between and α are dominated by the correlation between and α. §.§ Loss-region draining in nonspherical potentials We now consider the correlation between and M or α in galaxies with nonspherical mass distributions. We define the loss-region draining timescale τ()≡ P() J^2()/J^2(), which characterizes how long the reservoir of loss-region stars at energy can sustain the consumption by the central MBH. Similar as in the loss-cone case, we define the dimensionless loss-region angular momentum R()≡ J^2()/J^2(), which characterizes the relative size of the loss region at energy in the phase space. In generic triaxial galaxies, R() can reach the order of ∼0.1 at energies satisfying and decreases with increasing for >. The draining timescale can be then expressed as τ()= P() R()/R(). Therefore, the flux of stars being drained from the loss region into the loss cone to be consumed can be expressed as ()d≃ N()R()d/τ() exp[-T/τ()], where we have again made the approximation P(,J^2)≃ P()≡ P(,J^2=0). Assuming the dominance of the Keplerian potential of the central MBH, we have P()≃ 2π GM/(2)^3/2 and J()≃ GM/(2)^1/2. Define the consumption radius r≡ J^2/2GM, then the loss-region draining timescale can be approximated as τ()≃ π (GM)^2 R()/4√(2)^5/2r. Since typically () is an increasing function of , the total stellar consumption rate at the draining time T due to loss-region draining is dominated by the flux of stars at ≃_T, with _T satisfies τ(_T)=T, which gives _T=[π(GM)^2 R_,T/4√(2)T r]^2/5, where R = R(_T). At energy < _T where the loss region has not been exhausted yet, increases with in a manner that ∼^α-1/2 in the power-law model. Therefore, the total stellar consumption rate due to the loss-region draining can be evaluated as ≃ 2/2α-1f_T_T (_T) = 2ξ(α)/2α-1f_T N R/T (_T/)^α-3, where ξ(α) is given by Equation (<ref>), f_T is a fudge factor introduced to make the rates estimated by the power-law model consistent with those by the full model, and f_T changes from 0.78 at T=0.1 to 0.56 at T=10. Since we analyze the rate correlations at T=10, we adopt f_T=0.56 below. The definition of the consumption radius r naturally leads to a separatrix in the MBH mass M, below which r = r= (f M/m_⋆)^1/3 R_⋆, with m_⋆ and R_⋆ being the mass and radius of the star and f being a dimensionless factor, while above which r = 2r= 8GM/c^2 (r≡ 4GM/c^2; see Eq. 1 in CYL20). For MBHs with masses being below the separatrix, setting m_⋆ and R_⋆ to be the solar mass and solar radius and f=1, we have _T/ = [π (GM)^2R/T rσ^5]^2/5 ≃ 1.24[M/10^8]^2/3 [σ/200]^-2 [R/0.1]^2/5 [T/10]^-2/5. Substituting Equation (<ref>) into Equation (<ref>), the stellar consumption rate due to the loss-region draining when the MBH mass is below the separatrix can be approximated in the power-law model as ≃ 1.24^α-32ξ(α)/2α-1× 10^-3f_T [M/10^8]^2α-3/3 [σ/200]^6-2α [R/0.1]^2α-1/5 [10/T]^2α-1/5. On the other hand, when the MBHs have masses being above the separatrix, we have _T/ = [π (GM)^2R/T rσ^5]^2/5 ≃ 0.735 [M/10^8]^2/5 [σ/200]^-2 [R/0.1]^2/5 [T/10]^-2/5. 
Substituting Equation (<ref>) into Equation (<ref>), the corresponding stellar consumption rate due to the loss-region draining when the MBH mass is above the separatrix can be approximated in the power-law model as ≃ 0.735^α-32ξ(α)/2α-1× 10^-3f_T [M/10^8]^2α-1/5 [σ/200]^6-2α [R/0.1]^2α-1/5 [10/T]^2α-1/5. For the convenience of the following analysis, we define χ(α)= { 1.24^α-3·2ξ(α)/2α-1, r≥ 2r, 0.735^α-3·2ξ(α)/2α-1, r< 2r. . and κ(M,σ) = { M_ BH,8^2/3α-1σ_ h,200^6-2α, r≥ 2r, M_ BH,8^2/5α-1/5σ_ h,200^6-2α, r< 2r. . With these definitions, the stellar consumption rate due to the loss-region draining in the power-law model can be expressed as ≃ 10^-3f_T χ(α)κ(M,σ) × [R/0.1]^2α -1/5 [10/T]^2α-1/5. In the following, we fix the consumption time to T=10. Similarly as done for the case of the two-body relaxation, we compare the stellar consumption rates calculated from Equation (<ref>) and those obtained approximately from the power-law model (Eq. <ref>) as shown in Figure <ref>. As seen from the figure, the rates evaluated based on these two approaches are generally consistent with each other which again verifies the effectiveness of the power-law model in estimating the stellar consumption rate due to the loss-region draining in nonspherical potentials. According to Equation (<ref>), both the correlation between and M and that between and α are controlled by the terms of χ(α), κ(M,σ), and (R/0.1)^2α -1/5, and Figure <ref> shows the dependence of these three terms on M (left panels) and α (right panels), separately. As done for those correlations shown in Figure <ref>, we also conduct linear least square fittings to the data points shown in each panel of Figure <ref>. The best fit is indicated by the red dashed line in each panel and the best fit form is also labeled there. The best-fit parameters with their 1σ uncertainties are also listed in Table <ref>. We use the fitting results to identify the main factors that contribute to the correlation between and M and that between and α (see Tab. <ref> and legends in Figs. <ref> and <ref>). * As seen from the left panel of Figure <ref>, the –M correlation has a slope of 0.90. While according to the left panels of Figure <ref>, the correlations between those three inspected terms (in logarithm) and log M have slopes of 0.16, 0.69, and 0.01, respectively. Overall, these three relations combined can well account for the correlation between and M. Among the three terms, dominates the correlation, which scales as ∼σ^3 and ∼σ^3 M^2/5 when M is below and above the separatrix, respectively, if setting α=3/2. In addition, the term χ(α) contributes minorly to the correlation. By comparing the data points in the left panels of Figure <ref> and the left panel of Figure <ref>, it is clearly revealed that the scatter of the correlation between and M is also dominated by . * As seen from the right panel of Figure <ref>, there exist considerable scatters among the data points. Given the scatters, the correlation between and α appears to be mild. A negative slope (-0.64) is returned when the linear square regression is applied to the data points, suggesting a decrease of about a factor of 4 from α=1 to α=2. From the right panels of Figure <ref>, the best-fit slopes of the three inspected terms (in logarithm) against α are -0.64, 0.20, and -0.21, respectively. Again, these three relations combined together can well account for that correlation between and α. Among the three terms, χ(α) dominates the correlation. 
The scatter of the correlation is largely contributed by the term of . § DISCUSSIONS §.§ Overrepresentation of TDEs in E+A/poststarburst galaxies Some recent studies towards TDE host galaxies suggest that TDEs prefer to happen in those rare E+A/poststarburst galaxies <cit.>. The underlying (physical) explanations for the preference are still unclear, and there may be alternative characteristics to distinguish TDE host galaxies from normal ones <cit.>. Here we discuss whether the preference can be explained by the correlations between the stellar consumption rate and the inner slope of the galaxy stellar number/mass density distribution. Since TDE flares can only be observed when the MBH mass is below the Hills' mass <cit.>, in which regime the two-body relaxation mechanism dominates over the stellar orbital precession mechanism (CYL20), we focus on the correlation between and α. To explore whether the overrepresentation can be explained by the high central stellar density of the galaxies, <cit.> identified a region in the Sersic index and MBH mass parameter space which contained ∼ 2% of their reference catalog galaxies but ≥ 60% of those TDE host galaxies. This means that the averaged TDE rate in those high-Sersic-index galaxies should be higher than that in their low-Sersic-index counterparts by a factor of ∼ 25-48. From the right panel of Figure <ref>, the mean value of log increases by 1.37 when α is increased from 1 to 2, or equivalently when γ (the inner slope of the galaxy surface brightness profile) is increased from 0 to 1. As also seen from the panel, the intrinsic scatter of the correlation has no significant dependence on α. Therefore, the averaged consumption rate ⟨⟩ is increased by a factor of ∼ 24 from α=1 to α=2, which is within a factor of 2 of the analysis by <cit.>. Given the limited number of TDEs in the observational analysis (i.e., 5–10 events), it suggests that the correlation between and α may be responsible for the overrepresentation reported by recent studies towards TDE host galaxies. The overrepresentation may also be contributed by draining of the loss-region stars in nonspherical potentials, as those poststarburst systems may also have a higher degree of asymmetry in their shapes as compared with generic galaxies. The presence of A type stars in the nuclear regions of those E+A/poststarburst galaxies indicates their relatively young ages and dynamical states (e.g., ), and therefore a relatively full loss region and a larger draining rate. As the loss region draining rate decreases with increasing time T, we can define the time T at which the loss region draining rate (T=T) equals to the loss cone refilling rate f^ tri. We have (T) f^ tri at T T and (T) f^ tri at T T. By using the scaling dependence of on time T shown in Equation (<ref>) (i.e., ∝ T^(1-2α)/5) and the scaling dependence of and on M shown at T=10 in Figures <ref>–<ref>, we can obtain the time T by (2α-1/5) log(T/10)≃ 1.10-log f^ tri+1.18log M_ bh,8. For example, we have T≃ 10 (f^ tri 3)^-0.4 (M3× 10^7M_⊙)^2.95 if α=3/2 in Equation (<ref>). Compared with the stellar consumption rate in a system with age ∼10, the enhancement of stellar consumption rate due to the draining of the loss-region stars in a young system with dynamical age T can be estimated by a factor of (T)[min(10,T)]≃ [min(10,T) T]^2α-1/5. If assuming the age T=0.1, and f^ tri≃ 3 and α=3/2 in Equation (<ref>), the factor is ∼ 6 for M 3× 10^7M_⊙ (for which T10) and smaller for lower MBH masses (e.g. ∼ 2 for M=10^7M_⊙). 
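For concreteness, the scalings above can be evaluated in a few lines of code. The sketch below is ours rather than CYL20's; it assumes the times are in Gyr and uses our own function names. It reproduces the quoted numbers: T_eq ≈ 10 for M ≈ 3×10^7 M_⊙ with f_tri ≈ 3 and α = 3/2, and an enhancement factor of ≈ 6 (≈ 2) at age T = 0.1 for M = 3×10^7 M_⊙ (10^7 M_⊙).

```python
import numpy as np

def t_eq_gyr(M_bh, f_tri=3.0, alpha=1.5):
    """Time at which the loss-region draining rate drops to the triaxial
    loss-cone refilling rate, from the scaling relation in the text:
    (2*alpha - 1)/5 * log10(T_eq/10) ~ 1.10 - log10(f_tri) + 1.18*log10(M/1e8 Msun)."""
    M8 = M_bh / 1e8
    log_teq_over_10 = (1.10 - np.log10(f_tri) + 1.18 * np.log10(M8)) / ((2 * alpha - 1) / 5.0)
    return 10.0 * 10.0**log_teq_over_10

def draining_enhancement(T_gyr, M_bh, f_tri=3.0, alpha=1.5):
    """Enhancement of the loss-region draining rate in a dynamically young system
    of age T relative to an old system (age 10 Gyr or T_eq, whichever is smaller),
    using the scaling rate ~ T**((1 - 2*alpha)/5)."""
    T_ref = min(10.0, t_eq_gyr(M_bh, f_tri, alpha))
    return (T_ref / T_gyr)**((2 * alpha - 1) / 5.0)

if __name__ == "__main__":
    for M in (1e7, 3e7):
        print(f"M = {M:.0e} Msun: T_eq ~ {t_eq_gyr(M):.2f} Gyr, "
              f"enhancement at T = 0.1 Gyr ~ {draining_enhancement(0.1, M):.1f}")
```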
Those estimation obtained from the above analytical scaling fittings are consistent with those shown in Figure 5 of CYL20. Overall, the TDE event rate in the E+A galaxies may be enhanced by a factor of several tens as compared with generic galaxies. Among them, the effect due to larger α or higher central densities of E+A galaxies dominates the rate enhancement, while draining of the loss-region stars in these dynamically young systems may also contribute a factor of a few, depending on the dynamical age of the system and the MBH mass. We expect that the above explaination to the overrepresentation of the TDEs due to the different contributions will be testable through a significant accumulation of TDE observations along with observations of their host galaxy properties. §.§ Generalization for different types of stars In the above analysis, we assume a single stellar population with solar mass and solar radius. In reality, however, the stellar system is composed of a spectrum of stars with different masses m_∗ and radii R_∗. For each species in the stellar system, the mass separatrix of MBHs that can tidally disrupt or directly swallow low-angular-momentum stars is determined by a comparison of r_ t and 2r_ swl, as mentioned in Section <ref>. Different types of stars have different M separatrixes between TDEs and direct capture events, which depend on stellar mass and radius by a scaling factor of (m_∗/M_⊙)^-1/2 (R_∗/R_⊙)^3/2. In this subsection, we discuss how the mass spectrum may affect the stellar consumption rate estimations and their correlation tendencies and how our above scaling analysis can imply for the rates of different types of stars. We discuss the effects of the mass spectrum on the stellar consumption rate estimations and their correlation tendencies through their effects on and as follows. * : The overall stellar consumption rate due to two-body relaxation, , is not affected significantly by the generalization of the single-mass stellar population assumption (MT99). As shown in Appendix A in MT99, if replacing the single-mass stellar population with an old stellar population (e.g., the Kroupa initial mass function between 0.08M_⊙ and 1M_⊙), will increase by a factor of ∼ 1.66, due to the combined effects of the increased stellar number density and the decreased stellar diffusion rate. The consumption rate of a given type of stars is then roughly proportional to its number fraction among all the stars, if the mass segregation effect is not significant. However, if the mass segregation effect is important, then the consumption rate of high-mass stars may be further enhanced and that of low-mass stars may be weakened. For each species in the stellar system, the consumption rate due to two-body relaxation varies with the changes of the number fraction of the species and the logarithmic terms ln Q and ln R_0^-1() in Equations (<ref>)–(<ref>). If we ignore the generally weak variations of the two logarithmic terms, the rate correlation tendencies due to two-body relaxation apply effectively to disrupted or swallowed stars of different types, including giant stars (with m_∗∼ M_⊙, R_∗∼ 10–1000 R_⊙) and white dwarfs (with m_∗∼ M_⊙, R_∗∼ 0.01 R_⊙). * : The stellar consumption rate due to loss-region draining, , is affected by relaxing the single-type stellar population assumption in the following three aspects. (a) is proportional to the number density of stars (corresponding to the N term in Eq. <ref>). 
If the single stellar population with solar mass and radius considered above in Section <ref> is replaced by an old population of stars (e.g., the Kroupa initial mass function between 0.08M_⊙ and 1M_⊙), the change of the total stellar number density will lead to an increase of by a factor of 5.33. The rate of each given species is proportional to the number fraction of the given species. (b) can also depend on mass and radius of each type of stars via their different loss-cone size or the consumption radius r_ consp. According to Equations (<ref>) and (<ref>), we have ∝r_ consp^-2α+6 5, which is ∝ (m_∗/M_⊙)^2α-6/15 (R_∗/R_⊙)^-2α+6/5 for MBHs with mass below the mass separatrix between TDEs and direct capture events (Eq. <ref>). For MBHs with mass above the mass separatrix, the variable of r_ consp in is ∝ M and does not contribute to the dependence on m_∗ and R_∗ (Eq. <ref>). (c) can depend on the evolutionary ages of the different types of stars through the factor of T^-2α+1/5 in Equation (<ref>). As for the loss region draining processes, the different stellar species move in the same triaxial galactic potential and are independent of each other, and the rate correlation tendencies with M and α due to loss-region draining obtained in Section <ref> (see Eqs. <ref> and <ref>) can be directly applied to the different stellar species, although the detailed fit correlation cooefficients could be affected quantitatively by their differences in the mass separatrix of MBHs between TDEs and direct capture events. Based on the above analysis, we discuss the implications specifically for the following types of stars which have different characteristic masses and radii. * Giant stars: For giant stars (with m_∗∼ M_⊙, R_∗∼ 10–1000 R_⊙), the upper mass boundary of MBHs being able to produce TDE flares increases to much larger values, i.e., ≫ 10^8 M_⊙. Based on Equation (<ref>) and including the enlarged loss-cone size since the transition to giant star from their main-sequence stages, the ratio of for giant stars to that for solar-type stars can be estimated by the factor of (m_∗/M_⊙)^2α-6/15 (R_∗/R_⊙)^-2α+6/5(T/10)^-2α+1/5f_ g∼ 1, if adopting α=3/2, m_*∼ M_⊙, the lifetime T∼ 1 for R_*∼ 10 R_⊙ giants or T=0.01 for R_*∼ 1000 R_⊙ giants, and if the number ratio of the giants to solar-type stars f_ g is estimated by ∼ T/10. Note that the estimates of for giant stars in the above example are significantly high, in which the TDE rates for giant stars can be up to those for solar-type stars. For TDEs of giant stars, we expect that the correlation between the TDE rates and the MBH mass is similar as obtained from the single solar-type stellar population assumption, i.e., dominated by the negative –M correlation at small M, and by the positive –M correlation at large M, The transitional MBH mass between those two correlation tendencies with M should be smaller than ∼10^7 M_⊙ due to the relative increase in loss-region draining rates . At the mass range below or above the transitional MBH mass, the analysis of the correlation tendencies of the TDE rates with M and α are expected to follow those shown in Section <ref>. * Massive young stars: For massive stars, the upper mass boundary of MBHs being able to produce TDE flares can also increase to much larger values, i.e., ≫ 10^8 M_⊙. Note that the increase of due to the young dynamical age has been discussed in Section <ref>. 
For massive stars, the dependence of on (m_∗/M_⊙)^2α-6/15 (R_∗/R_⊙)^-2α+6/5 is relatively not strong, for example, can increase by a factor of ∼ 2–4 (for m_∗∼ 10–100 M_⊙ and R_*∝ m_∗^0.8), if adopting α=3/2, due to the enlarged loss-cone size. The analysis on the correlation tendencies of and with M and α and the transitional MBH mass of the correlations for giant stars above can also be applied to the analysis for TDE samples of massive young stars (e.g., T≪ 1). * Stellar compact remnants (white dwarfs, neutron stars, and stellar-mass BHs): Stellar compact remnants can generally be swallowed directly by MBHs when they move sufficiently close to the MBH, with bursts of gravitational waves, except that white dwarfs can be tidally disrupted by MBHs if M≲ 10^5M_⊙ (at the lower boundary of or beyond the mass range considered in this paper). The TDE rates of white dwarfs should be dominated by the loss-region refill rate due to two-body relaxation, and the rate estimation is subject to the uncertainties in the statistics of the MBH population at the low-M end and their stellar environments. For M> 10^5M_⊙, if the mass segregation effect of compact objects at galactic centers is ignored, the direct capture rates of compact objects and their correlation tendencies follow the similar analysis on and shown in Section <ref>, with including the number fraction of the different types of compact objects. Note that the loss-region draining rates of the different types of compact objects are irrelevant with their detailed masses and radii, as their r_ consp =2r_ swl are the same given the same M (see also Eq. <ref>). § CONCLUSIONS In this work, we study the correlations of the stellar consumption rates by the central MBHs of galaxies with both the MBH mass M and the inner slope of the host galaxy stellar number/mass density distribution α. The rates of stellar consumption due to two-body relaxation and stellar orbital precession in nonspherical potentials are considered. By exploiting a simplified power-law model, i.e., considering a single power-law stellar number/mass density distribution under the Keplerian potential of the central MBH, we derive approximated expressions for the stellar consumption rates due to both the mechanisms. Then by inspecting the relative contributions from different terms in the approximated rate expressions to the correlations, we identify the dominant factor(s) responsible for both the slopes and scatters of the correlations. We summarize the main conclusions of this study below. * In both cases of the two-body relaxation in spherical galaxy potentials and the loss-region draining in nonspherical potentials, the stellar consumption rates estimated based on the power-law model and are consistent with the rates estimated based on the full model, i.e., and , respectively. This not only verifies the effectiveness of using the simplified power-law model to inspect the correlations, but also provides an efficient and simple way to estimate the stellar consumption/flaring rates due to both the mechanisms. * correlates negatively with M while positively with α. As for the –M correlation, the best-fit linear relation has a slope of -0.28. Both the slope and the scatter of the correlation are dominated by the term σ^3 M^-1 in the approximated expression of the stellar consumption rate , where σ^2≡ GM/r and the influential radius of the central MBH r is defined as the radius within which the stellar mass equals the MBH mass. 
As for the –α correlation, the best-fit linear relation has a slope of 1.37. Again, both the slope and the scatter of the correlation are dominated by σ^3 M^-1. * correlates positively with M while negatively with α. The latter correlation appears to be mild due to the large scatter in the relation between and α. As for the –M correlation, the best-fit linear relation has a slope of 0.90. Both the slope and scatter of the correlation are dominated by the term (Eq. <ref>) in the approximated expression of , which scales as ∼σ^3 and ∼σ^3 M^2/5 when the M is below and above the separatrix, respectively, if set α=3/2. As for the –α correlation, the best-fit linear relation has a slope of -0.64. The term χ(α) (Eq. <ref>) in dominates the slope while dominates the scatter of the correlation. * The above correlations of and serve as the backbones of the correlation tendencies of the stellar consumption rates at the low-mass (M≲ 10^7) and the high-mass (M≳ 10^7) ranges of MBHs, respectively. * We use the –α correlation to explain the overrepresentation of TDEs in those rare E+A/poststarburst galaxies found by some recent observational studies (e.g., ). According to the correlation, the expectation value of is increased by a factor of ∼24 when α is increased from 1 to 2, or equivalently, when γ is increased from 0 (core-like) to 1 (cuspy). This factor is broadly consistent with the overrepresentation factor found by <cit.>, i.e., ∼25–48, indicating that the preference of TDEs in these rare subclass of galaxies can be largely explained by the –α correlation. Besides the –α correlation, loss-region draining in these dynamically young systems can also enhance the observed TDE rate by a factor of a few, depending on the dynamical age and the MBH mass. Future observations of TDEs and their host galaxy properties are expected to test those different contribution origins. * The stellar consumption rates and their correlation tendencies are discussed for different types of stars, including giant stars, massive young stars, and stellar compact remnants. We find that the estimates of the TDE rates of giant stars can be high enough to be up to or those of solar-type stars, due to the large tidal disruption radii of giant stars. How to distinguish the different TDEs of giant stars and main-sequence stars in observations deserves further investigations. With the increasing power of the time domain surveys, the TDE samples are expanding rapidly in recent years <cit.>, which enables the study of the MBH demographics from a brand new perspective (e.g., ). For example, TDEs illuminate those dormant MBHs, which provides a powerful tool to study the mass function and occupation fraction of MBHs in quiescent galaxies, especially at the low-mass end where most, if not all, TDEs occur <cit.>. In addition, the upper mass limit of a MBH that can tidally disrupt a star depends sensitively on the MBH spin <cit.>. Therefore, the observed MBH mass distribution near ∼10^8–10^9 from a large sample of TDEs can set strong constraints on the spin distributions of MBHs inside the galaxy centers. Moreover, a stellar system containing a binary MBH or a recoiled MBH could undergo a short period of time during which TDEs promptly happen <cit.>. Despite these merits, the results from TDE observations should be interpreted with cautions when constraining the MBH demographics, since our study reveals that some bias may be induced by the different galaxy properties. 
Therefore, we conclude that a thorough understanding of the dependence of the TDE rate on both properties of MBHs and their host galaxies is the prerequisite of the MBH demographic study with TDE observations. § ACKNOWLEDGMENTS This work is partly supported by the National Natural Science Foundation of China (grant Nos. 12173001, 11721303, 11873056, 11690024, 11991052), the National SKA Program of China (grant No. 2020SKA0120101), National Key Program for Science and Technology Research and Development (grant Nos. 2020YFC2201400, 2016YFA0400703/4), and the Strategic Priority Program of the Chinese Academy of Sciences (grant No. XDB 23040100). 0 natexlab#1#1 [Alexander(2017)]Alexander17 Alexander, T. 2017, https://ui.adsabs.harvard.edu/abs/2017ARA , 55, 17. doi:10.1146/annurev-astro-091916-055306 [Arcavi et al.(2014)]Arcavi14 Arcavi, I., Gal-Yam, A., Sullivan, M., et al. 2014, https://ui.adsabs.harvard.edu/abs/2014ApJ...793...38A , 793, 38. doi:10.1088/0004-637X/793/1/38 [Auchettl et al.(2018)]Auchettl18 Auchettl, K., Ramirez-Ruiz, E., & Guillochon, J. 2018, https://ui.adsabs.harvard.edu/abs/2018ApJ...852...37A , 852, 37. doi:10.3847/1538-4357/aa9b7c [Bade et al.(1996)]Bade96 Bade, N., Komossa, S., & Dahlem, M. 1996, https://ui.adsabs.harvard.edu/abs/1996A , 309, L35 [Bahcall & Wolf(1976)]BW76 Bahcall, J. N. & Wolf, R. A. 1976, https://ui.adsabs.harvard.edu/abs/1976ApJ...209..214B , 209, 214. doi:10.1086/154711 [Bar-Or et al.(2013)]BarOr13 Bar-Or, B., Kupi, G., & Alexander, T. 2013, https://ui.adsabs.harvard.edu/abs/2013ApJ...764...52B , 764, 52. doi:10.1088/0004-637X/764/1/52 [Bellm et al.(2019)]Bellm19 Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, https://ui.adsabs.harvard.edu/abs/2019PASP..131a8002B , 131, 018002. doi:10.1088/1538-3873/aaecbe [Beloborodov et al.(1992)]Beloborodov92 Beloborodov, A. M., Illarionov, A. F., Ivanov, P. B., et al. 1992, https://ui.adsabs.harvard.edu/abs/1992MNRAS.259..209B , 259, 209. doi:10.1093/mnras/259.2.209 [Brockamp et al.(2011)]Brockamp11 Brockamp, M., Baumgardt, H., & Kroupa, P. 2011, https://ui.adsabs.harvard.edu/abs/2011MNRAS.418.1308B , 418, 1308. doi:10.1111/j.1365-2966.2011.19580.x [Cappellari et al.(2013)]Cappellari13 Cappellari, M., Scott, N., Alatalo, K., et al. 2013, https://ui.adsabs.harvard.edu/abs/2013MNRAS.432.1709C , 432, 1709. doi:10.1093/mnras/stt562 [Chambers et al.(2016)]Chambers16 Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, https://ui.adsabs.harvard.edu/abs/2016arXiv161205560C arXiv:1612.05560 [Chen et al.(2011)]CSMetal11 Chen, X., Sesana, A., Madau, P., et al. 2011, https://ui.adsabs.harvard.edu/abs/2011ApJ...729...13C , 729, 13. doi:10.1088/0004-637X/729/1/13 [Chen et al.(2020a)]CYL20bbh Chen, Y., Yu, Q., & Lu, Y. 2020a, https://ui.adsabs.harvard.edu/abs/2020ApJ...897...86C , 897, 86. doi:10.3847/1538-4357/ab9594 [Chen et al.(2020b)]CYL20tde Chen, Y., Yu, Q., & Lu, Y. 2020b, https://ui.adsabs.harvard.edu/abs/2020ApJ...900..191C , 900, 191. doi:10.3847/1538-4357/aba950 (CYL20) [Cohn & Kulsrud(1978)]CK78 Cohn, H. & Kulsrud, R. M. 1978, https://ui.adsabs.harvard.edu/abs/1978ApJ...226.1087C , 226, 1087. doi:10.1086/156685 [Donley et al.(2002)]Donley02 Donley, J. L., Brandt, W. N., Eracleous, M., et al. 2002, https://ui.adsabs.harvard.edu/abs/2002AJ....124.1308D , 124, 1308. doi:10.1086/342280 [Esquej et al.(2008)]Esquej08 Esquej, P., Saxton, R. D., Komossa, S., et al. 2008, https://ui.adsabs.harvard.edu/abs/2008A , 489, 543. 
doi:10.1051/0004-6361:200810110 [Ferrarese & Merritt(2000)]Ferrarese00 Ferrarese, L. & Merritt, D. 2000, https://ui.adsabs.harvard.edu/abs/2000ApJ...539L...9F , 539, L9. doi:10.1086/312838 [Fialkov & Loeb(2017)]Fialkov17 Fialkov, A. & Loeb, A. 2017, https://ui.adsabs.harvard.edu/abs/2017MNRAS.471.4286F , 471, 4286. doi:10.1093/mnras/stx1755 [Freitag & Benz(2002)]Freitag02 Freitag, M. & Benz, W. 2002, https://ui.adsabs.harvard.edu/abs/2002A , 394, 345. doi:10.1051/0004-6361:20021142 [French et al.(2016)]French16 French, K. D., Arcavi, I., & Zabludoff, A. 2016, https://ui.adsabs.harvard.edu/abs/2016ApJ...818L..21F , 818, L21. doi:10.3847/2041-8205/818/1/L21 [French et al.(2017)]French17 French, K. D., Arcavi, I., & Zabludoff, A. 2017, https://ui.adsabs.harvard.edu/abs/2017ApJ...835..176F , 835, 176. doi:10.3847/1538-4357/835/2/176 [French et al.(2020)]French20 French, K. D., Wevers, T., Law-Smith, J., et al. 2020, https://ui.adsabs.harvard.edu/abs/2020SSRv..216...32F , 216, 32. doi:10.1007/s11214-020-00657-y [Gebhardt et al.(2000)]Gebhardt00 Gebhardt, K., Bender, R., Bower, G., et al. 2000, https://ui.adsabs.harvard.edu/abs/2000ApJ...539L..13G , 539, L13. doi:10.1086/312840 [Gezari(2021)]Gezari21 Gezari, S. 2021, https://www.annualreviews.org/doi/abs/10.1146/annurev-astro-111720-030029 , 59, 21. doi:10.1146/annurev-astro-111720-030029 [Graham(2016)]Graham16 Graham, A. W. 2016, https://ui.adsabs.harvard.edu/abs/2016ASSL..418..263G Galactic Bulges, 418, 263. doi:10.1007/978-3-319-19378-6_11 [Graur et al.(2018)]Graur18 Graur, O., French, K. D., Zahid, H. J., et al. 2018, https://ui.adsabs.harvard.edu/abs/2018ApJ...853...39G , 853, 39. doi:10.3847/1538-4357/aaa3fd [Greiner et al.(2000)]Greiner00 Greiner, J., Schwarz, R., Zharikov, S., et al. 2000, https://ui.adsabs.harvard.edu/abs/2000A , 362, L25 [Grupe et al.(1999)]Grupe99 Grupe, D., Thomas, H.-C., & Leighly, K. M. 1999, https://ui.adsabs.harvard.edu/abs/1999A , 350, L31 [Hammerstein et al.(2021)]Hammerstein21 Hammerstein, E., Gezari, S., van Velzen, S., et al. 2021, https://ui.adsabs.harvard.edu/abs/2021ApJ...908L..20H , 908, L20. doi:10.3847/2041-8213/abdcb4 [Hills(1975)]Hills75 Hills, J. G. 1975, https://ui.adsabs.harvard.edu/abs/1975Natur.254..295H , 254, 295. doi:10.1038/254295a0 [Hopman & Alexander(2006)]Hopman06RR Hopman, C. & Alexander, T. 2006, https://ui.adsabs.harvard.edu/abs/2006ApJ...645.1152H , 645, 1152. doi:10.1086/504400 [Ivanov et al.(2005)]Ivanov05 Ivanov, P. B., Polnarev, A. G., & Saha, P. 2005, https://ui.adsabs.harvard.edu/abs/2005MNRAS.358.1361I , 358, 1361. doi:10.1111/j.1365-2966.2005.08843.x [Kesden(2012)]Kesden12 Kesden, M. 2012, https://ui.adsabs.harvard.edu/abs/2012PhRvD..85b4037K , 85, 024037. doi:10.1103/PhysRevD.85.024037 [Khabibullin & Sazonov(2014)]Khabibullin14 Khabibullin, I. & Sazonov, S. 2014, https://ui.adsabs.harvard.edu/abs/2014MNRAS.444.1041K , 444, 1041. doi:10.1093/mnras/stu1491 [Komossa & Greiner(1999)]Komossa99 Komossa, S. & Greiner, J. 1999, https://ui.adsabs.harvard.edu/abs/1999A , 349, L45 [Komossa(2015)]Komossa15 Komossa, S. 2015, https://ui.adsabs.harvard.edu/abs/2015JHEAp...7..148K Journal of High Energy Astrophysics, 7, 148. doi:10.1016/j.jheap.2015.04.006 [Kormendy & Ho(2013)]KH13 Kormendy, J. & Ho, L. C. 2013, https://ui.adsabs.harvard.edu/abs/2013ARA , 51, 511. doi:10.1146/annurev-astro-082708-101811 [Krajnović et al.(2013)]Krajnovic13 Krajnović, D., Karick, A. M., Davies, R. L., et al. 2013, https://ui.adsabs.harvard.edu/abs/2013MNRAS.433.2812K , 433, 2812. 
doi:10.1093/mnras/stt905 [Lauer et al.(1995)]Lauer95 Lauer, T. R., Ajhar, E. A., Byun, Y.-I., et al. 1995, https://ui.adsabs.harvard.edu/abs/1995AJ....110.2622L , 110, 2622. doi:10.1086/117719 [Lauer et al.(2007)]Lauer07bh Lauer, T. R., Faber, S. M., Richstone, D., et al. 2007, https://ui.adsabs.harvard.edu/abs/2007ApJ...662..808L , 662, 808. doi:10.1086/518223 [Lauer et al.(2007)]Lauer07sb Lauer, T. R., Gebhardt, K., Faber, S. M., et al. 2007, https://ui.adsabs.harvard.edu/abs/2007ApJ...664..226L , 664, 226. doi:10.1086/519229 [Law et al.(2009)]Law09 Law, N. M., Kulkarni, S. R., Dekany, R. G., et al. 2009, https://ui.adsabs.harvard.edu/abs/2009PASP..121.1395L , 121, 1395. doi:10.1086/648598 [Law-Smith et al.(2017)]LawSmith17 Law-Smith, J., Ramirez-Ruiz, E., Ellison, S. L., et al. 2017, https://ui.adsabs.harvard.edu/abs/2017ApJ...850...22L , 850, 22. doi:10.3847/1538-4357/aa94c7 [Leloudas et al.(2016)]Leloudas16 Leloudas, G., Fraser, M., Stone, N. C., et al. 2016, https://ui.adsabs.harvard.edu/abs/2016NatAs...1E...2L Nature Astronomy, 1, 0002. doi:10.1038/s41550-016-0002 [Lightman & Shapiro(1977)]LS77 Lightman, A. P. & Shapiro, S. L. 1977, https://ui.adsabs.harvard.edu/abs/1977ApJ...211..244L , 211, 244. doi:10.1086/154925 [Magorrian & Tremaine(1999)]MT99 Magorrian, J. & Tremaine, S. 1999, https://ui.adsabs.harvard.edu/abs/1999MNRAS.309..447M , 309, 447. doi:10.1046/j.1365-8711.1999.02853.x (MT99) [Maksym et al.(2010)]Maksym10 Maksym, W. P., Ulmer, M. P., & Eracleous, M. 2010, https://ui.adsabs.harvard.edu/abs/2010ApJ...722.1035M , 722, 1035. doi:10.1088/0004-637X/722/2/1035 [McConnell & Ma(2013)]MM13 McConnell, N. J. & Ma, C.-P. 2013, https://ui.adsabs.harvard.edu/abs/2013ApJ...764..184M , 764, 184. doi:10.1088/0004-637X/764/2/184 [Mummery & Balbus(2020)]Mummery20 Mummery, A. & Balbus, S. A. 2020, https://ui.adsabs.harvard.edu/abs/2020MNRAS.497L..13M , 497, L13. doi:10.1093/mnrasl/slaa105 [Perets et al.(2007)]Perets07 Perets, H. B., Hopman, C., & Alexander, T. 2007, https://ui.adsabs.harvard.edu/abs/2007ApJ...656..709P , 656, 709. doi:10.1086/510377 [Ramsden et al.(2022)]Ramsden22 Ramsden, P., Lanning, D., Nicholl, M., et al. 2022, https://ui.adsabs.harvard.edu/abs/2022MNRAS.tmp.1745R . doi:10.1093/mnras/stac1810 [Rau et al.(2009)]Rau09 Rau, A., Kulkarni, S. R., Law, N. M., et al. 2009, https://ui.adsabs.harvard.edu/abs/2009PASP..121.1334R , 121, 1334. doi:10.1086/605911 [Rauch & Tremaine(1996)]RT96 Rauch, K. P. & Tremaine, S. 1996, https://ui.adsabs.harvard.edu/abs/1996NewA....1..149R , 1, 149. doi:10.1016/S1384-1076(96)00012-7 [Rees(1988)]Rees88 Rees, M. J. 1988, https://ui.adsabs.harvard.edu/?#abs/1988Natur.333..523R , 333, 523 [Shappee et al.(2014)]Shappee14 Shappee, B., Prieto, J., Stanek, K. Z., et al. 2014, https://ui.adsabs.harvard.edu/abs/2014AAS...22323603S [Stone & Loeb(2011)]Stone11 Stone, N. & Loeb, A. 2011, https://ui.adsabs.harvard.edu/abs/2011MNRAS.412...75S , 412, 75. doi:10.1111/j.1365-2966.2010.17880.x [Stone & Loeb(2012)]Stone12 Stone, N. & Loeb, A. 2012, https://ui.adsabs.harvard.edu/abs/2012MNRAS.422.1933S , 422, 1933. doi:10.1111/j.1365-2966.2012.20577.x [Stone & Metzger(2016)]Stone16 Stone, N. C. & Metzger, B. D. 2016, https://ui.adsabs.harvard.edu/abs/2016MNRAS.455..859S , 455, 859. doi:10.1093/mnras/stv2281 [Tremaine et al.(2002)]Tremaine02 Tremaine, S., Gebhardt, K., Bender, R., et al. 2002, https://ui.adsabs.harvard.edu/abs/2002ApJ...574..740T , 574, 740. doi:10.1086/341002 [van Velzen & Farrar(2014)]vanVelzen14 van Velzen, S. & Farrar, G. R. 
2014, https://ui.adsabs.harvard.edu/abs/2014ApJ...792...53V , 792, 53. doi:10.1088/0004-637X/792/1/53 [van Velzen(2018)]vanVelzen18 van Velzen, S. 2018, https://ui.adsabs.harvard.edu/abs/2018ApJ...852...72V , 852, 72. doi:10.3847/1538-4357/aa998e [Vasiliev(2014)]Vasiliev14 Vasiliev, E. 2014, https://ui.adsabs.harvard.edu/abs/2014CQGra..31x4002V Classical and Quantum Gravity, 31, 244002. doi:10.1088/0264-9381/31/24/244002 [Wang & Merritt(2004)]Wang04 Wang, J. & Merritt, D. 2004, https://ui.adsabs.harvard.edu/abs/2004ApJ...600..149W , 600, 149. doi:10.1086/379767 [Wang et al.(2012)]WZKetal12 Wang, T.-G., Zhou, H.-Y., Komossa, S., et al. 2012, https://ui.adsabs.harvard.edu/abs/2012ApJ...749..115W , 749, 115. doi:10.1088/0004-637X/749/2/115 [Wegg & Nate Bode(2011)]Wegg11 Wegg, C. & Nate Bode, J. 2011, https://ui.adsabs.harvard.edu/abs/2011ApJ...738L...8W , 738, L8. doi:10.1088/2041-8205/738/1/L8 [Yu(2002)]Yu02 Yu, Q. 2002, https://ui.adsabs.harvard.edu/abs/2002MNRAS.331..935Y , 331, 935. doi:10.1046/j.1365-8711.2002.05242.x [Yu(2003)]Yu03 Yu, Q. 2003, https://ui.adsabs.harvard.edu/abs/2003MNRAS.339..189Y , 339, 189. doi:10.1046/j.1365-8711.2003.06156.x [Zhang et al.(2019)]Zhang19 Zhang, X., Lu, Y., & Liu, Z. 2019, https://ui.adsabs.harvard.edu/abs/2019ApJ...877..143Z , 877, 143. doi:10.3847/1538-4357/ab1d48
http://arxiv.org/abs/2306.12069v1
20230621073427
Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension
[ "Jialin Chen", "Zhuosheng Zhang", "Hai Zhao" ]
cs.CL
[ "cs.CL", "cs.LG" ]
Machine reading comprehension (MRC) poses new challenges over logical reasoning, which aims to understand the implicit logical relations entailed in the given contexts and perform inference over them. Due to the complexity of logic, logical relations exist at different granularity levels. However, most existing methods of logical reasoning individually focus on either entity-aware or discourse-based information but ignore the hierarchical relations that may even have mutual effects. In this paper, we propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning, to provide a more fine-grained relation extraction. Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism to improve the interpretation of MRC systems. Experimental results on logical reasoning QA datasets (ReClor and LogiQA) and natural language inference datasets (SNLI and ANLI) show the effectiveness and generalization of our method, and in-depth analysis verifies its capability to understand complex logical relations. § INTRODUCTION Machine reading comprehension (MRC) is a challenging task that requires machines to answer a question according to given passages <cit.>. A variety of datasets have been introduced to push the development of MRC to a more complex and more comprehensive pattern, such as conversational MRC <cit.>, multi-hop MRC <cit.>, and commonsense reasoning <cit.>. In particular, some recent multi-choice MRC datasets pose even greater challenges to the logical reasoning ability of models <cit.> which are not easy for humans to do well, either. Firstly, all the supporting details needed for reasoning are provided by the context, which means there is no additional commonsense or available domain knowledge. Secondly, it is a task of answer selection rather than answer retrieval, which means the best answer is chosen according to their logical fit with the given context and the question, rather than retrieved directly from the context according to the similarity between answers and context. Most importantly, the relations entailed in the contexts are much more complex than that of previous MRC datasets owing to the complexity of logic, which is hard to define and formulate. 
Without a targeted design for those challenges, existing pre-trained models, e.g., BERT, RoBERTa, fail to perform well in such kind of logical reading comprehension systems <cit.>. Logical reasoning MRC tasks are usually to find an appropriate answer, given a set of context and question. Figure <ref> shows an example from ReClor dataset <cit.> which requires logical reasoning ability to make the correct predictions. As humans, to solve such problems, we usually go through the following steps. Firstly, we divide the context into several fragments and figure out the logical relations between each clause, such as transition, continuity, contrast, etc. Secondly, we extract the important elements in the context, namely, the objects and topics described by the context, and construct the logical graph with these significant elements. Finally, we need to compare the answer statement to the mentioned part in the context and assess its logical fit with the given context. Most existing methods of logical reasoning MRC focus on either entity-aware or discourse-based information but ignore the hierarchical relations that may have mutual effects <cit.>. Motivated by the observation above, we model logical reasoning chains based on a newly proposed holistic graph network (HGN) that incorporates the information of element discourse units (EDU) <cit.> and key phrases (KPH) extracted from context and answer, with effective edge connection rules to learn both hierarchical features and interactions between different granularity levels. Our contributions are summarized as follows. (1) We design an extraction algorithm to extract EDU and KPH elements as the critical basic for logical reasoning. (2) We propose a novel holistic graph network (HGN) to deal with context at both discourse and word level with hierarchical interaction mechanism that yields logic-aware representation for reasoning. (3) Experimental results show our model's strong performance improvements over baselines, across multiple datasets on logical reasoning QA and NLI tasks. The analysis demonstrates that our model has a good generalization and transferability, and achieves higher accuracy with less training data. § METHODOLOGY Logical reasoning MRC tasks aim to find the best answer among several given options based on a piece of context that entails logical relations. Formally, given a natural language context C, a question Q, and four potential answers A={A_1,A_2,A_3,A_4}. We concatenate them as {C,Q,A_i} pairs. To incorporate the principle of human inference into our method, we propose a holistic graph network (HGN) as shown in Figure <ref>. Our model works as follows. First, we use EDU and KPH extraction algorithm to get necessary KPH nodes ({P_j}) and EDU nodes ({E_j}) from the given pairs. They contain information with different granularity levels and complement each other. Based on the extracted KPH-EDU interaction information and pre-defined rules, we construct the holistic graph. The process of constructing the holistic graph is shown in Figure <ref>. Then we measure the interaction between {E_j} and {P_j} to obtain logic-aware representations for reasoning. §.§ Logical Chain Construction Element Discourse Units (EDU) We use clause-like text spans delimited by logical relations to construct the rhetorical structure of texts. These clause-like discourses can be regarded as element units that reveal the overall logic and emotional tone of the text. 
For example, conjunctions like "” indicate a causal relation which means the following discourse is likely to be the conclusion we need to pay attention to. Parenthesis and clauses like "" in Figure <ref> play a complementary role in context. Also, punctuation indicates a pause or an end of a sentence, containing semantic transition and turning point implicitly. We use an open segmentation tool, SEGBOT <cit.>, to identify the element discourse units (EDUs) from the concatenation of and , ignoring the question whose structure is simple. Conjunctions (e.g., "", ""), punctuation and the beginning of parenthesis and clauses (e.g., "", "") are usually the segment points. They are considered as explicit discourse-level logical relations. To get the initial embedding of EDUs, we insert an external symbol at the start of each discourse, and add a symbol at the end of every type of inputs. Then we use RoBERTa to encode the concatenated tokens. The encoded token represents the following EDU. Therefore, we get the initial embedding of EDUs. Key Phrase (KPH) Key Phrases, including keywords here, play an important role in context. They are usually the object and principle of a context. We use the sliding window to generate n-gram word list, filtering according to the Stopword list, POS tagging, the length of the word, and whether it contains any number.[The stop list is derived by the open-source toolkit Gensim: <https://radimrehurek.com/gensim/>. The POS tagging is derived by the open-source toolkit NLTK: <https://www.nltk.org/>.] The filtering process is based on the following two main criteria: (1) If the n-gram contains a stop word or a number, then delete it. (2) If the length of word is less than the threshold value m, delete it, and if the n-gram length is 1, then only the noun, verb, and adjective are retained. Then, we calculate the TF-IDF features of each n-gram, and select the top-k n-gram as key phrases. k is a hyper-parameter to control the number of KPHs. We restore the selected tokens and retrieve the original expressions containing the key phrase from the original text. For example, as in Figure <ref>, "" is one of the KPHs, while we retrieve the original expression "" and "" from the original text. [The complete algorithm is given in Appendix <ref>.] Given the token embedding sequence K_i = {t_1,…,t_n} of a KPH with length n, its initial embedding is obtained by P_i = 1/|K_i|∑_t_l∈ K_it_l. Holistic Graph Construction Formally, every input sample is a triplet that consists of a context, a question and a candidate answer. EDU and KPH nodes are extracted in the above way. As shown in Figure <ref>, we construct a holistic graph with two types of nodes: EDU Nodes (in blue) and KPH Nodes (in green). For edge connections, there are four distinct types of edges between pairs of nodes. ∙ EDU-EDU continue: the two nodes are contextually associated in the context and answer. This type of edge is directional. ∙ EDU-EDU overlap: the two nodes contain the same KPH. This type of edge is bidirectional. ∙ EDU-KPH mention: the EDU mentions the KPH. This type of edge is bidirectional. ∙ KPH-KPH relate: the two nodes are semantically related. We define two types of semantic relations. One is that the two KPHs are retrieved by the same n-gram as described above. The other one is that the Cosine similarity between the two KPH nodes is greater than a threshold. This type of edge is bidirectional and can capture the information of word pairs like synonyms and antonyms. 
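Since the construction rules are purely heuristic, they are easy to sketch in code. The following is a minimal illustration rather than the released implementation: it assumes the EDU segmentation and KPH retrieval have already been performed, treats a "mention" as a case-insensitive substring match, groups KPHs retrieved from the same n-gram by a shared group id, and uses the cosine-similarity threshold of 0.5 reported in the appendix. All function and variable names are ours.

```python
import itertools
import networkx as nx
import numpy as np

def build_holistic_graph(edus, kphs, kph_groups, kph_vecs, sim_threshold=0.5):
    """Assemble the EDU/KPH graph with the four edge types described above.

    edus:       list of EDU strings in order of appearance (context + answer)
    kphs:       list of retrieved key-phrase strings
    kph_groups: group id per KPH (KPHs retrieved by the same n-gram share an id)
    kph_vecs:   (num_kph, d) array of KPH embeddings for the similarity test
    """
    g = nx.MultiDiGraph()
    for i, e in enumerate(edus):
        g.add_node(("EDU", i), text=e)
    for j, p in enumerate(kphs):
        g.add_node(("KPH", j), text=p)

    # EDU-EDU "continue": consecutive discourse units (directional)
    for i in range(len(edus) - 1):
        g.add_edge(("EDU", i), ("EDU", i + 1), etype="continue")

    # EDU-KPH "mention" and EDU-EDU "overlap" (both bidirectional)
    mentions = {i: {j for j, p in enumerate(kphs) if p.lower() in e.lower()}
                for i, e in enumerate(edus)}
    for i, js in mentions.items():
        for j in js:
            g.add_edge(("EDU", i), ("KPH", j), etype="mention")
            g.add_edge(("KPH", j), ("EDU", i), etype="mention")
    for i1, i2 in itertools.combinations(range(len(edus)), 2):
        if mentions[i1] & mentions[i2]:
            g.add_edge(("EDU", i1), ("EDU", i2), etype="overlap")
            g.add_edge(("EDU", i2), ("EDU", i1), etype="overlap")

    # KPH-KPH "relate": same source n-gram, or cosine similarity above threshold
    for j1, j2 in itertools.combinations(range(len(kphs)), 2):
        v1, v2 = kph_vecs[j1], kph_vecs[j2]
        cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8))
        if kph_groups[j1] == kph_groups[j2] or cos > sim_threshold:
            g.add_edge(("KPH", j1), ("KPH", j2), etype="relate")
            g.add_edge(("KPH", j2), ("KPH", j1), etype="relate")
    return g
```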
The construction of the graph is based on intuitive rules, which will not introduce extra parameters or increase model complexity. A further parameter comparison is given in Table <ref>. §.§ Hierarchical Interaction Mechanism Considering a specific node in the holistic graph, neighboring nodes in the same type may carry more salient information, thus affecting each other in a direct way. In the process, the neighboring nodes in the different types may also interact with each other. To capture both the node-level and type-level attention, we apply a Hierarchical Interaction Mechanism to the update of the graph network's representations. Graph Preliminary Formally, consider a graph G={V,E}, where V and E represent the sets of nodes and edges respectively. A is the adjacency matrix of the graph. A_ij>0 means there is an edge from the i-th node to the j-th node. We introduce A'=A + I to take self-attention into account. In order to avoid changing the original distribution of the feature when multiplying with the adjacency matrix, we normalize A', set à = D^-1/2A'D^-1/2 where D is the degree matrix of the graph. D = diag{d_1,d_2…,d_n}, d_i is the number of edges attached to the i-th node. Now, we calculate the attention score from node v' to node v in the following steps. Type Attention Vector We use T(τ) to represent all nodes that belong to type τ, and N(v) to represent all neighboring nodes that are adjacent to v. T is the set of types. Assume that node v belongs to T(τ), h_μ is the feature of node μ, h_τ is the feature of type τ which is computed by h_τ = ∑_μ∈ T(τ)Ã_v μW h_μ. Using the feature of type and node v, we compute the attention score of type τ as: e_τ = σ(μ_τ^T·[Wh_v∥ W_τh_τ]). Then, type-level attention weights α_τ is obtained by normalizing the attention scores across all the types T with the softmax function. σ is an activate function such as leaky-ReLU. α_τ = (σ(μ_τ^T·[Wh_v∥ W_τh_τ]))/∑_τ' ∈ T(σ(μ_τ'^T·[Wh_v∥ W_τh_τ'])). Node Attention Vector α_τ shows the importance of nodes in type τ to node v. While computing the attention score of node v' that is adjacent to node v, we multiply that by the type attention weights α_τ (assume v' belongs to type τ). Similarly, node attention weights are obtained by the softmax function across all neighboring nodes. e_vv' = σ(ν^T·α_τ[Wh_v∥ Wh_v']), α_vv'=(e_vv')/∑_i∈ N(v)(e_vi), where ∥ is the concatenation operator and α_vv' is the attention weight from node v' to v. Update of Node Representation Let h_v^(l) be the representation of the node v at the l-th layer. Then the layer-wise propagation rule is as follows: h_v^(l+1) = σ(∑_v'∈ N(v)α_vv'Wh_v'^(l)). §.§ Answer Selector To predict the best answer that fits the logic entailed in the context, we extract the node representations of the last layer of the graph network and feed them into the downstream predictor. For EDU nodes, since the node order implies the occurrence order in the context, we align them with the output of sequence embedding and add to it as a residual part. Therefore, we feed them into a bidirectional gating recurrent unit (BiGRU). H̃_E = (H_E+H_sent)∈ℝ^l× d, where H_E = [h_v'_1,h_v'_2,…,h_v'_l]∈ℝ^l× d, v'_i belongs to type EDU. l and d are the sequence length and the feature dimension respectively. H_sent is the output of sequence embedding. For KPH nodes, we first expand the embedding of the first token to size 1× d, denoted as H_c. 
Then, we feed the embedding of token and features of KPH nodes H_K = [h_v_1,h_v_2,…,h_v_n]∈ℝ^n× d (v_i is of KPH type) into an attention layer. α_i =w_α^T[H_c∥ h_v_i]+b_α∈ℝ^1, α̃_̃ĩ =(α_i) ∈ [0,1], H̃_c =W_c∑_iα̃_̃ĩh_v_i+b_c ∈ℝ^1× d, where α̃_̃ĩ is the attention weight of node feature h_v_i. w_α, b_α, W_c, and b_c are parameters. The output of BiGRU and the output of attention layer are concatenated and go through a pooling layer, followed by an MLP layer as the predictor. We take a weighted sum of the concatenation as the pooling operation. The predictor is a two-layer MLP with a tanh activation. Specially, coarse-grained and fine-grained features are further fused here to extract more information. H̃ = W_p[H̃_E∥H̃_c], p =(H̃)∈ℝ, where W_p is a learnable parameter, ∥ is the concatenation operator. For each sample, we get P=[p_1,p_2,p_3,p_4], p_i is the probability of i-th answer predicted by model. The training objective is the cross entropy loss: ℒ = -1/N∑_i^Nlog(p_y_i), where y_i is the ground-truth choice of sample i. N is the number of samples. § EXPERIMENT §.§ Dataset Our evaluation is based on logical reasoning MRC benchmarks (ReClor <cit.> and LogiQA <cit.>) and natural language inference benchmarks (SNLI <cit.> and ANLI <cit.>). ReClor contains 6,138 multiple-choice questions modified from standardized tests. LogiQA has more instances (8678 in total) and is derived from expert-written questions for testing human logical reasoning ability <cit.>. To assess the generalization of models on NLI tasks, we test our model on the Stanford Natural Language Inference (SNLI) dataset, which contains 570k human annotated sentence pairs. The Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset, where the instances are chosen to be difficult for the state-of-the-art models such as BERT and RoBERTa. It can be used to evaluate the generalization and robustness of the model.[The statistics information of these datasets are given in Appendix <ref>.] Implementation details and parameter selection are reported in Appendix <ref> for reproduction.[Our source codes is available at <https://github.com/Cather-Chen/Logical-Reasoning-Graph>.] §.§ Main Result §.§.§ Results on Logical QA Table <ref> presents the detailed results on the development set and the test set of both ReClor and LogiQA datasets. We observe consistent improvements over the baselines. HGN_RoBERTa(B) reaches 51.4% of test accuracy on ReClor, and 35.0% of test accuracy on LogiQA, outperforming other existing models. HGN_RoBERTa(L) reaches 58.7% of test accuracy on ReClor, therein 77.7% on Easy subset and 43.8% on Hard subset, and 39.9% on LogiQA. HGN_DeBERTa achieves 72.3% on the test set of ReClor and 44.2% on LogiQA. If using the same pre-trained language models as the backbones, our proposed model achieves the state-of-the-art results on both ReClor and LogiQA, without extra human annotations. Our model shows great improvement over this task by better utilizing the interaction information , which is ignored by most existing methods. §.§.§ Results on general NLI tasks To verify the generality of our model, we conduct experiments on two widely used entailment datasets for NLI: SNLI and ANLI, in which existing models rarely emphasized the modeling of logical relations. Table <ref> compares the performances of HGN and baseline models on the SNLI dataset with the same proportion of training data for finetuning. 
We observe that when given a limited number of training data, our HGN has faster adaptation than baseline models as evidenced by higher performances in low-resource regimes (e.g., 0.1%, 1%, and 10% of the training data used). HGN also outperforms BERT_base by 0.3% and RoBERTa_large by 0.5% on the full SNLI. We assess the model’s robustness against adversarial attacks, using a standard adversarial NLP benchmark: ANLI, as shown in Table <ref>. A1, A2 and A3 are three rounds with increasing difficulty and data size. ANLI refers to the combination of A1, A2 and A3. HGN_RoBERTa(L) gains a 15.2% points in test accuracy of ANLI over RoBERTa_large, creating state-of-the-art results on all rounds. Results show that our model has a comprehensive improvement over baseline models, in aspects of faster adaption, higher accuracy and better robustness. §.§ More Results Interpretation of k In this part, we investigate the sensitivity of parameter k, which is the number of KPH node. Figure <ref> shows the accuracies on the development set of our proposed model with different numbers of KPH nodes, which are extracted according to TF-IDF weights. We observe that k=2 or k=3 is an appropriate value for our model. This is consistent with our intuition that a paragraph will have 2 to 3 key phrases as its topic. When k is too small or large, the accuracy of the model does not perform well. Model Complexity With well-defined construction rules and an appropriate architecture, our model enjoys the advantage of high performance with fewer parameters. We display the statistics of model's parameters in Table <ref>. Compared with the baseline model (RoBERTa_large), the increase of our model's parameters is no more than 4.7%. Particularly, our model contains fewer parameters and achieves better performance than DAGN. §.§ Ablation We conduct a series of ablation studies on Graph Construction, Hierarchical Interaction Mechanism and Answer Selector. Results are shown in Table <ref>. All models use RoBERTa-base as the backbone. Holistic Graph Construction The Holistic Graph in our model contains two types of nodes and four types of edges. We remove the nodes of EDU and KPH respectively and the results show that the removal hurts the performance badly. The accuracies drop to 55.8% and 53.9%. Furthermore, we delete one type of edge respectively. The removal of edge type destroys the integrity of the network and may ignore some essential interaction information between EDUs and KPHs, thus causing the drop of the performance. Hierarchical Interaction Mechanism Hierarchical Interaction Mechanism helps to capture the information contained in different node types. When we remove the type-level attention, the model is equivalent to a normal Graph Attention Network (GAT), ignoring the heterogeneous information. As a result, the performance drops to 54.8%. When we remove both types of attention, the performance drops to 55.7%. Answer Selector We make two changes to the answer selector module: (1) deleting the BiGRU, (2) deleting the attention layer. For (1), the output of EDU features concatenates with the output of the attention layer directly and then are fed into the downstream pooling layer. For (2), we ignore the attention between the KPH features and the whole sentence-level features. The resulting accuracies of (1) and (2) drop to 53.2% and 55%, which verify that the further fusion of features with different granularity is necessary in our proposed model. 
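To make the ablated components concrete, the type-level and node-level attention of the hierarchical interaction mechanism can be sketched as a single propagation layer as follows. This is a simplified NumPy illustration under our own naming and shape conventions (single head, untrained weights, tanh as the output activation), not the authors' implementation; dropping the type weights α_τ here recovers an ordinary GAT-style update, which is exactly the first ablation above.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hgn_layer(H, A_tilde, node_type, W, W_tau, mu, nu):
    """One hierarchical-interaction propagation layer.

    H         : (n, d) node features
    A_tilde   : (n, n) degree-normalized adjacency with self-loops
    node_type : (n,) integer type labels (e.g. 0 = EDU, 1 = KPH)
    W, W_tau  : (d, d) projections; mu : (num_types, 2d) type vectors; nu : (2d,)
    """
    n, d = H.shape
    types = list(np.unique(node_type))
    H_new = np.zeros_like(H)
    for v in range(n):
        # type-level attention: aggregate each type, then score it against node v
        scores = []
        for k, t in enumerate(types):
            members = np.where(node_type == t)[0]
            h_tau = (A_tilde[v, members, None] * (H[members] @ W.T)).sum(axis=0)
            scores.append(leaky_relu(mu[k] @ np.concatenate([W @ H[v], W_tau @ h_tau])))
        alpha_tau = softmax(np.array(scores))

        # node-level attention over neighbours, scaled by the type weight
        nbrs = np.where(A_tilde[v] > 0)[0]
        e = np.array([
            leaky_relu(nu @ (alpha_tau[types.index(node_type[u])] *
                             np.concatenate([W @ H[v], W @ H[u]])))
            for u in nbrs])
        a = softmax(e)
        H_new[v] = np.tanh((a[:, None] * (H[nbrs] @ W.T)).sum(axis=0))
    return H_new
```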
We further analysed the examples that are predicted correctly by our model but not by baselines, and found that the powerful pre-trained language models, such as RoBERTa, would bias for answers with higher similarity to the context or those containing more overlapping words. The model itself does not understand the logical relations, but only compares their common elements for prediction. Instead, our model can not only match synonymic expressions, but also make logical inferences by separating sentences into EDUs and extracting key phrases and establishing logical relations between them. An example is shown in Appendix <ref>. § RELATED WORK §.§ Machine Reading Comprehension MRC is an AI challenge that requires machines to answer questions based on a given passage, which has aroused great research interests in the last decade <cit.>. Although recent systems have reported human-parity performance on various benchmarks <cit.> such as SQuAD <cit.> and RACE <cit.>, whether the machine has necessarily achieved human-level understanding remains controversial <cit.>. Recently, there is increasing interest in improving machines' logical reasoning ability, which can be categorized into symbolic approaches and neural approaches. Notably, analytical reasoning machine (AMR) <cit.> is a typical symbolic method that injects human prior knowledge to deduce legitimate solutions. §.§ Logical Reasoning Neural and symbolic methods have been studied for logical reasoning <cit.>. Compared with the neural methods for logical reasoning, symbolic approaches like <cit.> rely heavily on dataset-related predefined patterns which entails massive manual labor, greatly reducing the generalizability of models. Also, it could introduce propagated errors since the final prediction depends on the intermediately generated functions. Even if one finds the gold programs, executing the program is quite a consuming work as the search space is quite large and not easy to prune. Therefore, we focus on the neural research line in this work, to capture the logic clues from the natural language texts, without the rely on human expertise and extra annotation. Since the logical reasoning MRC task is a new task that there are only a few latest studies, we broaden the discussion to scope of the related tasks that require reasoning, such as commonsense reasoning <cit.>, multi-hop QA <cit.> and dialogue reasoning <cit.>. Similar to our approach of discovering reasoning chains between element discourse and key phrases, <cit.> proposes a hierarchical graph network (HGN) that helps to multi-hop QA. Our method instead avoids the incorporation of external knowledge and designs the specific pattern for logical reasoning. Discourse-aware graph network (DAGN) proposed by <cit.> also uses discourse relations to help logical reasoning. However, only modeling the relation between sentences will ignore more fine-grained information. Focal Reasoner proposed by <cit.>, covering global and local knowledge as the basis for logic reasoning, is also an effective approach. In contrast, our work is more heuristic and has a lighter architecture. Previous approaches commonly consider the entity-level, sentence-level relations, or heavily rely on external knowledge and fail to capture important interaction information, which are obviously not sufficient to solve the problem <cit.>. Instead, we take advantages of inter-sentence EDUs and intra-sentence KPHs, to construct hierarchical interactions for reasoning. 
The fine-grained holistic features are used for measuring the logical fitness of the candidate answers and the given context. As our method enjoys the benefits of modeling reasoning chains from riddled texts, our model can be easily extended to other types of reasoning and inference tasks, especially where the given context has complex discourse structure and logical relations, like DialogQA, multi-hop QA and other more general NLI tasks. We left all the easy empirical verification of our method as future work. § CONCLUSION This paper presents a novel method to guide the MRC model to better perform logical reasoning tasks. We propose a holistic graph-based system to model hierarchical logical reasoning chains. To our best knowledge, we are the first to deal with context at both discourse level and phrase level as the basis for logical reasoning. To decouple the interaction between the node features and type features, we apply hierarchical interaction mechanism to yield the appropriate representation for reading comprehension. On the logical QA benchmarks (ReClor, LogiQA) and natural language inference benchmarks (SNLI and ANLI), our proposed model has been shown effective by significantly outperforming the strong baselines. acl_natbib § KPHS EXTRACTION ALGORITHM § DATASET INFORMATION ReClor The Reading Comprehension dataset requiring logical reasoning (ReClor) is extracted from standardized graduate admission examinations <cit.>. It contains 6,138 multiple-choice questions modified from standardized tests such as GMAT and LSAT and is randomly split into train/dev/test sets with 4,638/500/1,000 samples respectively. Multiple types of logical reasoning question are included. LogiQA LogiQA is sourced from expert-written questions for testing human Logical reasoning. It contains 8,678 QA pairs, covering multiple types of deductive reasoning. It is randomly split into train/dev/test sets with 7,376/651/651 samples respectively. SNLI The Stanford Natural Language Inference (SNLI) dataset contains 570k human annotated sentence pairs, in which the premises are drawn from the captions of the Flickr30 corpus and hypotheses are manually annotated. The full dataset is randomly split into 549k/9.8k/9.8k. This is the most widely used entailment dataset for natural language inference. It requires models to take a pair of sentence as input and classify their relation types, i.e., entailment,neutral, or contradiction. ANLI The Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. Specifically, the instances are chosen to be difficult for the state-of-the-art models such as BERT and RoBERTa. A1, A2 and A3 are the datasets collected in three rounds. A1 and A2 are sampled from Wiki and A3 is from News. It requires models to take a set of context, hyperthesis and reason classify the label (entailment,neutral, or contradiction). A1 has 18,946 in total and is split into 16,946/1,000/1,000. A2 has 47,460 in total and is split into 45,460/1,000/1,000. A3 has 102,859 in total and is split into 100,459/1,200/1,200. ANLI refers to the combination of A1, A2 and A3. § PARAMETER SELECTION Our model is implemented based on the Transformers Library <cit.>. Adam <cit.> is used as our optimizer. The best threshold for defining semantic relevance is 0.5. We run 10 epochs for ReClor and LogiQA, 5 epochs for SNLI and ANLI, and select the model that achieves the best result in validation. 
Our models are trained on a single 32 GB NVIDIA Tesla V100 GPU. The training time is around half an hour per epoch. The maximum sequence length is 256 for ReClor and SNLI, 384 for LogiQA and 128 for ANLI. The weight decay is 0.01. We set the warm-up proportion during training to 0.1. We provide the training configurations used across our experiments in Table <ref>. § CASE STUDY To intuitively show how our model works, we select an example from ReClor, shown in Figure <ref>, whose answer is predicted correctly by our model but not by the baseline model (RoBERTa). The example shows that powerful pre-trained language models such as RoBERTa may be better at dealing with sentence pairs that contain overlapping parts or similar words. For example, the wrong answer chosen by RoBERTa is another expression of the first sentence in the given context: the words are basically the same, and only the order changes. The model itself does not understand the logical relation between sentences and phrases, but only compares their common elements for prediction, and therefore fails at the logical reasoning task. In contrast, our model can not only match synonymic expressions, but also make logical inferences by separating sentences into EDUs, extracting key phrases, and establishing logical relations between them. The importance of these elements is illustrated by the attention distribution shown in the right part of the figure, which is derived from the last layer of our model.
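For quick reference, the training settings reported in this appendix can be collected into a single configuration object. This is only an illustrative sketch; the key names are ours and do not correspond to the authors' released code or to Table <ref>.

```python
# Illustrative summary of the reported hyperparameters; key names are ours.
TRAIN_CONFIG = {
    "optimizer": "Adam",
    "semantic_relevance_threshold": 0.5,  # cutoff used to define semantic relevance
    "epochs": {"ReClor": 10, "LogiQA": 10, "SNLI": 5, "ANLI": 5},
    "max_seq_length": {"ReClor": 256, "SNLI": 256, "LogiQA": 384, "ANLI": 128},
    "weight_decay": 0.01,
    "warmup_proportion": 0.1,
    "gpu": "NVIDIA Tesla V100 (32 GB)",
}
```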
http://arxiv.org/abs/2306.04518v1
20230607152912
Optimal sensor placement for reconstructing wind pressure field around buildings using compressed sensing
[ "Xihaier Luo", "Ahsan Kareem", "Shinjae Yoo" ]
physics.flu-dyn
[ "physics.flu-dyn", "cs.LG" ]
label1]Xihaier Luocor1 [email protected] label2]Ahsan Kareem label1]Shinjae Yoo [cor1]Corresponding author. [label1]Computational Science Initiative, Brookhaven National Laboratory, Upton, NY 11973, United States. [label2]NatHaz Modeling Laboratory, University of Notre Dame, Notre Dame, IN 46556, United States. Deciding how to optimally deploy sensors in a large, complex, and spatially extended structure is critical to ensure that the surface pressure field is accurately captured for subsequent analysis and design. In some cases, reconstruction of missing data is required in downstream tasks such as the development of digital twins. This paper presents a data-driven sparse sensor selection algorithm, aiming to provide the most information contents for reconstructing aerodynamic characteristics of wind pressures over tall building structures parsimoniously. The algorithm first fits a set of basis functions to the training data, then applies a computationally efficient QR algorithm that ranks existing pressure sensors in order of importance based on the state reconstruction to this tailored basis. The findings of this study show that the proposed algorithm successfully reconstructs the aerodynamic characteristics of tall buildings from sparse measurement locations, generating stable and optimal solutions across a range of conditions. As a result, this study serves as a promising first step toward leveraging the success of data-driven and machine learning algorithms to supplement traditional genetic algorithms currently used in wind engineering. Sensor placement Compressed sensing Pressure measurements Wind pressure field reconstruction § INTRODUCTION As the world's population shifts to cities, the demand for high-rise buildings is rapidly increasing, as seen in the Pacific Rim region. Tall buildings exposed to wind experience wind-induced loads that create pressure on the building envelope, and their integral effects cause the structure to move in the dominant directions, namely along-wind, across-wind, and torsional <cit.>. The description of the pressure field around a building does not lend itself to a simple functional relationship with approach flow turbulence. As a result, calls for reliance on wind tunnel experiments have been made. These tests rely heavily on pressure taps connected to pressure sensors to monitor pressure fields over the building surface. A basic question is where to deploy available sensors to accurately predict and reconstruct the structure of a wind pressure field from limited and noisy sensor outputs. In fact, the optimal sensor placement problem has garnered considerable attention for a long time, as fast data acquisition, analysis, and decision in high-performance control for complex systems can be archived with a small number of measurements at limited locations. In practice, the best locations for sensors in regular structures with simple geometry and a small number of degrees of freedom can be determined empirically using engineering judgment and past experience. However, for a complicated large-scale structure, a systematic and efficient approach is required because the solution space is far beyond the capabilities of hand calculation <cit.>. Mathematically, the goal is to find m positions from a set of n positions that maximize the information about the behaviors of a structural system: c=n !/m !(n-m) ! where n denotes the number of candidate positions, and m denotes the number of sensors to be placed. 
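To get a feel for the size of this search space, the count c can be evaluated directly; the short sketch below (ours, with illustrative values of n and m) uses Python's exact binomial coefficient.

```python
import math

def num_candidate_placements(n: int, m: int) -> int:
    """Number of ways to choose m sensor positions from n candidates, c = n!/(m!(n-m)!)."""
    return math.comb(n, m)

# Illustrative values: 50 sensors among 500 candidate pressure taps.
c = num_candidate_placements(500, 50)
print(f"c = {c:.2e}")  # on the order of 1e69 layouts, far beyond any exhaustive search
```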
It is worth noting that m in equation <ref> is not always a constant; rather, m is often a parameter to be optimized (most of the time to be minimized) to minimize the total cost of sensors and their installation in a model's physically limited space while still providing adequate information on structural behaviors. The current practice is based on wind tunnel facility experience, yet c can be excessively huge for a typical modern tall building. Because a wind tunnel building model cannot be fully covered with sensors, a trade-off between needed information and expense must be made, necessitating automation in sensor position selection. This difficulty is exacerbated when attempting to locate a sensor in the presence of several wind-approaching angles. According to the literature, sensor locations are routinely chosen based on heuristics and intuition. For example, Yao et al. modified the parent selection and children reproduction schemes of genetic algorithms (GA) and applied them to optimal sensor placement on a large space structure for modal identification <cit.>. Later, Liu et al. replaced the existing binary coding-based GA with the decimal two-dimension array coding method, resulting in a reduction in computational iterations <cit.>. Another popular alternative is the Monkey algorithm (MA). For instance, Yi et al. developed an automatic technique for adjusting the MA's climb and watch–jump processes and applied it to the health monitoring of high-rise structures <cit.>. Despite providing quick solutions to planning and scheduling problems, heuristic algorithms such as GA and MA do not always provide the best solution when compared to traditional optimization algorithms <cit.>. Their performance is highly dependent on the algorithmic parameters. Finding an effective set of parameters and iteration stopping criteria is case-dependent and difficult. This study focuses on a data-driven approach for reconstructing tall building aerodynamic characteristics. In comparison to existing sensor placement algorithms, the proposed approach makes use of cutting-edge decomposition-based sensing techniques, making it very efficient and effective in exploring a large space of candidate placements. A thorough investigation revealed that the proposed method not only scales well in terms of both the dimension of the wind pressure measurements and the number of sensors, but also delivers stable solutions in a wide range of wind conditions, such as different building features and wind attack angles. § METHODS §.§ Compressed sensing Many natural signals are highly compressible, which means they can be well-approximated by a small number of non-zero coefficients in an appropriate basis <cit.>. A compressible signal 𝐱∈ℝ^n (e.g., wind pressure distribution) can be written mathematically as a sparse vector 𝐬∈ℝ^n on a new orthonormal basis of Ψ∈ℝ^n × n such that: 𝐱=Ψ𝐬 If 𝐬 in <ref> is a linear combination of only k basis vectors, it is called k-sparse (exactly k non-zero elements). Compressed sensing theory makes use of this idea in order to infer the sparse representations in a known transformed basis system Ψ, where k ≪ n. 
Without loss of generality, consider a set of observations 𝐲∈ℝ^p (e.g., wind tunnel measurements) and an observation matrix 𝐂∈ℝ^p × n that satisfy 𝐲=𝐂𝐱=(𝐂 Ψ) 𝐬=Θ𝐬 If 𝐱 is sufficiently sparse in Ψ and the matrix Θ follows the principle of restricted isometry, the search for 𝐬 and the reconstruction of 𝐱 can be expressed in an optimization format 𝐬=min _𝐬^'𝐬^'_0 such that 𝐲=Θ𝐬^' <ref> entails a difficult combinatorial search. In practice, the goal of compressed sensing is to find the l_1-norm of a sparsest vector that is consistent with 𝐲 𝐬=min _𝐬^'𝐬^'_1 such that 𝐲=Θ𝐬^' where 𝐬_1=∑_k=1^n|s_k|. By doing so, the combinatorially difficult problem in the nonconvex l_0 minimization is bypassed by relaxing to a convex l_1 minimization <cit.>. §.§ Data-driven sparse sensor placement While compressed sensing employs random measurements to reconstruct high-dimensional unknown data from a universal basis Ψ∈ℝ^n × n, data-driven sparse sensor placement, as discussed in this paper, collects available information about a signal from observed samples to build a tailored basis Ψ_r ∈ℝ^n × r for the respective signal and thus identify optimal sensor placements for low-loss reconstruction <cit.>. Herein, the wind pressure signal in <ref> can be rewritten as an unknown linear combination of basis coefficients: 𝐱=∑_k=1^rψ_k a_k=Ψ_r𝐚 where vector 𝐚∈ℝ^r represents mode amplitudes of 𝐱 in basis Ψ. Similarly, the reduction process described in <ref> can be expressed as: 𝐲=𝐂 𝐱=(𝐂Ψ_r) 𝐚=Θ a The main challenge is to design an incoherent measurement matrix 𝐂 that allows for the identification of the optimal p observations 𝐲 for accurately reproducing the signal 𝐱. In other words, rows of 𝐂 not correlated with columns ψ of Ψ_r. The data-driven sparse sensor placement algorithm addresses this issue by solving 𝐂^⋆=𝐂∈ℝ^p × nmin|𝐱-Ψ(𝐂 Ψ)^†𝐲|_2^2 where † denotes the Moore-Penrose pseudoinverse. Mathematically, <ref> is the Moore-Penrose pseudoinverse of <ref>. It is assumed that 𝐂^⋆ is a mostly sparse subset selection operator composed of rows of the identity, with nonzero entries indicating the chosen measurements. Specifically, 𝐂 is constrained to have the following structure given a p sensor budget and n candidates state components: 𝐂=[[ 𝐞_γ_1^⊤ 𝐞_γ_2^⊤ ⋯ 𝐞_γ_p^⊤ ]] where 𝐞_γ_j^⊤ is the canonical basis vector with a unit entry in the j^th component and zeros everywhere else. In practice, 𝐞_γ_j^⊤ denotes the location of a given sensor. As a result, each row of 𝐂 observes from a single spatial location, which corresponds to the sensor location: 𝐲=𝐂 𝐱=[x_γ_1, x_γ_2, …, x_γ_p]^⊤ For a well-defined linear inverse problem, the index with cardinality |γ|=p and additionally number of sensors n ≥ r of Ψ is designated by γ∈ℕ^p <cit.>. §.§.§ Implementation note 1: Basis functions The measurement matrix is an important aspect of compressed sensing. In the context of data-driven sparse sensor placement, this means that the basis functions Ψ in <ref> will be replaced by Ψ_r, which is built from the training data 𝐗^tr. There are various approaches to building Ψ_r depending on the available information. Identity basis: Compressed sensing is ideal for recovering a high dimensional signal with unknown content by employing random measurements on a global scale. In other words, the raw measurement data is used directly and without modification: Ψ_r=𝐗^tr. Because there is no low-rank approximation of the data, there is no information loss. This, however, comes at the expense of a longer computation time. 
Random Projection basis: Even in cases with insufficient explicit information about the signal, the computation of the identity basis can be speeded up by projecting the input data onto a randomly generated matrix, that is, multiplying measurements with random Gaussian vectors to project them to a new space Ψ_r=𝐆𝐗^tr, where the entries 𝐆∈ℝ^2 p × m are drawn from a Gaussian density function with mean zero and variance 1/n. This basis is also known as the Random Projection basis <cit.>. SVD/POD basis: If information about the type of signal is available (for example, whether it is a turbulent velocity field or an image of a cat), it is possible to design optimized sensors that are tailored to the specific signals of interest. Dimensionality reduction techniques can be used to extract dominant features from a training data set of representative examples. These low-rank features, derived from data patterns, aid in the development of identifying optimal sensors locations. We used the singular value decomposition (SVD) in this study because of its numerical robustness and efficiency in extracting dominant patterns from low-dimension data <cit.>. SVD can also be used to solve the proper orthogonal decomposition (POD), which is widely used in wind data analysis and reconstruction. For a matrix 𝐗^tr∈ℂ^n × n, the SVD basis is computed as: 𝐗^tr=𝐔Σ𝐕^T=ΨΣ𝐕^T≈Ψ_rΣ_r𝐕_r^T where the matrices Ψ_r and 𝐕_r contain the first r columns of left and right singular vectors, respectively. In this study, we will examine and investigate the three types of basis functions mentioned above in the context of reconstructing aerodynamic wind pressure fields. §.§.§ Implementation note 2: QR pivoting The first implementation focuses on determining the best-fitting basis for the training data. The second implementation note we provided aims to use a sparse sensor selection algorithm that is both computationally efficient and flexible. To achieve the goal, the greedy matrix QR factorization with column pivoting is used to effectively determine the resulting optimal sensor locations that minimize the reconstruction error <cit.>. Specifically, QR factorization breaks a matrix 𝐀∈ℝ^m × n down into a unitary matrix 𝐐, an upper-triangular matrix 𝐑, and a column permutation matrix 𝐂 defined in <ref> such that 𝐀𝐂^T=𝐐 𝐑 QR column pivoting increases the volume of the submatrix built from the pivoted columns by choosing a new pivot column with the highest two-norm and then subtracting from each other column its orthogonal projection onto the pivot column. Pivoting increases the volume of the submatrix by enforcing a diagonal dominance structure. σ_i^2=|r_i i|^2≥∑_j=i^k|r_j k|^2 ; 1 ≤ i ≤ k ≤ m This works because the product of diagonal entries is also the product of matrix volume. |det𝐀|=∏_iσ_i=∏_i|r_i i| Also, the oversampled case p > r can be solved using Ψ_rΨ_r^T's pivoted QR factorization, with the column pivots chosen from n candidate state-space locations based on the observation that detΘ^TΘ=∏_i=1^rσ_i(ΘΘ^T) As a result, QR factorization combined with column pivoting produces r column indices (corresponding to sensor locations) that best sample the r basis modes (columns) Ψ_𝐫^T𝐂^T=𝐐 𝐑 Because the pivots columns represent the sensors, the QR-factorization produces a hierarchical list of all n pivots, with the first p pivots optimized for Ψ_r reconstruction. This means that all wind pressure taps are ranked based on their information content in the aerodynamic reconstruction based on the data. 
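The two implementation notes above translate into only a few lines of code. The sketch below is our own illustration, not the authors' implementation: it builds the rank-r SVD basis from training snapshots and ranks the candidate sensor locations with a pivoted QR factorization, covering both the p ≤ r case and the oversampled case.

```python
import numpy as np
from scipy.linalg import qr

def svd_basis(X_train: np.ndarray, r: int) -> np.ndarray:
    """Tailored basis Psi_r: first r left singular vectors of the training snapshots.
    X_train has shape (n_locations, n_snapshots)."""
    U, _, _ = np.linalg.svd(X_train, full_matrices=False)
    return U[:, :r]                                   # shape (n_locations, r)

def rank_sensors(Psi_r: np.ndarray, p: int) -> np.ndarray:
    """Rank candidate locations by pivoted QR and return the indices of the p best sensors."""
    r = Psi_r.shape[1]
    if p <= r:
        # Column pivoting on Psi_r^T ranks the n candidate rows of Psi_r.
        _, _, piv = qr(Psi_r.T, pivoting=True, mode="economic")
    else:
        # Oversampled case: pivot on Psi_r Psi_r^T instead.
        _, _, piv = qr(Psi_r @ Psi_r.T, pivoting=True, mode="economic")
    return piv[:p]                                    # tap indices, most informative first
```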
When spatial input data, such as interpolation or model results, is used, all gridded input data cells are ranked based on their information content. As a result, it is possible to make recommendations for the placement of additional sensors in areas with a high information content <cit.>. §.§ Method overview The goal of the proposed method is to estimate and reconstruct a detailed wind pressure field from limited and possibly noise-contaminated measurements. To achieve the goal, we first need to create the training data set and build the measurement operator C using the wind tunnel data. Following the preceding compressive sensing techniques, the solution provides the sparsest solution to an underdetermined linear problem. <ref> depicts compressed sensing graphically (<ref> and <ref>). Next, a tailored basis function for our training data is constructed using a singular value decomposition (<ref>). We next execute a QR decomposition with column pivoting to produce pivots corresponding to our chosen sensor positions (<ref>). Finally, a relaxed convex optimization problem is performed to estimate sparse coefficients (<ref>), ensuring that the reconstructed data is consistent with the noisy measurements. § EXPERIMENTAL DATABASE The Tokyo Polytechnic University (TPU) Wind Engineering Information Center provides a comprehensive wind pressure database derived from a series of wind tunnel tests on a wide range of buildings <cit.>. Specifically, the tests were carried out in a boundary layer wind tunnel with a test section 1.2 m wide by 1.0 m high, and the scaled model (1/400) had an identical cross-section with a width of 0.1 m. Turbulence-generating spires, roughness elements, and a carpet on the upstream floor of the wind tunnel's test section were used to simulate the atmospheric boundary layer. The database was built using various wind profiles. Most experiments were carried out under a power-law wind profile with an exponent of 1/4. The mean wind speed at the top of the building was 11 m/s and the turbulence intensity profiles is in accordance with the category III (suburban terrain). As shown in <ref>, a total of 500 pressure taps were used to collect data at a sampling frequency of 1000 Hz for a sample period of 32.768 secs. Pressure taps were evenly distributed on building surfaces with 0.02 m row spacing and 5 columns on each face, with model dimensions of 100 mm × 100 mm × 500 mm (breadth × depth × height). § RESULTS AND DISCUSSION §.§ Different choices of basis It is critical to choose a good basis for accurately reconstructing the aerodynamics of wind-excited tall buildings. In this section, we start with specifying a set of parameter values to test for the aforementioned three types of tailored basis (identity basis, random projection, and SVD basis). Because the number of sensors and basis modes interacts, it is necessary to independently determine the effects of different types of the tailored basis on these two factors <cit.>. As a result, we designed two experiments. In the first set, we kept the number of sensors constant at 125 and investigate how the number of basis modes used affects the reconstruction error. In the second set, the reconstruction error for a fixed number of basis modes (125) as the number of sensors varies is investigated. The computing results (<ref>) show that the Identity basis performs the worst, while the SVD basis consistently outperforms the other two types of basis. 
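Complementing the method overview above, the final reconstruction step can be sketched as a least-squares fit of the basis coefficients from the selected sensor readings; an ℓ1 solver can be substituted when a sparser estimate is desired. The helper below is our own illustration with assumed variable names.

```python
import numpy as np

def reconstruct(y: np.ndarray, sensor_idx: np.ndarray, Psi_r: np.ndarray) -> np.ndarray:
    """Estimate the full pressure field x from p sensor readings y taken at sensor_idx.

    Computes a = (C Psi_r)^+ y and returns x_hat = Psi_r a, the least-squares
    analogue of the relaxed optimization step in the method overview.
    """
    Theta = Psi_r[sensor_idx, :]          # C Psi_r: rows of the basis at the sensor locations
    a = np.linalg.pinv(Theta) @ y         # Moore-Penrose pseudoinverse
    return Psi_r @ a
```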
Meanwhile, in these experiments, we have five reduction stages: 20%, 40%, 60%, 80%, and 100%. The reduction ratio is calculated using the total number of installed sensors, i.e., 125. In general, as the number of sensors and basis modes increases, the reconstruction error decreases. Though the presented results are based on windward data, similar results have been observed in the other three building faces. For consistency reasons and based on the grid search results, the SVD basis will be used as the default basis in the following experiments. §.§ Varying the number of sensors/basis modes Choosing the fewest sensors and basis modes for reconstruction is an essential requirement of an optimal sensor placement. Because the proposed data-driven sensor placement optimization relies on low-rank structure in the data and involves an inherent trade-off between the number of sensors/basis modes and reconstruction accuracy. <ref> and <ref> show the results of experiments with varying numbers of basis modes and sensors. In all experiments, 7000 snapshots are used to train the SVD modes and different sensor/basis mode sets, and these different sensor sets are used to reconstruct a set of 3000 validation snapshots that were not used for training features. The computed results show that the reconstruction error decreases rapidly at first, indicating that the proposed method can provide accurate reconstruction with a limited number of sensors/basis modes. Four building surfaces exhibit a similar trend in terms of the number of basis modes, with less than 20 basis modes providing a faithful reconstruction. The windward data, in particular, promotes efficient characterization. When 17 basis modes are used, the proposed algorithm performs 30% better in the windward scenario than in the other three surfaces. Such findings are consistent with the inherent aerodynamics of wind passing around a bluff prism, where more complex wind-structure interactions (e.g., separation and reattachment phenomenon) are frequently detected on two sides and leeward facets. Another point to note is that the relationship between reconstruction performance and the number of sensors is less defined than the number of basis modes. The noise could be causing the fluctuations. Measurements of real-world data are frequently contaminated by sensor noise. The SVD-based sensor selection is theoretically optimal for estimation with measurements corrupted by zero-mean Gaussian white noise. Developing an accurate noise model for these non-stationary and non-Gaussian signals remains a challenge. As a result, the elbow of the reconstruction error curve down and to the left exhibits some fluctuations, particularly when the number of sensors is small (See <ref>). As more sensors are added, the model can better capture intrinsic noise and stabilize performance. When more than 60% of the sensors are used, in our case 80 sensors, the fluctuation becomes very small. §.§ Reconstruction performance: temporal perspective The proposed algorithm's reconstruction performance using a different number of sensors is evaluated using wind pressure monitoring data from the testing dataset. Note the experimental data from wind tunnel tests is pre-processed as the non-dimensional wind pressure coefficient. 
Usually, wind pressure is defined by the pressure coefficient C_p=P_x-P_0/ρ U_h^2 / 2, where P_x is the static pressure at a given point on the building facade, P_0 is the static reference pressure at freestream, ρ U_h^2 / 2 is the dynamic pressure at freestream, ρ is the air density, and U_h is the wind speed, which is frequently measured at the building height h in the undisturbed flow upstream. <ref>-<ref> show the reconstructed wind pressure data in comparison to the measured wind pressure data. Because each surface has 125 pressure taps, we chose two at random for demonstration. Clearly, the reconstructed data is highly consistent with the measured data, despite the fact that only a few sensors were used (30, 50, and 70 in the presented experiments). The reconstruction performance of four building surfaces is comparable overall, particularly when 50 percent of sensors, i.e., 60 sensors, are available. The algorithm captures the extremes better in the case of 60 sensors. And as the number of sensors increases (up to 125), the performance in matching extreme excursions in data improves. In summary, the proposed algorithm can effectively reconstruct wind pressure data using a small number of sensors, opening up a new avenue for addressing difficult problems such as optimal sensor placement, filling in missed data from faulty sensors, and so on. §.§ Optimal sensor locations In this section, we examine the reconstruction performance from sensor spatial distribution, that is, the optimal sensor locations. We run experiments with an increasing number of sensors, 25 (20%), 50 (40%), 75 (60%), and 100 (80%), for various building surfaces. In the last column of <ref>-<ref>, we extracted the overlapped positions, which are locations that have been identified in all experiment trials. The mean pressure distribution contour of a square building model is shown as the background in these figures. A few observations about the results are made here. First, the optimal windward sensor locations are relatively symmetrical, which agrees with the mean pressure distribution of buildings with square sections. Large-scale coherent structures, such as spanwise vortex, tip vortex, and horseshoe vortex, naturally exist in the flow field around high-rise buildings. Experimental evidence shows the interaction of these vortex structures controls the fluctuating feature of pressure <cit.>. As a result, wind pressure sensors should be placed near two side edges (See <ref>). Furthermore, the windward contour shows that the pressure coefficient reaches its maximum value at approximately 4/5 of the height. And, when more than 50% of sensors are available, the proposed data-driven algorithm automatically distributes some sensors around the maximum value region. In parallel, the lowest value of wind pressure is observed near the ground at the windward edge. Again, the optimal sensors identified include one in the center of the ground line and a few near the bottom corners (See the middle two sensors in the <ref> (e).). These two findings suggest that the optimal sensor locations identified are physically interpretable in terms of first reconstructing the large-scale pressure patterns and then restoring the minimum and maximum value regions. Second, wake fluctuation and vortex shedding are known to influence the aerodynamics of the two side facets. The separated shear layers produced by the leading edges of the windward surface usually roll up to form vortices. 
The vortex shedding then effects pressure on the building model's side surfaces. Interestingly, the identified sensor locations are not symmetric <cit.>. As a result of vortex shedding not appearing symmetrically due to the strong suction effect, an asymmetrical distribution on the two side surfaces emerges. The background mean pressure contours confirm this. The right side had more turbulent interactions, which resulted in more local patterns. For example, more small contours are detected in the middle of the prism. As a result, a greater number of sensors have been distributed to the corresponding region to ensure reconstruction quality (See the last column of <ref> and <ref>). The ranking of the wind pressure taps is shown in <ref>. The numbers represent the accumulated count of positions identified during an iteration from one sensor to the full rank. Specifically, we run a series of optimal sensor placement experiments. The input is all 125 sensors for each building surface, and the output is the number of sensors we want to keep, which can range from 1 to 125, resulting in a total of 125 experiments. In theory, this means that the lower bound for the importance value is 1 (the experiment in which all of the sensors are kept) and the upper bound is 125 (a sensor identified in each of the 125 experiments). As a result, the higher the value, the more critical the sensor location is for analyzing and reconstructing measured wind pressure data. Overall, the ranking demonstrates that the most important sensors are located in positions that contain information about larger pressure patterns, such as mean, variance, skewness, and kurtosis. Striped and micro-scale patterns identified by the dynamic mode decomposition method, for example, are labeled with a lower rank <cit.>. This finding is consistent with physical nature because these patterns are local and have higher frequency contents, contributing less to data reconstruction at the macro-scale. §.§ Varying the wind angles When wind flows around buildings, characteristics of wind separation, vortex shedding, and wake effect are distinct under different wind directions, and wind pressure coefficients follow accordingly. As a result, when reconstructing wind pressure data, it is critical to consider the effects of wind direction. We investigate the effect of wind angles ranging from 0^∘ to 50^∘ degrees every 5^∘. However, before conducting sensor analysis, we must ensure that our previous assumptions about the number of basis and modes, as well as the type of basis, are applicable to new wind angles. We used proper orthogonal decomposition (POD) to better align the introduced algorithm with current baseline methods for wind data analysis and reconstruction. The SVD and POD are related in that the SVD is an extension of eigendecomposition to rectangular matrices, and POD can be viewed as a decomposition formalism for which the SVD is one approach to determining its solution. Specifically, λ_POD = λ_SVD^2 connects the singular values and the POD eigenvalues <cit.>. <ref> shows the ranked POD modes based on their energy content. The POD eigenvalue distribution depicts the wind energy captured by the accumulated POD modes as a percentage of the reconstructed total turbulent kinetic energy. The energy of POD modes decreases dramatically with mode number, as shown by the line plots. The first POD mode represents the mean distribution of wind pressures. 
With a few more modes, the accumulated decomposed mode can contain relatively large fluctuating energy. The overall trend is very similar to the findings in <ref>, implying that the SVD basis applies to new wind angles. Furthermore, we observe that as the wind angle increases, the SVD value decreases more rapidly. This finding is consistent with existing experimental findings and means that a smaller number of basis is supported for sensor placement, making the proposed algorithm even more robust and efficient <cit.>. <ref>-<ref> show optimal sensor placements for wind directions of 0^∘, 15^∘, 25^∘, 35^∘, and 45^∘. Clearly, contour lines on the windward surface's corner gradually become dense, indicating that wind pressures change dramatically, primarily due to fluid separation. As a result, many sensors are expected to be placed near corners and edges. Furthermore, in the 45^∘ case, contour lines gradually decrease from left to right. Because mean wind pressures gradually decrease, more sensors in the middle are required to provide a reliable reconstruction (See <ref> (e)). Meanwhile, the vortex appears when wind flows around the prism's side surface. The pressure pattern under a vortex generated by the windward corner and advected along the model's lateral face on the two side surfaces demonstrates the shift in the dense contour lines. Those dense contour lines indicate a strong fluid separation in the underlying area <cit.>. It is worth noting that, despite the complex and dynamic nature of pressure field, the proposed sensor placement method is capable of capturing these intense regions of fluctuation and locating sensors to these positions. §.§ Computational effort In our experiments, pressure taps were evenly distributed on the surfaces of buildings, with 0.35 m row spacing and 5 columns on each face, resulting in 125 sensors for each building surface. A more dense set of pressure taps may be used in other wind tunnel experiments to measure both pressure dynamics and structural responses. We investigated the scalability of the proposed optimal sensor placement algorithm to determine its applicability to larger problems with orders of magnitude more pressure taps. We devised two sets of experiments in particular. First, we set the number of sensors to 125 and paired them with a range of basis numbers ranging from 1 to 124. Second, we set the number of basis to 124 and experimented with various sensor numbers. We used models trained with the aforementioned hyper-parameters to make inferences for the test data and recorded the computational cost in terms of wall-clock time. The scaling results for the various sensors/basis are shown in <ref>. We fitted a linear model to these calculated computational costs to better present the scaling trend (See the line plots in the <ref>). This is accomplished through the use of ordinary least squares optimization. Results show that the model can achieve linear scaling on up to 125 sensors/basis. It should be noted that the SVD basis was used in these experiments. Even with full-rank sensors and basis, the total effort for training a model is less than a second, indicating that the proposed data-driven algorithm is both cost-effective and practical. For a generally larger problem such as three-dimensional turbulence, the identity basis may produce the lowest reconstruction error at a given number of sensors. This is because no empirical information is lost when constructing a low-rank approximation of the data. 
Still, by the same token, the identity basis can result in impractically long run times for a large data set <cit.>. In these cases, additional speedup can be obtained using a data-parallel implementation of the training algorithm, which distributes the batches across the cluster nodes and replicates the model <cit.>. § CONCLUDING REMARKS This study investigated data-driven sparse sensing approaches and adapted them to identify optimal sensor locations for reconstructing pressure field around a tall building. In order to determine the optimal sensor locations, the wind tunnel data was divided into three groups, i.e., training data, validation data, and testing data. The algorithm tailors a basis to the training data, which is then used in a QR decomposition to rank wind pressure taps by importance based on reconstruction performance. This method allows for the removal and relocation of monitoring sensors with low information content as needed, as well as the provision of higher-ranking monitoring locations with higher-quality sensors or measuring at a higher frequency. This rank can also be used as a decision-making tool when looking for additional monitoring locations. A comparison of the reconstructed wind pressure data and the actual monitored wind pressure data reveals that the monitored wind pressure data can be satisfactorily reconstructed even with a limited number of sensors, indicating that the proposed method can serve as an efficient tool for reconstruction of random pressure fields around building. The proposed scheme is applicable to pressures in the vicinity of other geometries as well as low and mid-rise buildings. Beyond that, we see the proposed algorithm as a reliable component in the design chain of the next generation of intelligent structures, where engineering elements such as sensing, actuating, and signal processing will be integrated into the overall system. Specifically, the rising scale and resolution of modern numerical simulations has resulted in a plethora of fluid-structure data. Although we may attain very high-level fidelity in the simulation, we are often limited in applications to a few noisy sensors. Therefore, we envision the use of large-scale CFD simulation data in conjunction with sensing techniques to develop a hybrid design and health monitoring framework for wind tunnel experiments. Utilizing deep learning, the optimal sensor location approach can be used to reconstruct the pressure field in real-time based on measurements from sparsely acquired data, offering an additional application. § ACKNOWLEDGMENTS This work is supported by the U.S. Department of Energy (DOE), Office of Science, Advanced Scientific Computing Research under Award Number DE-SC-0012704. The authors also thank the anonymous reviewers for their comments and suggestions, which helped to improve the manuscript's quality and clarity. elsarticle-num
http://arxiv.org/abs/2306.03548v1
20230606095038
Learning Dynamical Systems from Noisy Data with Inverse-Explicit Integrators
[ "Håkon Noren", "Sølve Eidnes", "Elena Celledoni" ]
cs.LG
[ "cs.LG", "cs.NA", "math.NA" ]
A hard-sphere quasicrystal stabilized by configurational entropy Etienne Fayen^1, Laura Filion^2, Giuseppe Foffi^1, and Frank Smallenburg^1 July 31, 2023 ============================================================================== We introduce the mean inverse integrator (MII), a novel approach to increase the accuracy when training neural networks to approximate vector fields of dynamical systems from noisy data. This method can be used to average multiple trajectories obtained by numerical integrators such as Runge–Kutta methods. We show that the class of mono-implicit Runge–Kutta methods (MIRK) has particular advantages when used in connection with MII. When training vector field approximations, explicit expressions for the loss functions are obtained when inserting the training data in the MIRK formulae, unlocking symmetric and high-order integrators that would otherwise be implicit for initial value problems. The combined approach of applying MIRK within MII yields a significantly lower error compared to the plain use of the numerical integrator without averaging the trajectories. This is demonstrated with experiments using data from several (chaotic) Hamiltonian systems. Additionally, we perform a sensitivity analysis of the loss functions under normally distributed perturbations, supporting the favorable performance of MII. § INTRODUCTION Recently, many deep learning methodologies have been introduced to increase the efficiency and quality of scientific computations <cit.>. In physics-informed machine learning, deep neural networks are purposely built to enforce physical laws. As an example, Hamiltonian neural networks (HNNs) <cit.> aim at learning the Hamiltonian function from temporal observations. The Hamiltonian formalism was derived from classical mechanics for modeling a wide variety of physical systems. The temporal evolution of such systems is fully determined when the Hamiltonian function is known, and it is characterized by geometric properties such as the preservation of energy, the symplectic structure and the time-reversal symmetry of the flow <cit.>. Numerical integrators that compute solutions preserving such properties are studied in the field of geometric numerical integration <cit.>. Thus, deep learning, classical mechanics and geometric numerical integration are all relevant to the development of HNNs. In this work, we try to identify the optimal strategy for using numerical integrators when constructing loss functions for HNNs that are trained on noisy and sparse data. Generally, we aim at learning autonomous systems of first-order ordinary differential equations (ODE) d/dt y = f(y(t)), y : [0,T] →^n. In the traditional setting, solving an initial value problem (IVP) means computing approximated solutions y_n ≈ y(t_n) when the vector field f(y) and an initial value y(t_0) = y_0 are known. The focus of our study is the corresponding inverse problem; assuming knowledge of multiple noisy samples of the solution, S_N = {ỹ_n}_n=0^N, the aim is to approximate the vector field f with a neural network model f_θ. We will assume that the observations originate from a (canonical) Hamiltonian system, with a Hamiltonian H : ^2d→, where the vector field is given by f(y) = J∇ H(y(t)), J := [ 0 I; -I 0 ]∈^2d × 2d. This allows for learning the Hamiltonian function directly by setting f_θ(y) = J∇ H_θ(y), as proposed initially in <cit.>. Recently, many works highlight the benefit of using symplectic integrators when learning Hamiltonian neural networks <cit.>. 
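Throughout, the learned vector field is parametrized as f_θ(y) = J∇H_θ(y), as in (<ref>). For concreteness, a minimal PyTorch sketch of this parametrization is given below; it is our own illustration, and the layer sizes are placeholders rather than the architecture used in the experiments.

```python
import torch
import torch.nn as nn

class HNNVectorField(nn.Module):
    """f_theta(y) = J grad H_theta(y) for a canonical Hamiltonian system, y = (q, p) in R^{2d}."""

    def __init__(self, dim: int, width: int = 200):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(2 * dim, width), nn.Tanh(),
                               nn.Linear(width, width), nn.Tanh(),
                               nn.Linear(width, 1))
        J = torch.cat([torch.cat([torch.zeros(dim, dim), torch.eye(dim)], dim=1),
                       torch.cat([-torch.eye(dim), torch.zeros(dim, dim)], dim=1)], dim=0)
        self.register_buffer("J", J)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # Inputs are data points, so detaching before differentiating H is safe here.
        with torch.enable_grad():
            y = y.detach().requires_grad_(True)
            H = self.H(y).sum()
            grad_H, = torch.autograd.grad(H, y, create_graph=True)
        return grad_H @ self.J.T   # row-wise J grad H
```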
Here, we study what happens if, instead of using symplectic methods, efficient and higher-order MIRK methods are applied for inverse problems. We develop different approaches and apply them to learn highly oscillatory and chaotic dynamical systems from noisy data. The methods are general, they are not limited to separable Hamiltonian systems, and could indeed be used to learn any first-order ODE. However, we focus our study on Hamiltonian systems, in order to build on the latest research on HNNs. Specifically, we compare our methods to the use of symplectic integrators to train Hamiltonian neural networks. Our contributions can be summarized as follows: * We introduce the mean inverse integrator (MII), which efficiently averages trajectories of MIRK methods in order to increase accuracy when learning vector fields from noisy data (Definition <ref>). * We present an analysis of the sensitivity of the loss function to perturbations giving insight into when the MII method yields improvement over a standard one-step scheme (Theorem <ref>). * We show that symplectic MIRK methods have at most order p = 2 (Theorem <ref>). Particularly, the second-order implicit midpoint method is the symplectic MIRK method with minimal number of stages. Finally, numerical experiments on several Hamiltonian systems benchmark MII against one-step training and symplectic recurrent neural networks (SRNN) <cit.>, which rely on the Störmer–Verlet integrator. The structural difference between these three approached is presented in Figure <ref>. Additionally, we demonstrate that substituting Störmer–Verlet with the classic Runge–Kutta method (RK4) in the SRNN framework yields a significant reduction in error and allows accurate learning of non-separable Hamiltonian systems. § RELATED WORK Hamiltonian neural networks was introduced in <cit.>. The numerical integration of Hamiltonian ODEs and the preservation of the symplectic structure of the ODE flow under numerical discretization have been widely studied over several decades <cit.>. The symplecticity property is key and could inform the neural network architecture <cit.> or guide the choice of numerical integrator, yielding a theoretical guarantee that the learning target is actually a (modified) Hamiltonian vector field <cit.>, building on the backward error analysis framework <cit.>. Discrete gradients is an approach to numerical integration that guarantees exact preservation of the (learned) Hamiltonian, and an algorithm for training Hamiltonian neural networks using discrete gradient integrators is developed in <cit.> and extended to higher order in <cit.>. Since we for the inverse problem want to approximate the time-derivative of the solution, f, using only ỹ_n, we need to use a numerical integrator when specifying the neural network loss function. For learning dynamical systems from data, explicit methods such as RK4 are much used <cit.>. However, explicit methods cannot in general preserve time-symmetry or symplecticity, and they often have worse stability properties compared to implicit methods <cit.>. Assuming that the underlying Hamiltonian is separable allows for explicit integration with the symplectic Störmer–Verlet method, which is exploited in <cit.>. Symplecticity could be achieved without the limiting assumption of separability by training using the implicit midpoint method <cit.>. As pointed out in <cit.>, this integrator could be turned into an explicit method in training by inserting sequential training data ỹ_n and ỹ_n+1. 
In fact, the MIRK class <cit.> contains all Runge–Kutta (RK) methods (including the midpoint method) that could be turned into explicit schemes when inserting the training data. This is exploited in <cit.>, where high-order MIRK methods are used to train HNNs, achieving accurate interpolation and extrapolation of a single trajectory with large step size, few samples and assuming zero noise. The assumption of noise-free data limits the potential of learning from physical measurements or applications on data sets from industry. This issue is addressed in <cit.>, presenting symplectic recurrent neural networks (SRNN). Here, Störmer–Verlet is used to integrate multiple steps and is combined with initial state optimization (ISO) before computing the loss. ISO is applied after training f_θ a given number of epochs and aims at finding the optimal initial value ŷ_0, such that the distance to the subsequent observed points ỹ_1,…,ỹ_N is minimized when integrating over f_θ. While <cit.> is limited by only considering separable systems, <cit.> aims at identifying the optimal combination of third-order polynomial basis functions to approximate a cubic non-separable Hamiltonian from noisy data, using a Bayesian framework. § BACKGROUND ON NUMERICAL INTEGRATION Some necessary and fundamental concepts on numerical integration and the geometry of Hamiltonian systems are presented below to inform the discussion on which integrators to use in inverse problems. Further details could be found in Appendix <ref>. Fundamental concepts: An important subclass of the general first-order ODEs (<ref>) is the class of Hamiltonian systems, as given by (<ref>). Often, the solution is partitioned into the coordinates y(t) = [q(t),p(t)]^T, with q(t),p(t) ∈^d. A separable Hamiltonian system is one where the Hamiltonian could be written as the sum of two scalar functions, often representing the kinetic and potential energy, that depends only on q and p respectively, this means we have H(q,p) = H_1(q) + H_2(p). The h flow of an ODE is a map φ_h,f:ℝ^n→ℝ^n sending an initial value y(t_0) to the solution of the ODE at time t_0 + h, given by φ_h,f(y(t_0)) := y(t_0 + h). A numerical integration method Φ_h,f:ℝ^n→ℝ^n is a map approximating the exact flow of the ODE, so that y(t_1)≈ y_1 = Φ_h,f(y_0). Here, y(t_n) represents the exact solution and we denote with y_n the approximation at time t_n = t_0 + n h. It should be noted that the flow map satisfies the following group property: φ_h_1,f∘φ_h_2,f (y(t_0 )) = φ_h_1,f (y(t_0 + h_2 )) = φ_h_1 + h_2,f(y(t_0)). In other words, a composition of two flows with step sizes h_1,h_2 is equivalent to the flow map over f with step size h_1 + h_2. This property is not shared by numerical integrators for general vector fields. The order of a numerical integrator Φ_h,f characterizes how the error after one step depends on the step size h and is given by the integer p such that the following holds: y_1 - y(t_0 + h) = Φ_h,f(y_0) - φ_h,f(y(t_0)) = 𝒪 (h^p+1). Mono-implicit Runge–Kutta methods: Given vectors b,v∈^s and a strictly lower triangular matrix D ∈^s× s, a MIRK method is a Runge–Kutta method where A = D + vb^T <cit.> and we assume that [A]_ij = a_ij is the stage-coefficient matrix. This implies that the MIRK method can be written on the form y_n+1 = y_n + h∑_i=1^s b_i k_i, k_i = f (y_n + v_i(y_n+1 - y_n) + h∑_j=1^s d_ij k_j ). Specific MIRK methods and further details on Runge–Kutta schemes is discussed in Appendix <ref>. 
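Since D is strictly lower triangular, a MIRK step can be evaluated stage by stage once both endpoints of a data pair are supplied, which is exactly the situation in the inverse problems considered below. The following sketch (ours) implements a generic MIRK step for a tableau (b, v, D), with the implicit midpoint rule as the simplest one-stage example.

```python
import numpy as np

def mirk_step(f, y_n, y_np1, h, b, v, D):
    """One MIRK step y_n + h * sum_i b_i k_i, with the stages of Eq. (<ref>)
    evaluated explicitly because both y_n and y_{n+1} are supplied."""
    s = len(b)
    k = []
    for i in range(s):
        z = y_n + v[i] * (y_np1 - y_n) + h * sum(D[i][j] * k[j] for j in range(i))
        k.append(f(z))
    return y_n + h * sum(b[i] * k[i] for i in range(s))

# Implicit midpoint rule as a one-stage MIRK method: b = [1], v = [1/2], D = [[0]].
midpoint = lambda f, yn, ynp1, h: mirk_step(f, yn, ynp1, h, b=[1.0], v=[0.5], D=[[0.0]])
```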
Symplectic methods: The flow map of a Hamiltonian system is symplectic, meaning that its Jacobian Υ_φ := ∂/∂y φ_h,f(y) satisfies Υ_φ^T J Υ_φ = J, where J is the same matrix as in (<ref>). As explained in <cit.>, this is equivalent to the preservation of a projected area in the phase space of [q,p]^T. Similarly, a numerical integrator is symplectic if its Jacobian Υ_Φ := ∂/∂y_n Φ_h,f(y_n) satisfies Υ_Φ^T J Υ_Φ = J. It is possible to prove <cit.> that a Runge–Kutta method is symplectic if and only if its coefficients satisfy b_i a_ij + b_j a_ji - b_i b_j = 0, i,j = 1,…,s. § NUMERICAL INTEGRATION SCHEMES FOR SOLVING INVERSE PROBLEMS We will now consider different ways to use numerical integrators when training Hamiltonian neural networks and present important properties of MIRK methods, a key component of the MII presented in Chapter <ref>. Inverse ODE problems in Hamiltonian form: We assume that we have potentially noisy samples S_N = {ỹ_n}_n=0^N of the solution of an ODE with vector field f. The inverse problem can then be formulated as the following optimization problem: arg min_θ ∑_n=0^N-1 ‖ỹ_n+1 - Φ_h,f_θ(ỹ_n)‖, where f_θ = J∇H_θ is a neural network approximation with parameters θ of a Hamiltonian vector field f, and Φ_h,f_θ is a one-step integration method with step length h. [Figure <ref>: Venn diagram of Runge–Kutta (RK) subclasses: explicit RK (ERK), symplectic RK (SympRK), mono-implicit RK (MIRK) and symmetric RK (SymRK). Methods indicated in the diagram include the explicit Euler method and RK4 (ERK); the implicit Euler method, MIRK3 and MIRK5 (MIRK); MIRK4 and MIRK6; GL4 and GL6; and the midpoint method.] In the setting of inverse ODE problems, the availability of sequential points in S_N can be exploited when a numerical method is used to form interpolation conditions for f_θ ≈ f at each n in the optimization problem (<ref>). For example, ỹ_n and ỹ_n+1 could be inserted in the implicit midpoint method, turning a method that is implicit for IVPs into an explicit method for inverse problems: Φ_h,f_θ(ỹ_n, ỹ_n+1) = ỹ_n + h f_θ( (ỹ_n + ỹ_n+1)/2 ). We denote this as the inverse injection, which defines an inverse explicit property for numerical integrators. Assume that ỹ_n, ỹ_n+1 ∈ S_N. Let the inverse injection for the integrator Φ_h,f(y_n, y_n+1) be given by the substitution (ỹ_n, ỹ_n+1) → (y_n, y_n+1), such that ŷ_n+1 = Φ_h,f(ỹ_n, ỹ_n+1). A numerical one-step method Φ is called inverse explicit if it is explicit under the inverse injection. This procedure has been utilized successfully by several authors when learning dynamical systems from data, see e.g. <cit.>.
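In code, the one-step training objective obtained from the inverse injection of the midpoint method is a mean-squared residual over consecutive data pairs. A minimal PyTorch sketch (ours, omitting batching over trajectories and the optimizer loop):

```python
import torch

def one_step_midpoint_loss(f_theta, y_tilde: torch.Tensor, h: float) -> torch.Tensor:
    """Residual || y_{n+1} - y_n - h * f_theta((y_n + y_{n+1})/2) ||^2 averaged over n,
    i.e. the inverse-explicit midpoint version of the one-step objective.
    y_tilde has shape (N+1, dim)."""
    y_n, y_np1 = y_tilde[:-1], y_tilde[1:]
    residual = y_np1 - y_n - h * f_theta(0.5 * (y_n + y_np1))
    return residual.pow(2).sum(dim=-1).mean()
```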
However, this work is the first attempt at systematically exploring numerical integrators under the inverse injection, by identifying the MIRK methods as the class consisting of inverse explicit Runge–Kutta methods. MIRK-methods are inverse explicit. Since the matrix D in (<ref>) is strictly lower triangular, the stages are given by k_1 = f (y_n + v_i(y_n+1 - y_n) ) k_2 = f (y_n + v_i(y_n+1 - y_n) + h d_21 k_1 ) ⋮ k_s = f (y_n + v_i(y_n+1 - y_n) + h∑_j=1^s-1 d_sj k_j ) meaning that if y_n and y_n+1 are known, all stages, and thus the next step ŷ_n+1 = y_n + h∑_i=1^s b_i k_i, could be computed explicitly. Because of their explicit nature when applied to inverse ODE problems, MIRK methods are an attractive alternative to explicit Runge–Kutta methods; in contrast to explicit RK methods, they can be symplectic or symmetric, or both, without requiring the solution of systems of nonlinear equations, even when the Hamiltonian is non-separable. Figure <ref> illustrates the relation between various subclasses and the specific methods are described in Table <ref> in Appendix <ref>. In addition, for s-stage MIRK methods, it is possible to construct methods of order p = s+1 <cit.>. This is in general higher order than what is possible to obtain with s-stage explicit Runge–Kutta methods. Further, computational gains could also be made by reusing evaluations of the vector field between multiple steps, which using MIRK methods allow for, as explained in Appendix <ref>. The dependency structure on the data S_N of explicit RK (ERK) methods, MIRK methods and the SRNN method <cit.> is illustrated in Figure <ref>. Maximal order of symplectic MIRK methods: From the preceding discussion, it is clear that symplectic MIRK methods are of interest when learning Hamiltonian systems from data, since they combine computational efficiency with the ability to preserve useful, geometric properties. Indeed, symplectic integrators in the training of HNNs have been considered in <cit.>. The subclass of symplectic MIRK methods is represented by the middle, dark blue field in the Venn diagram of Figure <ref>. The next result gives an order barrier for symplectic MIRK methods that was, to the best of our knowledge, not known up to this point. The maximum order of a symplectic MIRK method is p=2. This is a shortened version of the full proof, which can be found in Appendix <ref>. A MIRK method is a Runge–Kutta method with coefficients a_ij = d_ij + v_ib_j. Requiring d_ij, b_i and v_i to satisfy the symplecticity conditions of (<ref>) in addition to D being strictly lower triangular, yields the following restrictions b_i d_ij + b_i b_j (v_j + v_i - 1) = 0, if i ≠ j, b_i = 0 or v_i = 1/2, if i = j, d_ij = 0, if i>j. These restrictions result in an RK method that could be reduced to choosing a coefficient vector b∈^s and choosing stages on the form k_i = f(y_n + h/2∑_j^s b_jk_j) for i = 1,…,s. It is then trivial to check that this method can only be of up to order p=2. Note that for s=1 and b_1 = 1 we get the midpoint method. Numerical integrators outside the RK class: While this paper is mainly concerned with MIRK methods, several other types of numerical integrators could be of interest for inverse problems. Partitioned Runge–Kutta methods are an extension and not a subclass of RK methods, and can be symplectic and symmetric, while also being explicit for separable Hamiltonian systems. The Störmer–Verlet integrator of order p=2 is one example. 
Higher order methods of this type are derived in <cit.> and used for learning Hamiltonian systems in <cit.>. Discrete gradient methods <cit.> are inverse explicit and well suited to train Hamiltonian neural networks using a modified automatic differentiation algorithm <cit.>. This method could be extended to higher order methods as shown in <cit.>. In contrast to symplectic methods, discrete gradient methods preserve the Hamiltonian exactly up to machine precision. A third option is elementary differential Runge–Kutta methods <cit.>, where for instance <cit.> show how to use backward error analysis to construct higher order methods from modifications to the midpoint method. This topic is discussed further in Appendix <ref>, where we also present a novel, symmetric discrete gradient method of order p=4. § MEAN INVERSE INTEGRATOR FOR HANDLING NOISY DATA Noisy ODE sample: It is often the case that the samples S_N are not exact measurements of the system, but are perturbed by noise. In this paper, we model the noise as independent, normally distributed perturbations ỹ_n = y(t_n) + δ_n, δ_n∼𝒩(0,σ^2I), where 𝒩(0,σ^2I) represents the multivariate normal distribution. With this assumption, a standard result from statistics tells us that the variance of a sample-mean estimator with N samples converges to zero at the rate of 1/N. That is, assuming that we have N samples ỹ_n^(1),…,ỹ_n^(N), then [y_n] = [1/N∑_j=1^N ỹ_n^(j)] = σ^2/N. Using the inverse injection with the midpoint method, the vector field is evaluated in the average of ỹ_n and ỹ_n+1, reducing the variance of the perturbation by a factor of two, compared to evaluating the vector field in ỹ_n, as is done in all explicit RK methods. Furthermore, considering the whole data trajectory S_N, multiple independent approximations to the same point y(t_n) can enable an even more accurate estimate. This is demonstrated in the analysis presented in Theorem <ref> and in Figure <ref>. Averaging multiple trajectories: In the inverse ODE problem, we assume that there exists an exact vector field f whose flow interpolates the discrete trajectories S_N, and the flow of this vector field satisfies the group property (<ref>). The numerical flow Φ_h,f for a method of order p satisfies this property only up to an error 𝒪(h^p+1) over one step. In the presence of noisy data, compositions of one-step methods can be used to obtain multiple different approximations to the same point y(t_n), by following the numerical flow from different nearby initial values ỹ_j, j≠ n, and thus reduce the noise by averaging over these multiple approximations. Accumulation of the local truncation error is expected when relying on points further away from t_n. However, for sufficiently small step sizes h compared to the size of the noise σ, one can expect increased accuracy when averaging over multiple noisy samples. As an example, assume that we know the points {ỹ_0,ỹ_1,ỹ_2,ỹ_3}. Then y(t_2) can be approximated by computing the mean of the numerical flows Φ_h,f starting from different initial values: y_2 = 1/3 ( Φ_h,f(ỹ_1) + Φ_h,f∘Φ_h,f(ỹ_0) + Φ^*_-h,f (ỹ_3) ) ≈1/3 (ỹ_0 +ỹ_1 +ỹ_3 + h( Ψ_0,1 + 2 Ψ_1,2 - Ψ_2,3 ) ), where we by Φ^* mean the adjoint method of Φ, as defined in <cit.>, and we let Ψ_n,n+1 be the increment of an inverse-explicit numerical integrator, so that Φ_h,f(ỹ_n,ỹ_n+1) = ỹ_n + h Ψ_n,n+1. For example, for the midpoint method, we have that Ψ_n,n+1 = f(ỹ_n + ỹ_n+1/2). 
When stepping in negative time in (<ref>), we use the adjoint method in order to minimize the number of vector field evaluations, also when non-symmetric methods are used (which implies that we always use e.g. Ψ_1,2 and not Ψ_2,1). Note that in order to derive the approximation in (<ref>), repeated use of the inverse injection allows the known points ỹ_n to form an explicit integration procedure, where the composition of integration steps is approximated by summation over the increments Ψ_n,n+1. This approximation procedure is presented in greater detail in Appendix <ref>. Mean inverse integrator: The mean approximations over the whole trajectory, y_n for n = 0,…,N, can be computed simultaneously, reusing multiple vector field evaluations in an efficient manner. This leads to what we call the mean inverse integrator. For example, when N=3 we get [ y_0; y_1; y_2; y_3 ] = 1/3 [ 0 1 1 1; 1 0 1 1; 1 1 0 1; 1 1 1 0 ] [ ỹ_0; ỹ_1; ỹ_2; ỹ_3 ] + h/3 [ -3 -2 -1; 1 -2 -1; 1 2 -1; 1 2 3 ] [ Ψ_0,1; Ψ_1,2; Ψ_2,3 ], and the same structure is illustrated in Figure <ref>. For a sample S_N and an inverse-explicit integrator with increments Ψ_n,n+1, the mean inverse integrator is given by Y = (1/N)( U Ỹ + h W Ψ ), where Ỹ := [ỹ_0, …, ỹ_N]^T ∈ ℝ^(N+1) × m and Ψ := [Ψ_0,1, …, Ψ_N-1,N]^T ∈ ℝ^N × m. Finally, U ∈ ℝ^(N+1) × (N+1) and W ∈ ℝ^(N+1) × N are given by [U]_ij := 0 if i = j and 1 otherwise, and [W]_ij := j - 1 - N if j ≥ i and j otherwise. By substituting the known vector field f with a neural network f_θ and denoting the resulting matrix of vector field evaluations by Ψ_θ, such that Y_θ := (1/N)( U Ỹ + h W Ψ_θ ), we can formulate an analogue of the inverse problem (<ref>) as arg min_θ ‖Ỹ - Y_θ‖.
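Definition <ref> translates directly into a few lines of NumPy. The sketch below (ours) constructs U and W and evaluates Y for precomputed increments Ψ_n,n+1:

```python
import numpy as np

def mean_inverse_integrator(y_tilde: np.ndarray, psi: np.ndarray, h: float) -> np.ndarray:
    """Y = (1/N) (U Ytilde + h W Psi) as in Definition <ref>.
    y_tilde: (N+1, m) noisy samples; psi: (N, m) increments Psi_{n,n+1}."""
    N = y_tilde.shape[0] - 1
    U = np.ones((N + 1, N + 1)) - np.eye(N + 1)       # zeros on the diagonal, ones elsewhere
    j = np.arange(1, N + 1)                           # column index j = 1..N
    i = np.arange(1, N + 2)[:, None]                  # row index i = 1..N+1
    W = np.where(j >= i, j - 1 - N, j).astype(float)  # [W]_ij = j-1-N if j >= i, else j
    return (U @ y_tilde + h * W @ psi) / N
```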
(Figure: Illustration of the structure of the mean inverse integrator for N=3.) Analysis of sensitivity to noise: Consider the optimization problems using integrators either as one-step methods or as MII, by (<ref>) resp. (<ref>). We want to investigate how uncertainty in the data ỹ_n introduces uncertainty in the optimization problem. Assume, for the purpose of analysis, that the underlying vector field f(y) is known. Let 𝒯^OS_n := ỹ_n - Φ_h,f(ỹ_n-1, ỹ_n), 𝒯^MII_n := ỹ_n - [Y]_n be the optimization targets, i.e. the expressions one aims to minimize using a one-step method (OS) and the MII, where Y is given by Definition <ref>. For a matrix A with eigenvalues λ_i(A), the spectral radius is given by ρ(A) := max_i |λ_i(A)|. An analytic expression that approximates ρ(Var[𝒯^OS_n]) and ρ(Var[𝒯^MII_n]) by linearization of f for a general MIRK method is provided below. Let S_N = {ỹ_n }_n=0^N be a set of noisy samples, equidistant in time with step size h, with Gaussian perturbations as defined by (<ref>) with variance σ^2. Assume that a MIRK integrator Φ_h,f is used as a one-step method. Then the spectral radius is approximated by ρ_n^OS := ρ( Var[𝒯^OS_n] ) ≈σ^2 ‖ 2I + hb^T(𝟙 - 2v)(f' + f'^T) + h^2 Q^OS ‖_2, ρ_n^MII := ρ( Var[𝒯^MII_n] ) ≈σ^2/N ‖ (1 + N)I + hP_nn + h/N∑_j = 0, j≠ n^N P_nj + h^2/N Q^MII ‖_2, where f' := f'(y_n), 𝟙 := [1,…,1]^T ∈ℝ^s, and P_nj, Q^OS and Q^MII (defined in (<ref>) in Appendix <ref>) are matrices independent of the step size h. (Figure: Average of ρ̄ over 10 trajectories. The shaded area represents one standard deviation.) The proof is found in Appendix <ref>. Let α := b^T(𝟙 - 2v) denote the coefficient of the first order term in h of Equation (<ref>). For any explicit RK method we have that v = 0, and since b^T𝟙 = 1 (method of at least order one) we find that α_ERK = 1. Considering the Butcher tableau of MIRK4 in Figure <ref> we find that α_MIRK4 = 0. Thus, as h → 0 we would expect quadratic convergence of MIRK4 and linear convergence of RK4 for ρ_n^OS to 2σ^2. Considering MII (<ref>), one would expect linear convergence of ρ_n^MII to σ^2 if N is large, as h → 0. A numerical approximation of ρ^OS_n and ρ^MII_n can be realized by a Monte-Carlo estimate. We compute the spectral radius ρ̂_n of the empirical covariance matrix of 𝒯^OS_n and 𝒯^MII_n by sampling 5 · 10^3 normally distributed perturbations δ_n with σ^2 = 2.5· 10^-3 for each point y_n in a trajectory of N+1 points and step size h. We then compute the trajectory average ρ̄ = 1/(N+1)∑_n=0^N ρ̂_n, fix the end time T = 2.4, repeat the approximations for decreasing step sizes h and increasing N, and compute the average of ρ̄ for 10 randomly sampled trajectories S_N from the double pendulum system. The plot in Figure <ref> corresponds well with what one would expect from Theorem <ref> and confirms that, firstly, MIRK (with v ≠ 0) and, secondly, MII reduce the sensitivity to noise in the optimization target. § EXPERIMENTS (Figure: Roll-out in time obtained by integrating over the learned vector fields when training on data from the double pendulum Hamiltonian.)
Methods and test problems: We train HNNs using different integrators and methods in the inverse problem (<ref>). We use MIRK4 together with the MII method and compare to the implicit midpoint method, RK4 and MIRK4 applied as one-step methods, as well as ISO followed by Störmer–Verlet and RK4 integrated over multiple time-steps. The latter strategy, illustrated in Figure <ref>, was suggested in <cit.>, where Störmer–Verlet is used. Separable networks H_θ(q,p) = H_1,θ(q) + H_2,θ(p) are trained on data from the Fermi–Pasta–Ulam–Tsingou (FPUT) problem and the Hénon–Heiles system. For the double pendulum, which is non-separable, a fully connected network is used for all methods except Störmer–Verlet, which requires separability in order to be explicit. The Hamiltonians are described in Appendix <ref> and all systems have solutions y(t) ∈ℝ^4. After using the specified integrators in training, approximated solutions are computed for each learned vector field f_θ using the SciPy implementation of DOP853 <cit.>, which is also used to generate the training data. The error is averaged over M = 10 points and we find what we call the flow error by e(f_θ) = 1/M∑_n=1^M ‖ŷ_n - y(t_n)‖_2, y(t_n) ∈ S_M^test, ŷ_n+1 = Φ_h,f_θ(y_n). Training data: Training data is generated by sampling N_2 = 300 random initial values y_0 requiring that 0.3 ≤ ‖y_0‖_2 ≤ 0.6. The data S_N_1,N_2={ỹ_n^(j)}_n=0,j=0^N_1,N_2 is found by integrating the initial values with DOP853 with a tolerance of 10^-15 for the following step sizes and number of steps: (h,N_1) = (0.4,4),(0.2,8),(0.1,16). The points in the flow are perturbed by noise where σ∈{0,0.05}. The error is measured at M=10 random points in the flow, within the same domain as the initial values. Furthermore, experiments are repeated five times with a new random seed for the generation of data and the initialization of neural network parameters, in order to compute the standard deviation of the flow error. The flow error is shown in Figure <ref>. Additional results are presented in Appendix <ref>. Neural network architecture and optimization: For all test problems, the neural networks have 3 layers with a width of 200 neurons and tanh( · ) as the activation function. The algorithms are implemented using PyTorch <cit.> and the code for performing ISO is a modification of the implementation by <cit.> (https://github.com/zhengdao-chen/SRNN, CC-BY-NC 4.0 License). Training is done using the quasi-Newton L-BFGS algorithm <cit.> for 20 epochs without batching. Further details are provided in Appendix <ref> and the code can be found at https://github.com/hakonnoren/learning_hamiltonian_noise. Results: As observed in Figure <ref> and supported by the analytical result illustrated in Figure <ref>, the MII approach facilitates more accurate training from noisy data than one-step methods. However, training with multiple integration steps in combination with ISO yields lower errors when RK4 is used for the Hénon–Heiles problem and similar performance to MII on the double pendulum. We notice that the SRNN approach, i.e. ISO with Störmer–Verlet, is improved when switching to RK4, which means sacrificing symplecticity to achieve higher order. The results for FPUT stand out in Figure <ref>, since both ISO methods have large errors here. The roll-out in time of the learned vector fields is presented in Figure <ref> in Appendix <ref>, where the same can be observed.
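Before continuing the discussion of the results, the flow-error metric defined above can be made concrete with the following sketch. It reflects one possible reading of that metric, with solve_ivp and the DOP853 method standing in for the reference integrator; the function names, tolerances, and the exact evaluation protocol are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow_error(f_theta, t, y_ref, h):
    # y_ref: array of shape (M+1, d) holding reference states y(t_0),...,y(t_M).
    # Take one accurate step of the learned flow from each reference point,
    # compare with the next reference point, and average the 2-norm errors.
    errors = []
    for n in range(len(y_ref) - 1):
        sol = solve_ivp(lambda s, y: f_theta(y), (t[n], t[n] + h), y_ref[n],
                        method="DOP853", rtol=1e-12, atol=1e-12)
        y_hat = sol.y[:, -1]
        errors.append(np.linalg.norm(y_hat - y_ref[n + 1]))
    return float(np.mean(errors))
```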
As can also be seen in these roll-outs, the FPUT Hamiltonian gives rise to highly oscillatory trajectories, and the errors observed in Figure <ref> might indicate that ISO is ill-suited for this kind of dynamical system. Two observations can be made regarding the one-step methods without averaging or ISO. First, it is likely that the midpoint method has weaker performance for large step sizes due to its lower order, compared to both RK4 and MIRK4, despite the fact that it is a symplectic method. The same is clear from Figure <ref> in Appendix <ref>, which displays the flow error when training on data without noise. Secondly, building on the sensitivity analysis, we observe that MIRK4 consistently attains higher accuracy than RK4, as expected from the Monte-Carlo simulation found in Figure <ref>. § CONCLUSION In this work we present the mean inverse integrator, which allows both chaotic and oscillatory dynamical systems to be learned with high accuracy from noisy data. Within this method, integrators of the MIRK class are a key component. To analyse how noise is propagated when training with MII and MIRK, compared to widely used explicit methods such as RK4, we developed a sensitivity analysis that is verified both by a Monte-Carlo approximation and reflected in the error of the learned vector fields. Finally, we build on the SRNN <cit.> by replacing Störmer–Verlet with RK4, and observe increased performance. When also considering the weak performance of the implicit midpoint method, this tells us that order might be of greater importance than preserving the symplectic structure when training HNNs. The MIRK methods, the mean inverse integrator and initial state optimization all form building blocks that can be combined to form novel approaches for solving inverse problems and learning from noisy data. Limitations: The experiments presented here assume that both the generalized coordinates q_n and the generalized momenta p_n can be observed. In a setting where HNNs are to model real and not simulated data, the observations might lack generalized momenta <cit.> or follow Cartesian coordinates, requiring the enforcement of constraints <cit.>. Combining approaches that are suitable for data that is both noisy and follows less trivial coordinate systems is a subject for future research. § TEST PROBLEMS Fermi–Pasta–Ulam–Tsingou: This dynamical system is a model for a chain of 2m +1 alternating stiff and soft springs connecting 2m mass points. The chain is fixed at both ends <cit.>. With the coordinate transformation suggested in <cit.> we have coordinates [q,p]^T ∈ℝ^4m, where q_i, i = 1,…,m, represents a scaled displacement of the i-th stiff spring, q_i+m, i = 1,…,m, represents a scaled expansion of the i-th spring, and the components of p represent the corresponding velocities. Letting ω be the angular frequency of the stiff spring, in general the Hamiltonian is given by H(q,p) = 1/2∑_i=1^m( p_i^2 + p_i+m^2 ) + ω^2/2∑_i=1^m q_i+m^2 + 1/4( ∑_i=1^m-1 (q_i+1 - q_i+m+1 - q_i-q_i+m )^4 + (q_1 - q_m+1)^4 + (q_m + q_2m)^4 ). We consider the simplest case m=1 with ω=2, yielding the quartic, separable Hamiltonian H(q_1,q_2,p_1,p_2) = 1/2( p_1^2 + p_2^2 ) + 2q_2^2 + 1/4( (q_1 - q_2)^4 + (q_1 + q_2)^4 ). Double pendulum: Let q_i and p_i denote the angle and angular momentum of pendulum i = 1,2. The double pendulum system has a Hamiltonian that is not separable, where y=[q_1,q_2,p_1,p_2]^T ∈ℝ^4 and the Hamiltonian is given by H(q_1,q_2,p_1,p_2) = ( 1/2 p_1^2 + p_2^2 - p_1p_2cos(q_1-q_2) )/( 1+sin^2(q_1-q_2) ) - 2cos(q_1) - cos(q_2).
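As an illustration, the two Hamiltonians above can be written down together with the noisy data-generation procedure described in the experiments section. This is a minimal sketch that assumes finite-difference gradients in place of automatic differentiation; all names and tolerances are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def H_fput(q, p, omega=2.0):
    # FPUT with m = 1: H = (p1^2 + p2^2)/2 + (omega^2/2) q2^2 + ((q1-q2)^4 + (q1+q2)^4)/4
    return 0.5 * (p[0]**2 + p[1]**2) + 0.5 * omega**2 * q[1]**2 \
        + 0.25 * ((q[0] - q[1])**4 + (q[0] + q[1])**4)

def H_double_pendulum(q, p):
    c, s = np.cos(q[0] - q[1]), np.sin(q[0] - q[1])
    return (0.5 * p[0]**2 + p[1]**2 - p[0] * p[1] * c) / (1.0 + s**2) \
        - 2.0 * np.cos(q[0]) - np.cos(q[1])

def hamiltonian_vector_field(H, y, eps=1e-6):
    # f(y) = (dH/dp, -dH/dq) for y = [q, p]; central differences keep the
    # sketch self-contained (automatic differentiation would be used in practice).
    d = len(y) // 2
    grad = np.array([
        (H((y + dy)[:d], (y + dy)[d:]) - H((y - dy)[:d], (y - dy)[d:])) / (2 * eps)
        for dy in eps * np.eye(len(y))
    ])
    return np.concatenate([grad[d:], -grad[:d]])

def noisy_trajectory(H, y0, h, N, sigma):
    # Reference solution with DOP853 plus i.i.d. Gaussian perturbations,
    # mirroring the training-data setup described above.
    t_eval = h * np.arange(N + 1)
    sol = solve_ivp(lambda t, y: hamiltonian_vector_field(H, y), (0.0, h * N),
                    y0, t_eval=t_eval, method="DOP853", rtol=1e-13, atol=1e-13)
    return sol.y.T + sigma * np.random.randn(N + 1, len(y0))
```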
Hénon–Heiles: This model was introduced for describing stellar motion inside the gravitational potential of a galaxy, as described in <cit.>. This Hamiltonian is separable. However, it is a canonical example of a chaotic system and its properties are discussed in more detail in <cit.>. The Hamiltonian is given by H(q_1,q_2,p_1,p_2) = 1/2(p_1^2 + p_2^2) + 1/2(q_1^2 + q_2^2) + q_1^2q_2 - 1/3q_2^3. § ADDITIONAL NUMERICAL RESULTS Here we present additional numerical experiments. Figure <ref> shows the flow error when learning from data without noise. The roll-out in time of the learned Hamiltonian for the FPUT and Hénon–Heiles problems is presented in Figure <ref>. § MORE ON NUMERICAL INTEGRATION §.§ Runge–Kutta methods A general Runge–Kutta method for an autonomous system with s stages is a one-step numerical integrator given by y_n+1 = y_n + h∑_i=1^s b_i k_i, k_i = f (y_n + h∑_j=1^s a_ij k_j ), i = 1,…,s. A concrete method is determined by specifying the coefficient matrix A ∈ℝ^s× s and the vector b ∈ℝ^s, and there are conditions for symplecticity and order associated with these <cit.>. The conditions for order p=1 require that the coefficient vector c∈ℝ^s is determined by c_i = ∑_j=1^s a_ij. A method can be compactly represented by a Butcher tableau, which places the vector c to the left of the coefficient matrix A, with the row vector b^T below A. The two symplectic and symmetric Gauss–Legendre methods (found e.g. in <cit.>) with order p=4,6, denoted as GL4 and GL6 in Table <ref>, are presented below. §.§ Mono-Implicit Runge–Kutta methods The MIRK methods are specified by coefficient vectors b∈ℝ^s and v∈ℝ^s in addition to the strictly lower triangular matrix D∈ℝ^s× s, and can be represented by an extended Butcher tableau, which places the vector v to the left of the matrix D, with the row vector b^T below D. In <cit.> it is proved that the maximum order of an s-stage MIRK method is p = s+1 and several methods with stages s ≤ 5 are presented. Below, we specify the MIRK methods used in the numerical experiments in addition to presenting their extended Butcher tableaus in Figure <ref>. * Midpoint: The symmetric and symplectic MIRK method where (s,p) = (1,2) is equivalent to the midpoint method. * MIRK3: The method (s,p)=(2,3) found by choosing c_1 = 1 in <cit.>. * MIRK4: The method (s,p)=(3,4) with x_31 = 1/8 in <cit.>, first presented in <cit.>. * MIRK5: The method (s,p)=(4,5) presented in <cit.>, choosing c_2=0 and c_3=3/2. It should be noted that as long as c_3 > 1 the method is A-stable; however, the particular choice of c_3=3/2 is arbitrary. * MIRK6: The method (s,p)=(5,6) presented in <cit.>, which is the s=5 stage scheme in <cit.> choosing c_3 = 1/2 - √(21)/14. According to <cit.>, this method is an improvement over earlier schemes of the same form which used c_3 = 1/4. §.§ Symmetric methods: The exact flow of an ODE satisfies the following property known as (time) symmetry: y(t_0)=φ_h,f^-1(y(t_0 +h))=φ_-h,f(y(t_0 + h)), where the superscript "-1" denotes the inverse map. This is a desirable property also for the numerical approximation. A numerical integration method Φ_h,f is called symmetric if Φ_h,f = Φ_-h,f^-1. Symmetric numerical methods have the following properties <cit.>: * A symmetric integrator preserves the (time) symmetry of the exact flow. * The order p of a symmetric method is necessarily even. * Solutions of Hamiltonian systems satisfy the following reflection symmetry: if (q(t),p(t)) solves the Hamiltonian ODE, then (q(-t),-p(-t)) is also a solution, with y(t) = [q(t),p(t)]^T.
Numerical solutions (q_n,p_n) obtained from a symmetric Runge–Kutta method satisfy the same reflection symmetry <cit.>. A Runge–Kutta method is symmetric if and only if PA + AP - 𝟙b^T = 0, b = Pb, where 𝟙 := [1,…,1]^T ∈ℝ^s and [P]_ij = δ_i,s+1-j <cit.>. That is, P is the reflection of the identity matrix over the first axis. Inserting the definition of a MIRK method from (<ref>), we get PD + DP + (Pv + v - 𝟙)b^T = 0, b = Pb. Symmetric MIRK methods of order p = 2,4,6 are presented in <cit.> and specific examples are found in Figure <ref>. § DETAILS ON THE INVERSE INJECTION IN MII Assume we are deriving the MII following the example in Equation (<ref>) using the implicit midpoint method, where y_n+1 = y_n + hf( (y_n + y_n+1)/2 ) = y_n + hΨ_n,n+1. We thus find that the second term in (<ref>), the composition of two steps starting from ỹ_0, can be approximated by ŷ_2 = Φ_h,f∘Φ_h,f(ỹ_0) = Φ_h,f(ỹ_0) + hf ( (Φ_h,f(ỹ_0) + ŷ_2)/2 ) ≈ Φ_h,f(ỹ_0) + hf ( (ỹ_1 + ỹ_2)/2 ) = ỹ_0 + hf ( (ỹ_0 + Φ_h,f(ỹ_0))/2 ) + hΨ_1,2 ≈ ỹ_0 + hf ( (ỹ_0 + ỹ_1)/2 ) + hΨ_1,2 = ỹ_0 + hΨ_0,1 + hΨ_1,2, where the approximation ≈ is obtained by the substitution ỹ_2 →ŷ_2 and ỹ_1 →Φ_h,f(ỹ_0). The same procedure (repeatedly using the inverse injection) is generalized over longer trajectories and used to arrive at the MII method in Definition <ref>. § DETAILS ON NEURAL NETWORK TRAINING The experiments were performed on an Apple M1 Pro chip with double precision. The PyTorch L-BFGS <cit.> algorithm is run with the following parameters: * History size: 120. * Gradient tolerance: 10^-9. * Termination tolerance on parameter changes: 10^-9. * Line search: Strong Wolfe. Both MII and ISO work better when f_θ has been pre-trained to be a reasonable approximation of the underlying vector field f. Thus, for both MII and ISO, training is run for 10 epochs with the one-step method before training an additional 10 epochs with MII (MII MIRK4) and ISO (ISO Störmer and ISO RK4). The ISO procedure (searching for the optimal initial value ŷ_0) utilizes the L-BFGS optimization algorithm from the SciPy library <cit.> with a gradient tolerance of 10^-6 and the maximum number of iterations limited to 10. § PROOF OF THEOREM <REF> As stated in Equation (<ref>), a Runge–Kutta method as given by Equation (<ref>) is symplectic if and only if b_i a_ij + b_j a_ji - b_i b_j = 0. Inserting the particular form of the MIRK coefficients, a_ij = d_ij + v_ib_j, we get b_i(v_i b_j + d_ij) + b_j(v_j b_i + d_ji) - b_ib_j = 0, which simplifies to b_i d_ij + b_j d_ji + b_i b_j (v_j + v_i - 1) = 0. As D is strictly lower triangular, either d_ji = 0 or d_ij = 0, which for Equation (<ref>) implies that b_i d_ij + b_i b_j (v_j + v_i - 1) = 0 if i ≠ j and b_i^2 (2v_i - 1) = 0 if i = j. Requiring d_ij, b_i and v_i to satisfy the symplecticity condition yields the following restrictions: b_i d_ij + b_i b_j (v_j + v_i - 1) = 0 if i ≠ j, b_i = 0 or v_i = 1/2 if i = j. Without loss of generality, we assume that the first m entries of b∈ℝ^s are zero. Enforcing the conditions of Equation (<ref>) on v∈ℝ^s, we get for 1 ≤ m≤ s b = [0,…,0,b_m+1,…,b_s]^T, v = [v_1,…,v_m,1/2,…,1/2]^T.
In total, this gives the following constraints for v,b,D: [ b_j = 0 b_j ≠ 0; ; b_i = 0 d_ij∈ d_ij∈; v_i, v_j ∈ v_i,v_j ∈; ; ; b_i ≠ 0 d_ij = 0 d_ij =0; v_i,v_j ∈ v_i, v_j = 1/2; ; ] Which for the Runge–Kutta method A = D + vb^T gives a (RK) Butcher tableau of the form 1.2[ 0 0 0 v_1 b_m+1 v_1 b_s; d_21 0 0 ; d_31 d_32 0 ⋮ ⋮; ⋮ ⋱ ; d_m,1 d_m,m-1 0 v_m b_m+1 v_m b_s; 0 0 b_m+1 b_s; ⋮ ⋮ ⋮ ⋮; 0 0 b_m+1 b_s; 0 0 b_m+1 b_s; ] Since the lower left submatrix is the zero matrix, this leaves the stages k_m+1,…,k_s unconnected to the first m stages. In addition, as b_i = 0 for i=1,…,m, these stages are not included in the computation of the final integration step. The method is thus reducible to the lower right submatrix of A and b_m+1,…,b_s. The reduced method is thus in general given by the following stage-values k_i = f(y_n + h/2∑_j^s b_jk_j). It is trivial to check that if ∑_i^s b_i = 1 the method satisfies order conditions up to order p=2, which could be found in <cit.> to be ∑_i b_i = 1, and ∑_i,j b_i a_ij = 1/2. However, the method fails to satisfy the first of the two conditions required for order p=3, since ∑_i,j,k b_i a_ij a_ik = 1/4∑_i,j,k b_i b_j b_k = 1/4≠1/3. Hence, the maximum order of a symplectic MIRK method is p=2. As a remark, it should be noted that the s=1 stage, symplectic MIRK method found by setting b_1 = 1, v_1 = 1/2 and d_11 = 0 is simply the midpoint method y_n+1 = y_n + hk_1 with k_1 = f(y_n + y_n+1/2). § PROOF OF THEOREM <REF> Let s_i(y_n,y_n+1) := y_n + v_i(y_n+1 - y_n) and ỹ_n be noisy data (<ref>). Observe that we can obtain the following approximation to the MIRK stages (<ref>) by k_i = f(ỹ_n + v_i(ỹ_n+1 - ỹ_n) + h∑_j=1^s d_ij k_j ) = f (s_i(y_n,y_n+1) + s_i(δ_n,δ_n+1) + 𝒪(h) ) = f(s_i(y_n,y_n+1)) + f'(s_i(y_n,y_n+1))s_i(δ_n,δ_n+1) + 𝒪(s(δ_n,δ_n+1) ^2) + 𝒪(h) = f(y_n) + f'(y_n)s_i(δ_n,δ_n+1) + 𝒪(s(δ_n,δ_n+1) ^2) + 𝒪(h). Where in the final equality we expand y_n+1 = y_n + hf(y_n) + 𝒪(h^2) to find f(s_i(y_n,y_n+1)) = f(y_n + v_i(y_n+1 - y_n)) = f(y_n + hv_i f(y_n) + 𝒪(h^2)) = f(y_n) + 𝒪(h). And similarly for f'(s_i(y_n,y_n+1)). In total, this means that the next MIRK-step could be approximated by y_n+1 = ỹ_n + h∑_i=1^s b_i k_i = ỹ_n + h∑_i=1^s b_i ( f(y_n) + f'(y_n)s_i(δ_n,δ_n+1) ) + 𝒪(hs(δ_n,δ_n+1)^2 ) + 𝒪(h^2). First note, that if x is a multivariate normally distributed random variable x ∼𝒩(0,Σ) then for a matrix G ∈^n× n the variance of the linear transformation is given by [Gx] := Cov[Gx,Gx] = GΣ G^T. Now, using the approximation in Equation (<ref>), we find the variance of the optimization target 𝒯^OS_n+1 =ỹ_n+1 - Φ_h,f(ỹ_n, ỹ_n+1) by [ ỹ_n+1 - ỹ_n - h∑_i=1^s b_i k_i ] ≈ [ δ_n+1 - δ_n - h∑_i=1^s b_i ( f'(y_n)s_i(δ_n,δ_n+1) ) ] = [ (I - hb^Tv f'(y_n) )δ_n+1 - (I + h b^T( - v) f'(y_n) )δ_n ] = σ^2 (I - hb^Tv f'(y_n) ) (I - hb^Tv f'(y_n) )^T + σ^2 (I + hb^T( - v) f'(y_n) ) (I + hb^T( - v) f'(y_n) )^T = σ^2 [2I + hb^T( - 2v)(f'(y_n) + f'(y_n)^T) + h^2 ((b^Tv)^2 + (b^T( - v))^2)f'(y_n)f'(y_n)^T_:= Q^OS ] = σ^2 [2I + hb^T( - 2v)(f'(y_n) + f'(y_n)^T) + h^2 Q^OS ]. Here, := [1,…,1]^T∈^s. This is the variance estimate we wanted to find for MIRK methods used as one-step integration schemes. Similarly, considering a point computed by the mean inverse integrator y_n, we find, using the stage approximation by Equation (<ref>) that y_n = 1/N∑_j = 0 j≠ n^Nỹ_j + h/N∑_j=0^N-1 w_n,j∑_l=1^s b_l k_l ≈1/N∑_j = 0 j≠ n^Nỹ_j + h/N∑_j=0^N-1 w_n,j∑_l=1^s b_l ( f(y_j) + f'(y_j)s_l(δ_j,δ_j+1) ) Where we note that w_n,j := [W]_nj, from Definition <ref> of the MII. Let y_n := [Y]_n. 
Computing the variance of the optimization target 𝒯^MII_i = ỹ_n - y_n we find, by introducing P_nj to simplify notation, that [ ỹ_n - y_n] ≈ [ δ_n - 1/N∑_j = 0 j≠ n^Nδ_j - h/N∑_j=0^N-1 w_n,j f'(y_j) ( b^T( - v)δ_j + b^Tv δ_j+1) ] = [ 1/N∑_j = 0 j≠ n^N (I + hf'(y_j) ( w̃_n,j b^T( - v) + w̃_n,j-1b^Tv )_:=P_nj )δ_j ] + [ ( I - h/N f'(y_n) ( w̃_n,n b^T( - v) + w̃_n,n-1b^Tv )_=P_nn ) δ_n ] = [ 1/N∑_j = 0 j≠ n^N (I + hP_nj )δ_j ] + [ ( I - h/NP_nn ) δ_n ] = σ^2 /N^2∑_j = 0 j≠ n^N ( I + hP_nj ) (I + hP_nj )^T + σ^2 ( I - h/NP_nn )( I - h/NP_nn )^T. In the second line, w̃_n,j is introduced which is elements of a matrix W̃ = [ 0 | w_1 | w_2 | … | w_N | 0 ] ∈^N × N+1, or in other words the matrix you obtain by padding W right and left with a column of zeros. Expanding the terms and introducing matrices P_nj and Q^MII we finally find [ ỹ_n - y_n] ≈ σ^2/N[ (1 + N)I + h(P_nn + P_nn^T)_= P_nn + h/N∑_j = 0 j≠ n^s( P_nj + P_nj^T)_:= P_nj + h^2/N∑_j = 0^s P_njP_nj^T_:= Q^MII] = σ^2/N[ (1 + N)I + hP_nn + h/N∑_j = 0 j≠ n^s P_nj + h^2/N Q^MII]. Since for a symmetric matrix A we have that the spectral radii ρ (largest absolute value of eigenvalues) could be found by ρ(A) = A_2, we find for both variance approximations (covariance matrix is always symmetric) that ρ( [ ỹ_n - y_n] ) ≈σ^2/N (1 + N)I + hP_nn + h/N∑_j = 0 j≠ n^s P_nj + h^2/N Q^MII_2 ρ( [ ỹ_n+1 - Φ_h,f(ỹ_n,ỹ_n+1) ] ) ≈σ^2 2I + hb^T( - 2v)(f'(y_n) + f'(y_n)^T) + h^2 Q^OS_2 . Finally, we note that: Q^OS := ((b^Tv)^2 + (b^T( - v))^2)f'(y_n)f'(y_n)^T P_nj := f'(y_j) ( w̃_n,j b^T( - v) + w̃_n,j-1b^Tv ) P_nj := P_nj + P_nj^T Q^MII := ∑_j = 0 ^s P_njP_nj^T. § HIGHER-ORDER INVERSE-EXPLICIT INVARIANT-PRESERVING SYMMETRIC NON-PARTITIONED INTEGRATORS We define invariant-preserving integrators as methods that preserve the Hamiltonian or other invariants of the exact solution, either exactly up to machine precision or within a bound, like symplectic methods. Although we argue in this paper that symplecticity is a less important property when learning Hamiltonian systems from data than for integration of a known system, we do not mean to suggest that invariant-preserving integrators may not be beneficial to some extent and have important qualities in the inverse problem also. However, we urge anyone who seeks to use invariant-preserving methods to also consider the order of the method and whether it is a symmetric inverse-explicit method. Although the maximum order of a symplectic inverse-explicit Runge–Kutta method is two, there exist higher-order inverse-explicit invariant-preserving integrators that are not Runge–Kutta methods. Note that partitioned Runge–Kutta (PRK) methods is an extension that does not belong to the class of Runge–Kutta methods. This is important to clarify since there exist PRK methods that are symmetric and explicit for separable systems. This marks a distinction from non-partitioned RK methods: these cannot be symmetric and explicit in general <cit.>. Several papers suggest using symplectic PRK methods for learning Hamiltonian systems <cit.>, but these methods, although symmetric, only depend on one point to approximate the right-hand side of each integration step, and thus do not average out any noise. §.§ Symplectic elementary differential Runge–Kutta methods Chartier et al. showed in <cit.> that an integrator can be applied to a modified vector field in such a way that it yields a higher-order approximation of the original vector field while inheriting the geometric properties of the given integrator. 
As an example, they present the fourth-order modified implicit midpoint method y_n+1-y_n/h = f(y̅) + h/12(-Df(y̅)Df(y̅)f(y̅) + 1/2D^2 f(y̅) f(y̅) f(y̅) ), where y̅ = (y_n+y_n+1)/2. This is an example of an elementary differential Runge–Kutta (EDRK) method <cit.>, which relies on the calculation of (multi-order) derivatives of the vector field f, denoted here as D^p f for order p. Automatic differentiation can be utilized also to get higher-order derivatives, and we note that f, Df and D^2f each only have to be evaluated once for each training step since they are only evaluated at the one point y̅. A sixth-order modification of the implicit midpoint method is also presented in <cit.>, but that requires the calculation of up to fourth-order derivatives and might be considered prohibitively expensive. §.§ Discrete gradient methods Discrete gradient methods are a class of integrators that can preserve an invariant, e.g. the Hamiltonian, exactly <cit.>. This is in contrast to symplectic methods, which only preserve a perturbation of the invariant exactly and the exact invariant within some bound. We remark that no method can be both symplectic and exactly invariant-preserving in general <cit.>. Discrete gradient methods are defined strictly for invariant-preserving ODEs, which can be written on the form ẏ = S(y) ∇ H(y), for some skew-symmetric matrix S(y) <cit.>. Then a discrete gradient is a function ∇ H : ℝ^d ×ℝ^d →ℝ satisfying ∇ H(u,v)^T (u-v) = H(u)-H(v), a discrete analogue to the invariant-preserving property Ḣ(y) = ∇ H(y)^Tẏ = 0 of (<ref>). A corresponding discrete gradient method is then given by y_n+1-y_n/h = S(y_n,y_n+1,h) ∇ H(y_n,y_n+1), for some approximation S(y_n,y_n+1,h) of S(y) such that S(y,y,0) = S(y), where h is the step size in time. A discrete gradient can at most be a second-order approximation of the gradient, but appropriate choices of S can yield inverse-explicit integrators of arbitrarily high order <cit.>. Matsubara et al. have developed a discrete version of the automatic differentiation algorithm that makes it possible to efficiently calculate a discrete gradient of neural network functions, and demonstrated its use in training of HNNs <cit.> and for detecting invariants <cit.>. A fourth-order discrete gradient method is suggested for training HNNs in <cit.>, given a constant S in (<ref>). This is the scheme (<ref>) with S(y_n,·,h) = S + 8/9 h S Q(y_n,z_2) S - 1/12 h^2 SD^2 H(z_1)S D^2 H(z_1)S, with z_1 = y_n + 1/2 h f(y_n), z_2 = y_n + 3/4h f(z_1) and Q(u,v) := 1/2 (D_2 ∇H (u,v)^T - D_2 ∇H(u,v)), where D_2 ∇H denotes the derivative of ∇H with respect to the second argument, and D^2H :=D∇ H is the Hessian of H. This is not symmetric, so we propose here instead the fourth-order symmetric invariant-preserving scheme obtained by S(y_n,y_n+1,h) = S + h/2 S (Q(y_n,1/3y_n+2/3y_n+1)-Q(y_n+1,2/3y_n+1/3y_n+1)) S - 1/12 (h)^2 SD^2 H(y̅)S D^2 H(y̅)S. §.§ Numerical comparison of fourth-order integrators We test four different fourth-order integrators on solving an initial value problem of the double pendulum described in Appendix <ref>. We compute an approximation of the error of the solution at each time by comparing to a solution obtained using RK4 with 10 times as many time steps. As seen in the left plot of Figure <ref>, the symmetric methods are clearly superior to the explicit RK4 method, when using the same step size. For integration, the advantage of RK4 is that it is more computationally efficient than the implicit methods, which facilitates taking smaller step sizes. 
However, as pointed out in Section <ref>, RK4 does not have this advantage over MIRK methods for the inverse problem. Furthermore, although the higher-order MIRK methods we suggest using in this paper are not symplectic and thus lack general energy preservation guarantees, we see from Figure <ref> that they may still preserve the energy within a bound for specific problems. In fact, for the double pendulum problem considered here, the non-symplectic MIRK4 method preserves the energy slightly better than the symplectic MIMP4 scheme up to time T=500. The invariant-preserving discrete gradient method preserves the Hamiltonian to machine precision. § COMPUTATIONAL COST The fourth-order MIRK method from Table <ref> (MIRK4) is between twice and thrice as expensive as the implicit midpoint method, depending on the training strategy. That is, if no batching is performed and f is evaluated at all points in the training set at each iteration of the optimization, then the number of function evaluations for a trajectory with n points is n-1 for the implicit midpoint method and 2n-1 for MIRK4. However, if batching is done and function evaluations cannot generally be reused for successive points, the total number of function evaluations at each epoch may increase to 3n-3 for MIRK4. In general, the cost of an s-stage MIRK method depends on both the training strategy and whether the end points y and ŷ are two of the stages. If batching is not done and y and ŷ are two of the stages, then the computational cost at each epoch is 𝒪(m (n + (s-2)(n-1)) ), where m is the number of trajectories, each with n points. The maximum cost with batching is the same as the cost when y and ŷ are not two of the stages: 𝒪(m s (n-1)). This cost is equivalent to that of an explicit s-stage RK method.
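These counts can be tabulated with a small helper; the function below is purely illustrative and simply evaluates the formulas stated above.

```python
def vector_field_evaluations_per_epoch(n, m, s, endpoints_are_stages, batching):
    # n: points per trajectory, m: number of trajectories, s: number of stages.
    if endpoints_are_stages and not batching:
        # Evaluations at the trajectory points are shared between neighbouring steps.
        return m * (n + (s - 2) * (n - 1))
    return m * s * (n - 1)

# Implicit midpoint (s=1, its single stage is not an end point): m*(n-1) evaluations.
# MIRK4 without batching (s=3, end points among the stages):     m*(2n-1).
# MIRK4 with batching:                                           m*(3n-3).
```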
http://arxiv.org/abs/2306.08284v2
20230614064533
Free post-groups, post-groups from group actions, and post-Lie algebras
[ "Mahdi Jasim Hasan Al-Kaabi", "Kurusch Ebrahimi-Fard", "Dominique Manchon" ]
math.QA
[ "math.QA" ]
Post-groups] Free post-groups, post-groups from group actions, and post-Lie algebras M. J. H. Al-Kaabi]Mahdi Jasim Hasan Al-Kaabi Mathematics Department, College of Science, Mustansiriyah University, Palestine Street, P.O.Box 14022, Baghdad, IRAQ. [email protected] K. Ebrahimi-Fard]Kurusch Ebrahimi-Fard Department of Mathematical Sciences, NTNU, NO 7491 Trondheim, Norway. [email protected] https://folk.ntnu.no/kurusche/ D. Manchon]Dominique Manchon LMBP, C.N.R.S.-Université Clermont-Auvergne, 3 place Vasarély, CS 60026, 63178 Aubière, France [email protected] https://lmbp.uca.fr/ manchon/ After providing a short review on the recently introduced notion of post-group by Bai, Guo, Sheng and Tang, we exhibit post-group counterparts of important post-Lie algebras in the literature, including the infinite-dimensional post-Lie algebra of Lie group integrators. The notion of free post-group is examined, and a group isomorphism between the two group structures associated to a free post-group is explicitly constructed. [ [ July 31, 2023 ================= § INTRODUCTION The notions of pre- and post-group have been put forward recently by Chengming Bai, Li Guo, Yunhe Sheng and Rong Tang in the work <cit.>. Both motivation and terminology derive from the corresponding infinitesimal objects known as pre- and post-Lie algebras. The former simultaneously appeared sixty years ago in the works of Murray Gerstenhaber <cit.> and Ernst Vinberg <cit.>. Although, the idea can be traced even further back to work by Arthur Cayley <cit.> on the connection between trees and vector fields; The notion of post-Lie algebra, on the other hand, appeared far more recently, in works by Bruno Vallette in 2007 <cit.> and, independently, by Hans Munthe-Kaas and Will Wright in 2008 <cit.> who introduced the closely related notion of D-algebra and highlighted their relevance in the context of Klein geometries. In fact, pre- as well as post-Lie algebras arise naturally in the context of the geometry of invariant connections defined on manifolds. Examples of pre- and post-groups are easily identified in the works by Daniel Guin and Jean-Michel Oudom <cit.> respectively by Alexander Lundervold, Hans Munthe–Kaas and the second author <cit.>. A (pre-) post-group is a (commutative) group (G,.) together with a family (L_a^)_a ∈ G of group automorphisms such that the relation L^_a ∘ L^_b=L^_a . L^_a(b) holds for any a,b ∈ G. The triangle denotes the additional binary product a b =: L_a^ (b) defined on G. The definition of (pre-) post-group formalises properties of group-like elements in the – completion of the – enveloping algebra of a (pre-) post-Lie algebra <cit.>. Furthermore, one observes that the notion of post-group is equivalent to two notions closely related to set-theoretical solutions of the Yang–Baxter equation <cit.>, namely braided groups <cit.> and skew-braces <cit.>, see <cit.>. We begin this short note with a quick review of the basic properties of post-groups in Section <ref>. A short account of the main results of <cit.> is provided in Section <ref>. Three new nontrivial examples of post-groups are discussed in Section <ref>. The first example is given by maps from a set M into a group G, with the latter acting on M from the right. The second example is the smooth analog of the first one, namely smooth maps from a smooth manifold ℳ into a Lie group G right-acting differentiably on ℳ. 
In Paragraph <ref>), we show that the post-Lie algebra naturally associated with this post-Lie group coincides with the post-Lie algebra of Lie group integrators considered in <cit.>. The last family of examples (Theorem <ref>) is given by free post-groups generated by left-regular magmas obeying certain regularity conditions, which we named diagonal left-regular magmas. We construct (Proposition <ref>) a group isomorphism 𝒦 between the two group structures of a free post-group, reminiscent of Aleksei V. Gavrilov's K-map between the two Lie algebra structures of a free post-Lie algebra <cit.>, see also <cit.>. Acknowledgements : This study was funded by the Iraqi Ministry of Higher Education and Scientific Research. Thanks to Mustansiriyah University, College of Science, Mathematics Department for their support in carrying out this work. D. Manchon acknowledges a support from the grant ANR-20-CE40-0007 Combinatoire Algébrique, Résurgence, Probabilités Libres et Opérades. K. Ebrahimi-Fard is supported by the Research Council of Norway through project 302831 “Computational Dynamics and Stochastics on Manifolds” (CODYSMA). § BASIC DEFINITIONS AND PROPERTIES <cit.> A post-group (G,.,) is a group (G,.) endowed with a binary map : G× G→ G such that L_a^ (-):=a - is a group automorphism, and such that the following identity holds: (a*b) c=a(b c), for any a,b,c∈ G, where a*b := a.(a b). The inverse of any a∈ G will be denoted by a^.-1. The following statement identifies the product (<ref>) as a group law. <cit.> The binary map *:G× G→ G defined by a*b:=a.(a b) provides G with a second group structure. Both groups (G,.) and (G,*) share the same unit e, and the inverse for * is given by a^*-1=(L_a^)^-1a^.-1. The binary map * is an action of the group (G,*) on the group (G,.) by automorphisms. The proof can be found in <cit.>, we reproduce it for the reader's convenience: associativity of * comes from the associativity of the group law on G by following direct computation: (a*b)*c = (a.(a b))*c = (a.(a b)).((a.(a b)) c) = (a.(a b)).(a(b c)) whereas a*(b*c) = a*(b.(b c)) = a.(a(b.(b c))) = a.((a b).(a(b c))). Let e be the unit of the group (G,.). From a e=e for any a∈ G we easily deduce that a*e=a.(a e)=a. On the other hand, the property e a=a implies e*a=e.(e a)=a, hence e is the unit for the new product *. Finally, defining the element b:=(L_a^)^-1a^.-1, for a ∈ G, we check quickly that a*b = a.(a b) = a.(L_a^(L_a^)^-1a^.-1) = a.a^.-1=e. Any element in G therefore admits a right-inverse with respect to the product *. A standard argument shows that the right-inverse is also a left-inverse, and that the inverse thus obtained is unique. Let (G,.,) be a post-group. The associated Grossman–Larson group[It is called the subjacent group in <cit.>. It makes however sense to take into account that any post-group (G,.,) gives rise to two subjacent groups, namely (G,.) and (G,*). The terminology we have chosen refers to the Grossman–Larson bracket in a post-Lie algebra <cit.>, which itself refers to the Grossman–Larson product of rooted forests <cit.>.] (G,*) has the group law a*b=a.(a b), for any a,b ∈ G. <cit.> A pre-group is a post-group (G,.,) where the group (G,.) is Abelian. This terminology should remind the reader of the well-known fact that a pre-Lie algebra can be seen as a post-Lie algebra with an Abelian Lie bracket. Grossman–Larson groups associated to pre-groups are not Abelian in general. 
Pre-groups naturally associated with free pre-Lie algebras appear as early as 1981 under the terminology “group of formal flows” <cit.>. Let (G,.,) be a post-group. The opposite post-group is given by (G,,▸) with a b:=b.a and a ▸ b:=a.(a b).a^.-1, for any a,b∈ G. We leave it to the reader to check that (G,,▸) is indeed a post-group, sharing with (G,.,) the same Grossman–Larson product, namely a*b =a.(a b) =a(a▸ b). § QUICK REVIEW OF THE MAIN RESULTS OF REFERENCE <CIT.> We offer here a quick guided tour of the article <cit.> by C. Bai, L. Guo, Y. Sheng and R. Tang, where the reader will find the detailed statements and proofs. §.§ Post-groups, braided groups and skew-braces The main structural result of <cit.> is the equivalence between three different notions, i.e., that of a post-group, a braided group and a skew brace (Section 3 therein). Braided groups appeared in 2000 in an influential article on set-theoretical solutions of the Yang–Baxter equation by J.-H. Lu, M. Yan and Y.-C. Zhu <cit.>. Another construction related to Yang–Baxter equations, namely skew-braces, appeared more recently in the 2017 article by L. Guarnieri and L. Vendramin <cit.>, following the introduction of braces by W. Rump[These braces should not be confused with their homonyms related to pre-Lie algebras and operads, see e.g. <cit.>. Interestingly enough, this conflict of terminology identifies pre-Lie algebras with strongly nilpotent braces in the first sense <cit.>, and with symmetric braces in the second sense <cit.>.] <cit.>. Let 𝒞 be a braided monoidal set category, i.e. a full subcategory of the category of sets, stable by cartesian product and endowed with a braiding σ, that is, a collection of bijective maps σ_XY: X × Y → Y × X indexed by ordered pairs (X,Y) of objects of 𝒞, subject to * functoriality: for any pair of set maps f:X→ X' and g:Y→ Y', the equality σ_X' Y'∘ (f× g)=(g× f)∘σ_XY holds. * compatibility with the cartesian product: σ_X× X', Y× Y' =(Id_Y×σ_XY'×Id_X')∘(σ_XY×σ_X'Y') ∘ (Id_X×σ_X'Y×Id_Y'), * the hexagon equation (σ_YZ×Id_X)∘ (Id_Y×σ_XZ)∘ (σ_XY×Id_Z) =(Id_Z×σ_XY)∘ (σ_XZ×Id_Y)∘ (Id_X×σ_YZ). In particular, any object S of 𝒞 is a braided set, the braiding map σ_SS: S × S → S × S being a bijection satisfying the braid equation (σ×Id_S)∘ (Id_S×σ)∘ (σ×Id_S) =(Id_S×σ)∘ (σ×Id_S)∘ (Id_S×σ), where we have abbreviated σ_SS by σ. The category of sets itself is braided with the flip maps P_XY:X× Y→ Y× X defined by P_XY(a,b)=(b,a) (trivial braiding). A map σ:S× S→ S× S verifies (<ref>) if and only if the map R=P_SS∘σ is a solution of the set-theoretical Yang–Baxter equation[The term "Yang–Baxter equation" is sometimes used for the braid equation (<ref>) in the literature, thus bringing some confusion. We adopt here the conventions of <cit.>.] R_12∘ R_13∘ R_23 = R_23∘ R_13∘ R_12, where, as usual, R_12= R ×Id_S, R_23=Id_S× R, and R_13=(Id_S×τ)∘(R×Id_S)∘ (Id_S×τ). The trivial braiding satisfies the additional symmetry property P_YX∘ P_XY=Id_X× Y for any sets X,Y. <cit.> A braided group is a commutative group in the braided monoidal set category generated by it. To be concrete, it is a pair (G,σ) where G is a group and σ: G × G → G × G is a bijection such that * σ∘(m× m)=(m× m)∘σ, where σ=σ_G× G, G× G:G^4→ G^4 is given by (<ref>), and where m:G× G→ G is the group multiplication, * m∘σ=m. 
In a braided group (G,σ), it turns out that <cit.> * the map σ in a braided group verifies the braid equation (<ref>), * the two maps ⇀,↼:G× G→ G defined by σ(g,h)=(g⇀ h, g↼ h) are respectively a left action and a right action of the group G on itself. Conversely, if a left action ⇀ and a right action ↼ together fulfil the compatibility condition gh=(g⇀ h).(g↼ h) for any g,h∈ G, then (G,σ) is a braided group, with the braiding σ given by (<ref>). <cit.> For any post-group (G,.,), the Grossman–Larson group (G,*) is a braided group, with braiding σ:G× G→ G× G given by σ(g,h) :=(g h, (g h)^*-1*g*h). Conversely, any braided group (G,σ) with product denoted by * gives rise to a post-group (G,.,) with g h:=g⇀ h and g.h :=g*(g^*-1⇀ h), where we have used the notation σ(g,h):=(g⇀ h, g ↼ h). Both correspondences are mutually inverse. Let (G,.,) be a post-group, and let σ:G× G→ G× G the corresponding braiding given by (<ref>). The braiding corresponding to the opposite post-group (G,,▸) is σ^-1. It is immediate from Definition <ref> that, for any braided group (G,σ), the pair (G,σ^-1) is also a braided group. Now let (G,.,) be a post-group, let σ be the corresponding braiding map, and let σ' be the braiding map of the opposite post-group (G,,▸). From (<ref>), we have σ(g,h)=(H,G) with H=g h and G=(g h)^*-1*g*h. Now we have σ'∘σ(g,h) = σ'(H,G) = (H▸ G, (H▸ G)^*-1*H*G) = (H.(H G).H^.-1, (H.(H G).H^.-1)^*-1*H*G) = (K,K^*-1*H*G), with K:=H.(H G).H^.-1. We therefore have K = (g h).((g h)((g h)^*-1*g*h)).(g h)^.-1 = (g h).{(g h)((g h)^*-1*(g.(g h)))}.(g h)^.-1 = (g h).{(g h){(g h)^*-1.((g h)^*-1(g.(g h)))}}.(g h)^.-1 = (g h).((g h) (g h)^*-1).{(g h)((g h)^*-1(g.(g h))}.(g h)^.-1 = (g h).((g h) (g h)^*-1).(g.(g h)).(g h)^.-1 = (g h).(g h)^.-1.g = g, so that K^*-1*H*G=g^*-1*(g h)*(g h)^*-1*g*h=h. Therefore we have σ' ∘σ(g,h) = (g,h). Both bijective maps σ and σ' are mutually inverse, which proves Proposition <ref>. <cit.> The braiding map corresponding to a pre-group is involutive. This is a direct consequence of the fact that any pre-group is equal to its opposite. <cit.> A skew-left brace is a set G endowed with two group structures (G,.) and (G,*) such that g*(h.k)=(g*h).g^.-1.(g*k) for any g,h,k∈ G. Any post-group (G,.,) gives rise to the left-skew brace (G,.,*) where * is the Grossman–Larson product <cit.>. Conversely, any left skew-brace gives rise to the post-group (G,.,) with g h:=g^.-1.(g*h) for any g,h∈ G. Both correspondences are mutually inverse <cit.>. To complete the picture, the opposite of a skew-brace (G,.,*) will be defined as the skew-brace (G,,*) where, as above, a b:=b.a for any a,b∈ G. A left skew-brace where the dot-group product is commutative is a left brace: see <cit.>, or <cit.> for a recent, purely algebraic account. A final remark is in order. The recent notions of both left-skew brace and post-group, therefore appear to be reformulations of the older notion of braided group developed by J.-H. Lu, M. Yan and Y. Zhu in <cit.>, shedding new light to it. §.§ Post-Lie groups and post-Lie algebras <cit.> A post-Lie group is a post-group (G,.,) where G is a smooth manifold, and where both operations are smooth maps from G× G to G. <cit.> A post-Lie algebra (over some field 𝐤) is a triple (𝔤, [-,-],) where (𝔤, [-,-]) is a Lie algebra, and : 𝔤×𝔤 is a bilinear map such that, for any X,Y,Z ∈𝔤, * X[Y,Z]=[X Y,Z]+[Y,X Z], * [X,Y] Z=X(Y Z)-(X Y) Z-Y(X Z)+(Y X) Z. 
As key property of post-Lie algebra we recall from <cit.> that (𝔤,-,-) is the corresponding Grossman–Larson Lie algebra[called subjacent Lie algebra in <cit.>.] given by the Grossman–Larson bracket defined by X,Y :=[X,Y]+X Y-Y X, for any X,Y∈𝔤. The older notion of pre-Lie algebra <cit.> follows from that of post-Lie algebra in the case of the latter having an Abelian Lie bracket. The corresponding Grossman–Larson bracket is thus given by the anti-symmetrization of the pre-Lie product, that is, pre-Lie algebras are Lie admissible. We remark that in <cit.> pre-Lie algebras are called chronological algebras. See <cit.> for detailed reviews. Let (𝔤,[-,-],) be a post-Lie algebra. Defining [X,Y]^op:=-[X,Y] and X▸ Y:=X Y+[X,Y], we have that (𝔤, [-,-]^op,▸) is a post-Lie algebra, which we call the opposite post-Lie algebra. Checking the post-Lie algebra axioms for (𝔤, [-,-]^op,▸) is left to the reader. Both post-Lie algebras share the same Grossman–Larson bracket <cit.>. Section 4 of <cit.> elucidates the relationship between post-Lie groups and post-Lie algebras. Namely, the Lie algebra 𝔤 of a post-Lie group G is a post-Lie algebra. The bilinear map is given by X Y:=d/dtt=0d/dss=0exptXexp(sY) (see <cit.>). Moreover, the Lie algebra of the Grossman–Larson group (G,*) is the Grossman–Larson Lie algebra (𝔤,-,-) <cit.>. Let G be a post-Lie group with post-Lie algebra 𝔤. The post-Lie algebra of the opposite post-Lie group of G is the opposite post-Lie algebra of 𝔤. It is well-known (and easily checked) that the Lie algebra of the opposite group is deduced from 𝔤 by changing the sign of the Lie bracket. We now compute: d/dtt=0d/dss=0exp (tX)▸exp (sY) = d/dtt=0d/dss=0exp (tX).(exp (tX)exp (sY)).exp (-tX) = d/dtt=0d/dss=0exp (tX).exp s(exp (tX) Y).exp (-tX) = d/dtt=0Ad (tX).(exp (tX) Y) = (adX+L_X^)(Y) = [X,Y]+X Y=X▸ Y, where, for any g∈ G and Y∈𝔤, the notation g Y stands for d/dss=0(gexp(sY)). §.§ Post-Hopf algebras vs. D-algebras D-algebras were introduced by H. Munthe-Kaas and W. Wright in <cit.> – independently of the introduction of post-Lie algebras by B. Vallette <cit.>. <cit.> The triple (D,.,) consists of a unital associative algebra (D,.) with product m_D(u ⊗ v)= u.v and unit 1, carrying another product : D ⊗ D → D such that 1 v =v for all v ∈ D. Let 𝔡 (D) :={u ∈ D | u (v. w) = (u v).w + v. (u w), ∀ v,w ∈ D}. We call (D,.,) a D-algebra if the algebra product . generates D from {1,𝔡 (D)} and furthermore for any x ∈𝔡 (D) and v,w ∈ D v x ∈𝔡 (D) (x. v) w = a_(x,v,w), where the associator is defined by a_(x,v,w):=x(y w)-(x y) w. Although the enveloping algebra of a post-Lie algebra always carries a D-algebra structure, the converse is not necessarily true. The paradigmatic example given in <cit.> is 𝒟:=C^∞(ℳ,𝒰(𝔤)) where ℳ is a smooth manifold, G is a Lie group acting on ℳ, with Lie algebra 𝔤, and where 𝒰(𝔤) is the universal enveloping algebra of 𝔤. The product is the pointwise product, and the action u - is given by the differential operator on ℳ defined by u through the action of G on ℳ. It is easily seen <cit.> that ℒ:=C^∞(ℳ,𝔤) is the post-Lie algebra 𝔡(𝒟), but 𝒟 is a quotient of 𝒰(ℒ). The D-algebra ℒ carries an associative Grossman–Larson product * corresponding to the composition of differential operators. It verifies u(v w)=(u*v) w for any u,v,w∈𝒟. It is compatible with the post-Lie algebra structure of ℒ in the sense that, for any X,Y∈ℒ we have X*Y-Y*X= X,Y. It is not clear whether any D-algebra admits such a Grossman–Larson product. Post-Hopf algebras have been recently introduced in <cit.>. 
See also <cit.>. They are examples of "good D-algebras", i.e. they are always equipped with a Grossman–Larson product. <cit.>, <cit.> A post-Hopf algebra (H,.,1,Δ,ε,S,) consists of a cocommutative Hopf algebra (H,.,1,Δ,ε,S), where :H⊗ H→ H is a coalgebra morphism such that, using Sweedler's notation Δ x=∑_(x)x_1⊗ x_2: * for any x,y,z∈ H we have x(y.z)=∑_(x)(x_1 y).(x_2 z), * for any x,y,z∈ H we have x(y z)=∑_(x)(x_1.(x_2 y)) z, * The operator L^:H→End H defined by L^_x(-):=x- admits an inverse β^ for the convolution product in Hom(H,End H). From <cit.> (see also <cit.>), any post-Hopf algebra H admits a Grossman–Larson product * and a linear map S_*:H→ H making H:=(H,*,1,Δ,ε, S_*) another cocommutative Hopf algebra. The Grossman–Larson product is given by x*y=∑_(x)x_1.(x_2 y), and the corresponding antipode is given by S_*(x)=∑_(x)β^_x_1(S(x_2)). Remark 4 in <cit.> can be reformulated as follows: the universal enveloping algebra of a post-Lie algebra is a post-Hopf algebra. Several results in <cit.> can be reformulated and showed in the post-Hopf algebra framework, with literally the same proofs. For example <cit.>, the product . in a post-Hopf algebra can be expressed from the Grossman–Larson product * and the corresponding antipode S_* by the formula x.y=∑_(x)x_1*(S_*(x_2) y). The notion of dual post-Hopf algebra can be introduced, by dualizing the axioms. This leads to a unital algebra carrying two different coproducts, yielding two Hopf algebras in cointeraction <cit.>. A more detailed account will be given in forthcoming work. § EXAMPLES §.§ Two post-groups naturally associated to a group The trivial post-group associated to a group (G,.) is given by (G,.,) where a b=b for any a,b∈ G. All maps L_a^:G→ G are therefore equal to the identity, and the Grossman–Larson product * coincides with the group-law of the group (G,.). The conjugation post-group is the opposite post-group of the trivial post-group. It is given by a b:=b.a and a▸ b:= a.b.a^.-1. The Grossman–Larson product is of course the same as the one of the trivial post-group, namely a*b=a.b. §.§ Post-groups from group actions Let M be a set endowed with a right action ρ: M × G → M of the (G,.). We use the notation mg for ρ(m,g). Let 𝒢:=G^M be the set of maps from M into G, endowed with the pointwise product f.g(m):=f(m).g(m). Let us introduce the map :𝒢×𝒢→𝒢 defined by (f g)(m):=g(mf(m)). The triple (𝒢,.,) is a post-group. The Grossman–Larson product is given by f*g(m)=f(m).g(mf(m)). It is clear that (𝒢,.) is a group. The unit is given by the constant function 𝐞 such that 𝐞(m)=e for any m∈ M, where e is the unit of G. The inverse is given by f^.-1(m):=f(m)^.-1 for any m∈ M. For any f∈𝒢, the map L_f^=f- is a group automorphism: indeed it is clearly bijective, and we have for any f,g,h∈𝒢 and any m∈ M: (f (g.h))(m) = (g.h)(mf(m)) = g(mf(m)).h(mf(m)) = (f g)(m).(f h)(m) = ((f g).(f h))(m). We have f*g(m)=f.(f g)(m)=f(m).g(m.f(m)), and we also have (f*g) h=f(g h), for any f,g,h∈𝒢, as shown by the computation below (where m is any element of M): (f*g) h(m) = h(m(f*g)(m)) = h{m(f(m).g(mf(m)))} whereas f(g h)(m) = (g h)(mf(m)) = h((mf(m))g(mf(m))) = h{m(f(m).g(mf(m)))}. §.§ Post-Lie groups from Lie group actions Suppose now that ℳ is a smooth manifold together with a smooth right action of a Lie group G on it. We denote by 𝒢 the set of smooth maps from ℳ into G, and by ℒ the vector space of smooth maps from ℳ into 𝔤 (thus matching the notation of Paragraph <ref>). 
Any X ∈ℒ gives rise to a vector field X on ℳ defined by X.φ(m):=d/dtt=0φ(mexp(tX(m))). It is well-known that ℒ is a post-Lie algebra <cit.>. In fact, ℒ is the post-Lie algebra of the "infinite-dimensional post-Lie group" 𝒢. To see this, it must be checked that the action : ℒ×ℒ→ℒ can be derived from the action : 𝒢×𝒢→𝒢 via differentiation as in (<ref>). This is deduced from the following computation: for any X,Y ∈ℒ and m ∈ℳ we have: d/dtt=0d/dss=0exp(tX)exp(sY)(m) = d/dtt=0d/dss=0exp(sY)(m(exp (tX)(m))) = d/dtt=0Y(m(exp (tX)(m))) = X.Y(m) = (X Y)(m). The triple (ℒ,[-,-],), where [-,-] is the pointwise Lie bracket, with the product appearing at the end of the computation above, is the post-Lie algebra described in <cit.>. Using the language of Lie algebroids, (ℒ,[-,-]) is the Lie algebra of sections of the Lie algebroid ℳ×𝔤 of the Lie group bundle ℳ× G above ℳ, whereas (ℒ, -,-) is Lie algebra of sections of the Lie algebroid of the transformation groupoid ℳ⋊ G. The anchor map of the first Lie algebroid is trivial, the anchor map of the second is given by X ↦ X. §.§ The free post-group generated by a diagonal left-regular magma Let 𝐌 be a left-regular magma, i.e. a set endowed with a binary product : 𝐌×𝐌→𝐌 such that the map L_m^=m- is bijective for any m∈𝐌. Equivalently, a left-regular magma is a set 𝐌 endowed with a left action of the free group F_𝐌 generated by 𝐌. This action can in turn be uniquely extended to a left action of F_𝐌 on itself by group automorphisms. The left-regular magma 𝐌 is called diagonal if moreover the map Λ: 𝐌→𝐌, m ↦Λ(m)= (L_m^)^ -1(m) is a bijection. Let 𝐌 be a diagonal left-regular magma. Denoting by a lower dot . the multiplication of the free group F_𝐌, there is a unique way to extend the binary product :𝐌×𝐌→𝐌 to F_𝐌 such that (F_𝐌,.,) is a post-group. The post-group F_𝐌 thus obtained verifies the universal property making it the free post-group generated by the magma 𝐌, namely for any post-group (G,.,) and for any magma morphism φ: 𝐌→ (G,) there is a unique post-group morphism Φ:F_𝐌→ G making the diagram below commute. 𝐌[dr]^φ[d] F_𝐌[r]_Φ G We have to construct a family of group automorphisms L_a^:F_𝐌→ F_𝐌 for any a∈ F_𝐌 such that (<ref>) is verified for any a,b,c∈ F_𝐌. In other words, L_a^∘ L_b^ = L^_a.(a b), which can also be written as L^_a.b = L_a^∘ L_a b^, where a - is a shorthand for the inverse group automorphism (L_a^)^-1. We construct the L_a^'s by induction on the length |a| of the canonical word representation of a, and prove (<ref>) by induction over |a|+|b|. Let us first define the maps L_a^ when a is of length zero or one. The maps L_a^ are given for a∈𝐌 by the magma structure, and (<ref>) easily implies L^_e=Id_F_𝐌. It remains to define L^_a^.-1 for a∈𝐌. From (<ref>) with b=a^.-1 we get Id_F_𝐌 =L^_a.a^.-1 =L_a^∘ L^_a a^.-1, which immediately yields L^_(a a)^.-1=(L_a^)^-1. From the diagonality hypothesis on the left-regular magma 𝐌, the map ψ: 𝐌→𝐌^.-1 ψ: a↦ (a a)^.-1 is a bijection. We can therefore define the group automorphisms L_a^.-1^ for any a∈𝐌 by L_a^.-1^:=(L^_ψ^-1(a^.-1))^-1. The map ψ defined in (<ref>) can be seen as an involution on 𝐌∪𝐌^.-1. Indeed, from (<ref>) the definition ψ:a'↦ (a' a')^.-1 also makes sense for a'∈𝐌^.-1, and we get for any a∈𝐌: ψ^2(a) = (ψ(a)ψ(a))^.-1 = ((a a)^.-1 (a a)^.-1)^.-1 = (a a)^.-1 (a a) 8mm (from the group automorphism property) = (a a^.-1) (a a) 8mm (from the group automorphism property) = ((L^_a a^.-1)^-1∘ (L_a^)^-1)(a) = (L_a^∘ L^_a a^.-1)^-1(a) = a 8mm (from (<ref>)). 
Any a'∈𝐌^.-1 is uniquely written as a'=ψ(a) with a ∈𝐌. We have ψ^2(a')=ψ^3(a)=ψ(a)=a', which terminate the proof of the statement, from which we easily get the mirror of (<ref>), namely Id_F_𝐌 =L^_a^.-1.a =L_a^.-1^∘ L^_a^.-1 a. Let us now define L_u^ for any u∈ F_𝐌. If u is of length n≥ 2, it can be uniquely written u=u'a with a∈𝐌∪𝐌^.-1 and u'∈ F_𝐌 of length n-1. From (<ref>) we have L^_u=L^_u'∘ L^_u' a. Iterating the process, we get the following expression for L_u^ in terms of the canonical representation u=a_1⋯ a_n in letters a_k∈𝐌∪𝐌^.-1: L_u^=L_u_1^∘ L^_u_1 a_2∘⋯∘ L^_u_n-1 a_n, where u_k is the prefix a_1⋯ a_k for any k=1,…,n-1. It remains to check the post-group property in full generality, namely L^_uv=L^_u∘ L^_u v for any u,v∈ℱ_𝐌. We proceed by induction on |u|+|v|. The cases with |u|=0 or |v|=0 are immediate. The case |u|=|v|=1 is covered by (<ref>) when u and v are not mutually inverse and by (<ref>) and (<ref>) when they are, which completely settles the case |u|+|v|≤ 2. Suppose that the post-group property is true for any u,v∈ F_𝐌 with |u|+|v|≤ n, and choose u,v with |u|+|v|=n+1. If |v|=0 there is nothing to prove, and if |v|=1 we are done by using (<ref>). Otherwise we have v=v'.a with |v'|=|v|-1 and a∈𝐌∪𝐌^.-1. We can now compute L^_u.v = L^_u.v'.a = L^_u.v'∘ L_(u.v') a 8mmby (<ref>) = L^_u∘ L^_u v'∘ L^_(L^_uv')^-1(a) 8mmby induction hypothesis = L^_u∘ L^_u v'∘ L^_(L^_u∘ L^_u v')^-1(a) 8mmby induction hypothesis = L^_u∘ L^_u v'∘ L^_(u v')(u a) = L^_u∘ L_(u v').(u a)^ 8mmby induction hypothesis = L^_u∘ L^_u(v'.a) 8mmby the group automorphism property of u - = L^_u∘ L^_u v. Finally, if G is another post-group, any set map φ:𝐌→ G, a fortiori any magma morphism, admits a unique group morphism extension Φ:F_𝐌→ G making the diagram (<ref>) commute. Checking that Φ is a morphism of post-groups is left to the reader. §.§ A group isomorphism Given a post-group (G,.,), the two groups (G,.) and (G,*) are in general not isomorphic: for example, in a typical pre-group, the first one is Abelian and the second one is not. We shall however see that the free post-group generated by left-regular diagonal magma provides an isomorphism between both associated groups. We keep the notations of the previous paragraph. There is a unique group isomorphism 𝒥:(F_𝐌,.)⟶ (F_𝐌,*) such that 𝒥𝐌=Id_𝐌. Existence and uniqueness of the group morphism 𝒥 is immediate from the definition of a free group. It remains to show that 𝒥 is an isomorphism. From (<ref>) we have 𝒥(a^.-1)=(a a)^.-1=ψ(a)^.-1 for any a∈𝐌. We remark that 𝒥 is therefore a bijection from 𝐌∪𝐌^.-1 onto itself. For any element u=a_1⋯ a_n∈ F_𝐌 where the letters a_k are in 𝐌∪𝐌^.-1, we have 𝒥(u)=a'_1*⋯ *a'_n =a'_1.(a'_1 a'_2).(a'_1(a'_2 a'_3))… (L^_a'_1∘⋯∘ L^_a'_n-1)(a'_n) with a'_k=𝒥(a_k). In order to prove that any word v:=b_1⋯ b_n can be written in the form 𝒥(u) with u=a_1⋯ a_n, we have to solve the triangular system b_1=a'_1, b_2=a'_1 a'_2,…, b_n= (L^_a'_1∘⋯∘ L^_a'_n-1)(a'_n). The letters a'_k are therefore recursively given by a'_1=b_1 and a'_n=(L^_a'_1∘⋯∘ L^_a'_n-1)^-1(b_n). The first ones are a'_1=b_1, a'_2=b_1 b_2, a'_3 =(b_1 b_2)(b_1 b_3), a'_4 =((b_1 b_2)(b_1 b_3))((b_1 b_2)(b_1 b_4)),…, and the letters a_k are then given by a_k=𝒥^-1(a'_k). This builds up an inverse 𝒦 for 𝒥:F_𝐌→ F_𝐌 and therefore finishes the proof of Proposition <ref>. 
§.§ An analogy with Gavrilov's K-map Recall <cit.> that the free Lie algebra ℒ_M (over some ground field 𝐤) generated by any magmatic 𝐤-algebra (M,) carries a unique post-Lie algebra structure such that the action :ℒ_M×ℒ_M→ℒ_M extends the magmatic product of M. The universal enveloping algebra 𝒰(ℒ_M) is the tensor algebra T(M). As the former is a Hopf algebra with respect to the cocommutative unshuffle coproduct Δ, the latter carries a cocommutative post-Hopf algebra structure. Gavrilov introduced the so-called K-map, which is crucial for building higher-order covariant derivatives on a smooth manifold endowed with an affine connection <cit.>. See also <cit.> for a more recent account extending Gavrilov's main results. It is a linear endomorphism of T(M) recursively defined as follows: K(x)=x for any x ∈ M and K(x_1⋯ x_k+1) = x_1 . K(x_2 ⋯ x_k+1) - K(x_1(x_2⋯ x_k+1)), for any elements x_1,…, x_k+1∈ M (note that for notational transparency we denote by a simple dot the concatenation product in T(M)). In particular, K maps x_1 . x_2 ∈ T(M) to K(x_1 . x_2) = x_1 . x_2 - x_1 x_2 and for x_1 . x_2. x_3 ∈ T(M) we obtain K(x_1 . x_2. x_3) =x_1. K(x_2. x_3)-K (x_1(x_2 . x_3)) = x_1. x_2. x_3 - x_1. (x_2 x_3) - (x_1 x_2). x_3 - x_2 . (x_1 x_3) + x_2 (x_1 x_3) + (x_1 x_2) x_3. In the second equality we used that x_1(x_2 . x_3) = (x_1 x_2) . x_3 + x_2 . (x_1 x_3). The map K is clearly invertible, as K(U)-U is a linear combination of terms of strictly smaller length than the length of the element U ∈ T(M). The inverse K^-1 admits a closed formula in terms of ordered set partitions <cit.>. Defining the Grossman–Larson product on the cocommutative post-Hopf algebra T(M) A * B = A_1 . (A_2 B), for A,B ∈ T(M), it follows from <cit.> and <cit.> that K is a Hopf algebra isomorphism from H=(T(M),*,1,Δ,S_*,ε) onto (T(M),.,1,Δ,S,ε), that is, for A,B ∈ T(M), we have K(A * B)=K(A).K(B). The map K admits a unique extension by continuity to the completion T(M) of T(M) with respect to the grading. The set G of group-like elements in T(M) is a post-group, and the restriction K G is a group isomorphism from (G,*) onto (G,.). The group isomorphism 𝒦=𝒥^-1:(F_𝐌,*)→ (F_𝐌,.) defined in the previous paragraph can therefore be seen as a group-theoretical version of Gavrilov's K-map. Recall J. H. C. Whitehead's definition of crossed group morphisms <cit.>. <cit.> Let (G,.) be a group. Suppose that the group Γ acts on G by automorphism. The action is denoted by : Γ× G → G. A crossed homomorphism ϕ: Γ→ G is a map such that for any h_1,h_2 ∈Γ ϕ(h_1 h_2)=ϕ(h_1) . (h_1 ϕ(h_2)). Let (G,.,) be a post-group with Grossman–Larson product *. The identity map 𝕀: (G,*) → (G,.) is a crossed homomorphism. The statement follows directly from the definition of the Grossman–Larson product (Definition <ref>). For the post-group (G,.) associated to the free post-Lie algebra generated by a magmatic algebra (M,), Gavrilov showed in <cit.> that K^-1 is a crossed homomorphism from the post-group (G,.,) into itself in the sense of Whitehead. Indeed, K is a group isomorphism between (G,*) and (G,.) <cit.>. Hence, K^-1 = 𝕀∘ K^-1 : (G,.) → (G,.) is a crossed morphism for the action g_1 ▸ g_2:= K^-1(g_1) g_2. of (G,.) on itself. We then have K^-1(g_1 . g_2) = K^-1(g_1) * K^-1(g_2) = K^-1(g_1) . ( K^-1(g_1) K^-1(g_2) ) = K^-1(g_1) . ( g_1 ▸ K^-1(g_2) ). We close the paper by pointing at an interesting observation regarding Gavrilov's K-map extended to T(M) and flow equations on the post-group G. 
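To make the recursion (<ref>) concrete, the following sketch (an illustration only, not Gavrilov's differential-geometric setting) evaluates the K-map on short tensors over a toy magmatic product on three generators; elements of T(M) are encoded as dictionaries mapping words to coefficients, the action of a generator on a word is extended as a derivation of the concatenation product, and the final assertion reproduces the six-term expansion of K(x_1.x_2.x_3) displayed above.

```python
# Toy magmatic product on three generators; T(M) elements are {word: coefficient} dicts.
GENS = ("a", "b", "c")
TRI = {("a", "a"): "b", ("a", "b"): "c", ("a", "c"): "a",
       ("b", "a"): "c", ("b", "b"): "a", ("b", "c"): "b",
       ("c", "a"): "a", ("c", "b"): "b", ("c", "c"): "c"}     # x |> y (arbitrary toy choice)

def add(p, q, s=1):                       # p + s*q in T(M)
    out = dict(p)
    for w, c in q.items():
        out[w] = out.get(w, 0) + s * c
    return {w: c for w, c in out.items() if c != 0}

def concat(x, p):                         # left concatenation x . p by a generator x
    return {(x,) + w: c for w, c in p.items()}

def tri_word(x, w):                       # x |> (w_1 ... w_k), extended as a derivation
    out = {}
    for i in range(len(w)):
        out = add(out, {w[:i] + (TRI[(x, w[i])],) + w[i + 1:]: 1})
    return out

def K(p):                                 # Gavrilov's K-map, extended linearly
    out = {}
    for w, c in p.items():
        if len(w) <= 1:
            out = add(out, {w: 1}, c)
        else:
            x, rest = w[0], w[1:]
            term = add(concat(x, K({rest: 1})), K(tri_word(x, rest)), -1)
            out = add(out, term, c)
    return out

# check the closed three-letter formula from the text
x1, x2, x3 = "a", "b", "c"
rhs = {(x1, x2, x3): 1}
rhs = add(rhs, {(x1, TRI[(x2, x3)]): 1}, -1)
rhs = add(rhs, {(TRI[(x1, x2)], x3): 1}, -1)
rhs = add(rhs, {(x2, TRI[(x1, x3)]): 1}, -1)
rhs = add(rhs, {(TRI[(x2, TRI[(x1, x3)])],): 1})
rhs = add(rhs, {(TRI[(TRI[(x1, x2)], x3)],): 1})
assert K({(x1, x2, x3): 1}) == rhs
```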
Consider the initial value problem d/dtexp^.(tx) = exp^.(tx) . x in T(M). Using (<ref>) and the fact that T(M) is a cocommutative post-Hopf algebra, the righthand side can be written exp^.(tx) . x = exp^.(tx) * ( S_*(exp^.(tx)) x), because exp^.(tx) is a group-like and S_* is the antipode of H. Introducing α(tx):=S_*(exp^.(tx)) x, we see that d/dtK(exp^.(tx))=K(exp^.(tx).x)=K(exp^.(tx)). α(tx), where we used that K(α(tx)) = α(tx) as α(tx)∈ M. The solution of (<ref>) can be expressed in terms of the right-sided Magnus expansion Ω(α(tx)) ∈Lie (M) as K(exp^.(tx)) = exp^.(Ω(α(tx))), which implies K(exp^.(tx)) =exp^.(K∘ K^-1(Ω(α(tx)))) =K(exp^* K^-1(Ω(α(tx)))) =K(exp^* (Ω_*(α(tx)))), where we used that K^-1[A,B] = K^-1(A) * K^-1(B) - K^-1(B) * K^-1(A) =[[K^-1(A),K^-1(B)]]. The last equality defines the Grossman–Larson Lie bracket and Ω_*(α(tx)) is the right-sided Magnus expansion defined in terms of the Grossman–Larson Lie bracket, recursively given by Ω_*(α(tx)) = ∫_0^t ∑_n ≥ 0 B_n/n!ad^*n_Ω_*(α(tx))(α(tx)), where for n > 0, we define ad^*n_a(b):=[[a, ad^*n-1_a(b)]] and ad^*0_a(b):=b. The B_n's are the modified Bernoulli numbers 1,1/2,1/6,0,-1/30,0,1/42,… Moreover, a simple computation gives d/dtα(tx) = S_*(d/dtexp^.(tx) ) x = S_*(exp^.(tx) * ( S_*(exp^.(tx)) x)) x = (S_*( S_*(exp^.(tx)) x ) * S_*(exp^.(tx)) ) x = ( -(S_*(exp^.(tx)) x) * S_*(exp^.(tx)) ) x = -(S_*(exp^.(tx)) x) ( S_*(exp^.(tx)) x ) = - α(tx) α(tx). Compare with Gavrilov <cit.> as well as <cit.>. A more detailed study of the last identity in the context of pre- and post-Lie groups is postponed to forthcoming work. abcdsfgh AG1981 A. Agrachev and R. Gamkrelidze, Chronological algebras and nonstationary vector fields, J. Sov. Math. 17, (1981), 1650–1675. AEM21 M. J. H. Al-Kaabi, K. Ebrahimi-Fard, and D. Manchon, Post-Lie Magnus expansion and BCH-recursion, SIGMA 18, (2022), 023, 16 pages. AEMM2022 M. J. H. Al-Kaabi, K. Ebrahimi-Fard, D. Manchon and H. Munthe-Kaas, Algebraic aspects of connections: from torsion, curvature, and post-Lie algebras to Gavrilov's double exponential and special polynomials, preprint, arXiv:2205.04381 (2022). BGST2023 C. Bai, L. Guo, Y. Sheng and R. Tang, Post-groups, (Lie-)Butcher groups and the Yang–Baxter equation, Math. Ann. https://doi.org/10.1007/s00208-023-02592-z10.1007/s00208-023-02592-z (2023) Burde2006 D. Burde, Left-symmetric algebras, or pre-Lie algebras in geometry and physics Cent. Eur. J. Math. 4, (2006), 323–357. C1881 A. Cayley. On the Analytical Forms Called Trees, Amer. J. Math. 4, No. 1-4, (1881), 266–268. C2002 F. Chapoton, Un théorème de Cartier–Milnor–Moore–Quillen pour les bigèbres dendriformes et les algèbres braces, J. Pure Appl Algebra 168 No. 1, (2002), 1–18. CEO20 C. Curry, K. Ebrahimi-Fard, and B. Owren, The Magnus Expansion and Post-Lie Algebras, Math. Comp. 89, (2020), 2785–2799. ELM2015 K. Ebrahimi-Fard, A. Lundervold, and H. Munthe-Kaas, On the Lie enveloping algebra of post-Lie algebra, J. Lie Theory 25, No. 4, (2015), 1139–1165. KMMK2017Rmatrix K. Ebrahimi-Fard, A. Lundervold, I. Mencattini, and H. Munthe-Kaas, Post-Lie Algebras and Isospectral Flows, SIGMA 11 (2015), 093, 16 pages. ESS99 P. Etingof, T. Schedler and A. Soloviev, Set-theoretical solutions to the quantum Yang–Baxter equation, Duke Math. J. 100, No. 2, (1999), 169–209. Foissy2018 L. Foissy, Extension of the Product of a Post-Lie Algebra and Application to the SISO Feedback Transformation Group, in Computation and Combinatorics in Dynamics, Stochastic and Control, Abel Symp. 
2016, Springer, Cham, 2018, 369–399. Gavrilov2006 A. V. Gavrilov, Algebraic properties of the covariant derivative and composition of exponential maps, Sib. Adv. Math. 16:3, (2006), 54–70. Gavrilov2007 A. V. Gavrilov, The double exponential map and covariant derivation, Sib. Math. J. 48:1, (2007), 56–61. Gavrilov2008 A. V. Gavrilov, Higher covariant derivatives, Sib. Math. J. 49, No. 6, (2008), 997–1007. Gavrilov2012 A. V. Gavrilov, The Leibniz formula for the covariant derivative and some of its applications, Siberian Adv. Math. 22 No. 2, (2012), 80–94. G1963 M. Gerstenhaber, The cohomology structure of an associative ring, Ann. Math. 78, (1963), 267–288. GL1989 R. Grossman and R. G. Larson, Hopf-algebraic structure of families of trees, J. Algebra 126, (1989), 184–210. GV2017 L. Guarnieri and L. Vendramin, Skew braces and the Yang–Baxter equation. Math. Comp. 86, (2017), 2519–2534. GO2008 D. Guin and J.-M. Oudom, On the Lie enveloping algebra of a pre-Lie algebra, J. of K-Theory 2 No. 1, (2008), 147–167. LST2022 Y. Li, Y. Sheng and R. Tang, Post-Hopf algebras, relative Rota–Baxter operators and solutions of the Yang–Baxter equation, preprint, arXiv:2203.12174 (2022). LYZ2000 J. Lu, M. Yan and Y. Zhu, On the set-theoretical Yang–Baxter equation, Duke Math. J. 104, (2000), 1–18. LEFMK2015 A. Lundervold, K. Ebrahimi-Fard and H. Munthe-Kaas, On the Lie enveloping algebra of a post-Lie algebra, J. Lie Theory 25, No. 4, (2015), 1139–1165. Manchon2011 D. Manchon, A short survey on pre-Lie algebras, E. Schrödinger Institute Lectures in Mathematical Physics, European Mathematical Society, A. Carey Ed., 2011. Manchon2018cointeraction D. Manchon, A Review on Comodule-Bialgebras, in Computation and Combinatorics in Dynamics, Stochastics and Control, Eds.: E. Celledoni, G. Di Nunno, K. Ebrahimi-Fard, and H. Munthe-Kaas), Abel Symp. 13, Springer, Cham, 2018, 579–597. MQS2020 I. Mencattini, A. Quesney and P. Silva, Post-symmetric braces and integration of post-Lie algebras, J. Algebra 556, (2020), 547–580. HA13 H. Munthe-Kaas and A. Lundervold, On post-Lie algebras, Lie–Butcher series and Moving Frames, Found. Comput. Math. 13, Issue 4, (2013), 583–613. MKW2008 H. Munthe-Kaas and W. Wright, On the Hopf algebraic structure of Lie group integrators, Found. Comput. Math. 8, (2008) 227–257. R2007 W. Rump, Braces, radical rings, and the quantum Yang–Baxter equation, J. Algebra 307, No. 1, (2007), 153–170. S2022 A. Smoktunowicz, Algebraic approach to Rump’s results on relations between braces and pre-Lie algebras, J. Alg. and its Appl. 21, No. 3, (2022), 2250054. T2001 M. Takeuchi, Survey on matched pairs of groups-an elementary approach to the ESS-LYZ theory, Banach Center Publ. 61, (2001), 305–331. BV2007 B. Vallette, Homology of generalized partition posets, J. Pure Appl. Algebra 208 No. 2, (2007), 699–725. V1963 E. B. Vinberg, The theory of homogeneous convex cones, Tr. Mosk. Mat. Obs. 12, (1963), 303–358. W1949 J. H. C. Whitehead, Combinatorial homotopy. II., Bull. Amer. Math. Soc. 55, (1949), 453–496.
http://arxiv.org/abs/2306.01424v1
20230602103037
Partial Counterfactual Identification of Continuous Outcomes with a Curvature Sensitivity Model
[ "Valentyn Melnychuk", "Dennis Frauen", "Stefan Feuerriegel" ]
stat.ML
[ "stat.ML", "cs.LG" ]
Radio Sources Segmentation and Classification with Deep Learning [ July 31, 2023 ================================================================ Counterfactual inference aims to answer retrospective “what if” questions and thus belongs to the most fine-grained type of inference in Pearl's causality ladder. Existing methods for counterfactual inference with continuous outcomes aim at point identification and thus make strong and unnatural assumptions about the underlying structural causal model. In this paper, we relax these assumptions and aim at partial counterfactual identification of continuous outcomes, i.e., when the counterfactual query resides in an ignorance interval with informative bounds. We prove that, in general, the ignorance interval of the counterfactual queries has non-informative bounds, already when functions of structural causal models are continuously differentiable. As a remedy, we propose a novel sensitivity model called . This allows us to obtain informative bounds by bounding the curvature of level sets of the functions. We further show that existing point counterfactual identification methods are special cases of our when the bound of the curvature is set to zero. We then propose an implementation of our in the form of a novel deep generative model, which we call . Our implementation employs (i) residual normalizing flows with (ii) variational augmentations. We empirically demonstrate the effectiveness of our . To the best of our knowledge, ours is the first partial identification model for Markovian structural causal models with continuous outcomes. § INTRODUCTION Counterfactual inference aims to answer retrospective “what if” questions. Examples are: Would a patient's recovery have been faster, had a doctor applied a different treatment? Would my salary be higher, had I studied at a different college? Counterfactual inference is widely used in data-driven decision-making, such as root cause analysis <cit.>, recommender systems <cit.>, responsibility attribution <cit.>, and personalized medicine <cit.>. Counterfactual inference is also relevant for various machine learning tasks such as safe policy search <cit.>, reinforcement learning <cit.>, algorithmic fairness <cit.>, and explainability <cit.>. Counterfactual queries are located at the top of Pearl's ladder of causation <cit.>, , at the third layer ℒ_3 of causation <cit.> (see Fig. <ref>, right). Counterfactual queries are challenging as they do reasoning in both the actual world and a hypothetical one where variables are set to different values than they have in reality. State-of-the-art methods for counterfactual inference typically aim at point identification. These works fall into two streams. (1) The first stream <cit.> makes no explicit assumptions besides assuming a structural causal model (SCM) with Markovianity (, independence of the latent noise) and thus gives estimates that can be invalid. However, additional assumptions are needed in counterfactual inference of layer ℒ_3 to provide identifiability guarantees <cit.>. (2) The second stream <cit.> provides such identifiability guarantees but makes strong assumptions that are unnatural or unrealistic. Formally, the work by  <cit.> describes bijective generation mechanisms (BGMs), where, in addition to the original Markovianity of SCMs, the functions in the underlying SCMs must be monotonous (strictly increasing or decreasing) with respect to the latent noise. 
The latter assumption effectively sets the dimensionality of the latent noise to the same dimensionality as the observed (endogenous) variables. However, this is highly unrealistic in real-world settings and is often in violation of domain knowledge. For example, cancer is caused by multiple latent sources of noise (e.g., genes, nutrition, lifestyle, hazardous exposures, and other environmental factors). In this paper, we depart from point identification for the sake of more general assumptions about both the functions in the SCMs and the latent noise. Instead, we aim at partial counterfactual identification of continuous outcomes. Rather than inferring a point estimation expression for the counterfactual query, we are interested in inferring a whole ignorance interval with informative bounds. Informative bounds mean that the ignorance interval is a strict subset of the support of the distribution. The ignorance interval thus contains all possible values of the counterfactual query for SCMs that are consistent with the assumptions and available data. Partial identification is still very useful for decision making, , when the ignorance interval for a treatment effect is fully below or above zero. We focus on a Markovian SCM with two observed variables, namely, a binary treatment and a continuous outcome. We consider a causal diagram as in Fig. <ref> (left). We then analyze the expected counterfactual outcome of [un]treated abbreviated by ECOU [ECOT]. This query is non-trivial in the sense that it can not be simplified to a ℒ_1/ℒ_2 layer, as it requires knowledge about the functions in SCM, but can still be inferred by means of a 3-step procedure of abduction-action-prediction <cit.>. ECOU [ECOT] can be seen as a continuous version of counterfactual probabilities <cit.> and allows to answer a retrospective question about the necessity of interventions: what would have been the expected counterfactual outcome for some treatment, considering knowledge about both the factual treatment and the factual outcome? In our paper, we leverage geometric measure theory and differential topology to prove that, in general, the ignorance interval of ECOU [ECOT] has non-informative bounds. We show theoretically that this happens immediately when we relax the assumptions of (i) that the latent noise and the outcome have the same dimensionality and (ii) that the functions in the SCMs are monotonous (and assume they are continuously differentiable). As a remedy, we propose a novel (), in which we bound the curvature of the level sets of the functions and thus yield informative bounds. We further show that we obtain the BGMs from <cit.> as a special case when setting the curvature to zero. Likewise, we yield non-informative bounds when setting it to infinity. Therefore, our provides a sufficient condition for the partial counterfactual identification of the continuous outcomes with informative bounds. We develop an instantiation of in the form of a novel deep generative model, which we call (). Our uses (i) residual normalizing flows with (ii) variational augmentations to perform the task of partial counterfactual inference. Specifically, our allows us to (1) fit the observational/interventional data, (2) perform abduction-action-prediction in a differentiable fashion, and (3) bound the curvature of the SCM functions, thus yielding informative bounds for the whole ignorance interval. Finally, we demonstrate its effectiveness across several numerical experiments. 
Overall, our main contributions are following:[Code is available at <https://anonymous.4open.science/r/AnonymousPartialCounterfactualIdent-8C84>.] * We prove that the expected counterfactual outcome of [un]treated has non-informative bounds in the class of continuously differentiable functions of SCMs. * We propose a novel () to obtain informative bounds. Our is the first sensitivity model for the partial counterfactual identification of continuous outcomes in Markovian SCMs. * We introduce a novel deep generative model called () to perform partial counterfactual inference under our . We further validate it numerically. § RELATED WORK r4.75cm 4.75cm! 4.75cm < g r a p h i c s > Flow chart of identifiability for a counterfactual query. We briefly summarize prior works on (1) point and (2) partial counterfactual identification below but emphasize that none of them can be straightforwardly extended to our setting. We provide an extended literature overview in Appendix <ref>. (1) Point counterfactual identification has been recently addressed through neural methods <cit.> but without identifiability results. To ensure identifiability, prior works usually make use of (i) symbolic identifiability methods or (ii) put restrictive assumptions on the model class, if (i) led to non-identifiability (see Fig. <ref>). (i) Symbolic (non-parametric) identifiability methods <cit.> aim to provide a symbolic probabilistic expression suitable for point identification if a counterfactual query can be expressed via lower-layer information only. Examples of the latter include the effect of treatment of the treated (ETT) <cit.> and path-specific effects <cit.>. However, these are not suited for partial counterfactual identification. Alternatively, identifiability can be achieved by (ii) making restrictive but unrealistic assumptions about the SCMs <cit.>. A notable example is the BGMs <cit.>, which assumes a Markovian SCM where the functions in the SCMs must be monotonous (strictly increasing or decreasing) with respect to the latent noise. The latter assumption effectively sets the dimensionality of the latent noise to the same dimensionality as the observed (endogenous) variables, yet this is unrealistic in medicine. As a remedy, we depart from point identification and instead aim at partial counterfactual identification, which allows us to relax the assumptions of BGMs.  <cit.> build sensitivity models for unobserved confounding in semi-Markovian SCMs, but still assume a restricted functional class of SCMs, namely, an additive noise model <cit.>. We, on the other hand, build a sensitivity model around the extended class of functions in the SCMs, which is non-trivial even in the Markovian SCMs. (2) Partial counterfactual identification has been studied previously, but only for discrete SCMs <cit.>. These works are based on either (i) the assumption of monotonicity <cit.> or (ii) the response function framework <cit.> and (generalization of the latter) canonical partitioning <cit.>. Yet, these works are all for discrete SCMs and do not generalize to the continuous setting. Likewise, it is also not possible to extend partial interventional identification of continuous outcomes <cit.> from the ℒ_2 layer of causation to ℒ_3, unless it explicitly assumes an underlying SCM. Research gap. To the best of our knowledge, we are the first to propose a sensitivity model for partial counterfactual identification of continuous outcomes in Markovian SCMs. 
§ PARTIAL COUNTERFACTUAL IDENTIFICATION OF CONTINUOUS OUTCOMES In the following, we derive one of our main results: the ignorance interval of the ECOU [ECOT] has non-informative bounds if we relax the assumptions that (i) both the outcome and the latent noise are of the same dimensionality and that (ii) the functions in the SCMs are monotonous. §.§ Preliminaries Notation. Capital letters A, Y, U, denote random variables and small letters a, y, u their realizations from corresponding domains 𝒜, 𝒴, 𝒰. Bold capital letters such as 𝐔 = {U_1, …, U_n} denote finite sets of random variables. Further, ℙ(Y) is an (observational) distribution of Y; ℙ(Y | a) = ℙ(Y | A = a) is a conditional (observational) distribution; ℙ(Y_A=a) = ℙ(Y_a) an interventional distribution; and ℙ(Y_A=a| A' = a', Y' = y') = ℙ(Y_a | a', y') a counterfactual distribution. We use a superscript such as in ℙ^ℳ to indicate distributions that are induced by the SCM ℳ. We denote the conditional cumulative distribution function (CDF) of ℙ(Y | a) by 𝔽_a(y). We use ℙ(Y = ·) to denote a density or probability mass function of Y and 𝔼(Y) = ∫ y ℙ(y) to refer to its expected value. Interventional and counterfactual densities or probability mass functions are defined accordingly. A function is said to be in class C^k, k ≥ 0, if its k-th derivative exists and is continuous. Let ·_2 denote the L_2-norm, ∇_x f(x) a gradient of f(x), and Hess_x f(x) a Hessian matrix of f(x). The pushforward distribution or pushforward measure is defined as a transfer of a (probability) measure ℙ with a measurable function f, which we denote as f_♯ℙ. SCMs. We follow the standard notation of SCMs as in <cit.>. An SCM ℳ is defined as a tuple ⟨𝐔, 𝐕, ℙ(𝐔), ℱ⟩ with latent (exogenous) noise variables 𝐔, observed (endogenous) variables 𝐕 = {V_1, …, V_n}, a distribution of latent noise variables ℙ(𝐔), and a collection of functions ℱ = {f_V_1, …, f_V_n}. Each function is a measurable map from the corresponding domains of 𝐔_V_i∪𝐏𝐚_V_i to V_i, where 𝐔_V_i⊆𝐔 and 𝐏𝐚_V_i⊆𝐕∖ V_i are parents of the observed variable V_i. Therefore, the functions ℱ induce a pushforward distribution of observed variables, i.e., ℙ(𝐕) = ℱ_♯ℙ(𝐔). Each V_i is thus deterministic (non-random), conditionally on its parents, , v_i f_V_i(𝐩𝐚_V_i, 𝐮_V_i). Each SCM ℳ induces an (augmented) causal diagram 𝒢(ℳ), which is assumed to be acyclic. We provide a background on geometric measure theory and differential geometry in Appendix <ref>. §.§ Counterfactual Non-Identifiability In the following, we relax the main assumption of bijective generation mechanisms (BGMs) <cit.>, , that all the functions in Markovian SCMs are monotonous (strictly increasing) with respect to the latent noise variable. To this end, we let the latent noise variables have arbitrary dimensionality and further consider functions to be of class C^k. We then show that counterfactual distributions under this relaxation are non-identifiable from ℒ_1 or ℒ_2 data. Let 𝔅(C^k, d) denote the class of SCMs ℳ = ⟨𝐔, 𝐕, ℙ(𝐔), ℱ⟩ with the following endogenous and latent noise variables: 𝐔 = {U_A ∈{0, 1}, U_Y∈ [0, 1]^d} and 𝐕 = {A ∈{0, 1}, Y ∈ℝ}, for d ≥ 1. The latent noise variables have the following distributions ℙ(𝐔): U_A ∼Bern(p_A), 0 < p_A < 1, and U_Y ∼Unif(0, 1)^d and are all mutually independent [Uniformity of the latent noise does not restrict the definition, see the discussion in Appendix <ref>.]. The functions are ℱ = {f_A(U_A), f_Y(A, U_Y) } and f_Y(a, ·) ∈ C^k for u_Y ∈ (0, 1)^d ∀ a ∈{0, 1}. 
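Before turning to concrete examples, the following minimal sketch (an illustration only) shows how observational data from an SCM of this class is generated: a Bernoulli treatment, a two-dimensional uniform latent noise, and a pushforward of the noise through f_Y(a, ·). The mechanism used here is the SCM ℳ_1 of the example that follows, whose conditionals ℙ(Y | a) are symmetric triangular distributions.

```python
# Sampling observational data from the SCM M_1 of class B(C^0, 2) (see the example below).
import numpy as np

rng = np.random.default_rng(0)
p_A, n = 0.5, 100_000

def f_Y(a, u1, u2):
    return a * (u1 - u2 + 1.0) + (1.0 - a) * (u1 + u2 - 1.0)

A = rng.binomial(1, p_A, size=n)                       # A = U_A ~ Bern(p_A)
U1, U2 = rng.uniform(size=n), rng.uniform(size=n)      # U_Y ~ Unif(0, 1)^2
Y = f_Y(A, U1, U2)                                     # P(Y | a) = f_Y(a, .)_# P(U_Y)

# both conditionals are symmetric triangular, with supports [-1, 1] and [0, 2]
assert abs(Y[A == 0].mean()) < 0.02 and abs(Y[A == 1].mean() - 1.0) < 0.02
```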
All SCMs in 𝔅(C^k, d) induce similar causal diagrams 𝒢(ℳ), where only the number of latent noise variables d differs. Further, it follows from the above definition that BGMs are a special case of 𝔅(C^1, 1), where the functions f_Y(a, ·) are monotonous (strictly increasing). Task: counterfactual inference. Counterfactual queries are at layer ℒ_3 of the causality ladder and are defined as probabilistic expressions with random variables belonging to several worlds, which are in logical contradiction with each other <cit.>. For instance, ℙ^ℳ(Y_a = y, Y_a' = y), a ≠ a' for an SCM ℳ from 𝔅(C^k, d). In this paper, we focus on the expected counterfactual outcome of [un]treated ECOU [ECOT], which we denote by Q^ℳ_a' → a(y') = 𝔼^ℳ(Y_a | a', y'). For an SCM ℳ from 𝔅(C^k, d), ECOU [ECOT] can be inferred by the following three steps. (1) The abduction step infers a posterior distribution of the latent noise variables, conditioned on the evidence, i.e., ℙ(U_Y | a', y'). This posterior distribution is defined on the level set of the factual function given by {u_Y: f_Y(a', u_Y) = y'}, , all points in the latent noise space mapped to y'. (2) The action step alters the function for Y to f_Y(a, U_Y). (3) The prediction step is done by a pushforward of the posterior distribution with the altered function, i.e. f_Y(a, ·)_♯ℙ(U_Y | a', y'). Afterwards, the expectation of it is then evaluated. The existence of counterfactual queries, which are non-identifiable with ℒ_1 or ℒ_2 data in Markovian SCMs, was previously shown in <cit.> (Ex. 8, App. D3) for discrete outcomes and in <cit.> (Ex. D.7) for continuous outcomes. Here, we construct an important example to (1) show non-identifiability of counterfactual queries from ℒ_1 or ℒ_2 data under our relaxation from Def. <ref>, and to consequently (2) give some intuition on informativity of the bounds of the ignorance interval, which we will formalize later. [Counterfactual non-identifiability in Markovian SCMs] Let ℳ_1 and ℳ_2 be two Markovian SCMs from 𝔅(C^0, 2) with the following functions for Y: ℳ_1: f_Y (A, U_Y^1, U_Y^2) = A (U_Y^1 - U_Y^2 + 1) + (1 - A) (U_Y^1 + U_Y^2 - 1) } , ℳ_2: f_Y (A, U_Y^1, U_Y^2) = U_Y^1 + U_Y^2 - 1, A = 0, U_Y^1 - U_Y^2 + 1, A = 1 ∧ (0 ≤ U_Y^1≤ 1) ∧ (U_Y^1≤ U_Y^2≤ 1), F^-1(0, U_Y^1, U_Y^2), otherwise, where F^-1(0, U_Y^1, U_Y^2) is the solution in Y of the implicitly defined function F(Y, U_Y^1, U_Y^2) = U_Y^1-U_Y^2-2 (Y-1) -U_Y^1-U_Y^2+1-1+√((Y-2)^2 (8 (Y-1)^2+1)) = 0. It turns out that the SCMs ℳ_1 and ℳ_2 are observationally and interventionally equivalent (relative to the outcome Y). That is, they induce the same set of ℒ_1 and ℒ_2 queries. For both SCMs, it can be easily inferred that a pushforward of uniform latent noise variables U_Y^1 and U_Y^2 with f_Y (A, U_Y^1, U_Y^2) is a symmetric triangular distribution with support 𝒴_0 = [-1,1] for A = 0 and 𝒴_1 = [0, 2] for A = 1, respectively. We plot the level sets of f_Y (A, U_Y^1, U_Y^2) for both SCMs in Fig. <ref> (left). The pushforward of the latent noise variables preserves the transported mass; , note the equality of (1) the area of each colored band between the two level sets in the latent noise space and (2) the area of the corresponding band under the density graph of Y Fig. <ref> (left). Despite the equivalence of ℒ_1 and ℒ_2, the SCMs differ in their counterfactuals; see Fig. <ref> (right). For example, the counterfactual outcome distribution of untreated, ℙ^ℳ(Y_1 = y| A'=0, Y'=0), has different densities for both SCMs ℳ_1 and ℳ_2. 
Further, the ECOU, Q^ℳ_0 → 1(0) = 𝔼^ℳ(Y_1 | A'=0, Y'=0), is different for both SCMs, , Q_0 → 1^ℳ_1(0) = 1 and Q_0 → 1^ℳ_2≈ 1.114. Further details for the example are in Appendix <ref>. The example provides an intuition that motivates how we generate informative bounds later. By “bending” the bundle of counterfactual level sets (in blue in Fig. <ref> left) around the factual level set (in orange in Fig. <ref>, right), we can transform more and more mass to the bound of the support. We later extend this idea to the ignorance interval of the ECOU. Importantly, after “bending” the bundle of level sets, we still must make sure that the original observational/interventional distribution is preserved. §.§ Partial Counterfactual Identification and Non-Informative Bounds We now formulate the task of partial counterfactual identification. To do so, we first present two lemmas that show how we can infer the densities of observational and counterfactual distributions from both the latent noise distributions and C^1 functions in SCMs of class 𝔅(C^1, d). [Observational distribution as a pushforward with f_Y]lemmaobspush Let ℳ∈𝔅(C^1, d). Then, the density of the observational distribution, induced by ℳ, is ℙ^ℳ(Y = y | a) = ∫_E(y,a)1/∇_u_Y f_Y(a, u_Y)_2ℋ^d-1(u_Y), where E(y,a) is a level set (preimage) of y, , E(y,a) = {u_Y ∈ [0, 1]^d: f_Y(a, u_Y) = y}, and ℋ^d-1(u_Y) is the Hausdorff measure (see Appendix <ref> for the definition). We provide an example in Appendix <ref> where we show the application of Lemma <ref> (therein, we derive the standard normal distribution as a pushforward using the Box-Müller transformation). Lemma <ref> is a generalization of the well-known change of variables formula. This is easy to see, when we set d = 1, so that ℙ^ℳ(Y = y | a) = ∑_u_Y ∈ E(y,a)∇_u_Y f_Y(a, u_Y)^-1. Furthermore, the function f_Y(a, u_Y) can be restored (up to a sign) from the observational distribution, if it is monotonous in u_Y, such as in BGMs <cit.>. In this case, the function coincides (up to a sign) with the inverse CDF of the observed distribution, , f_Y(a, u_Y) = 𝔽^-1_a(± u_Y ∓ 0.5 + 0.5 ) (see Corollary <ref> in Appendix <ref>). lemmacounterpush Let ℳ∈𝔅(C^1, d). Then, the density of the counterfactual outcome distribution of the [un]treated is ℙ^ℳ(Y_a = y | a', y') = 1/ℙ^ℳ(Y = y' | a')∫_E(y',a')δ(f_Y(a, u_Y) - y)/∇_u_Y f_Y(a', u_Y)_2ℋ^d-1(u_Y), where δ(·) is a Dirac delta function, and the expected counterfactual outcome of the [un]treated, i.e., ECOU [ECOT], is Q_a' → a^ℳ(y') = 𝔼^ℳ(Y_a | a', y') = 1/ℙ^ℳ(Y = y' | a')∫_E(y',a')f_Y(a, u_Y)/∇_u_Y f_Y(a', u_Y)_2ℋ^d-1(u_Y), where E(y',a') is a (factual) level set of y', , E(y',a') = {u_Y ∈ [0, 1]^d: f_Y(a', u_Y) = y'} and a' ≠ a. r4cm 4cm! 4cm < g r a p h i c s > -0.3cm “Bending” the bundle of counterfactual level sets {E(y, a): y ∈ [1, 2]} in blue around the factual level set E(y', a') in orange. Equations (<ref>) and (<ref>) implicitly combine all three steps of the abduction-action-prediction procedure: (1) abduction infers the level sets E(y',a') with the corresponding Hausdorff measure ℋ^d-1(u_Y); (2) action uses the counterfactual function f_Y(a, u_Y); and (3) prediction evaluates the overall integral. In the specific case of d=1 and a monotonous function f_Y(a, u_Y) with respect to u_Y, we obtain two deterministic counterfactuals, which are identifiable from observational distribution, , Q_a' → a^ℳ(y') = 𝔽^-1_a(±𝔽_a'(y') ∓ 0.5 + 0.5 ). For details, see Corollary <ref> in Appendix <ref>. 
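The identification formula for BGMs above amounts to a quantile-quantile map between the two observational conditionals, which can be estimated directly from samples. The following short sketch (an illustration with empirical CDFs and quantiles, not the estimator used in our experiments) evaluates it on the triangular conditionals of Example <ref>: both the strictly increasing and the strictly decreasing mechanism return Q_0 → 1(0) ≈ 1, which coincides with Q_0 → 1^ℳ_1(0) = 1 but misses Q_0 → 1^ℳ_2(0) ≈ 1.114.

```python
# BGM point identification of ECOU: Q_{a'->a}(y') = F_a^{-1}( F_{a'}(y') )  (increasing f_Y)
#                               or  Q_{a'->a}(y') = F_a^{-1}(1 - F_{a'}(y')) (decreasing f_Y).
import numpy as np

def ecou_bgm(y_prime, y_factual, y_counterfactual, increasing=True):
    q = (np.sum(y_factual <= y_prime) + 0.5) / (len(y_factual) + 1.0)   # empirical F_{a'}(y')
    if not increasing:
        q = 1.0 - q
    return np.quantile(y_counterfactual, q)                             # empirical F_a^{-1}(q)

rng = np.random.default_rng(1)
y0 = rng.triangular(-1.0, 0.0, 1.0, size=50_000)   # samples from P(Y | A = 0)
y1 = rng.triangular(0.0, 1.0, 2.0, size=50_000)    # samples from P(Y | A = 1)
print(ecou_bgm(0.0, y0, y1), ecou_bgm(0.0, y0, y1, increasing=False))   # both ~ 1.0
```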
For larger d, as already shown in Example <ref>, both the density and ECOU [ECOT] can take arbitrary values for the same observational (or interventional) distribution. Given the continuous observational distribution ℙ(Y | a) for some SCM of class 𝔅(C^k, d). Then, partial counterfactual identification aims to find bounds of the ignorance interval [Q_a' → a(y'), Q_a' → a(y')] given by Q_a' → a(y') = inf_ℳ∈𝔅(C^k, d) Q_a' → a^ℳ(y') s.t. ∀ a ∈{0, 1}: ℙ(Y | a) = ℙ^ℳ(Y | a), Q_a' → a(y') = sup_ℳ∈𝔅(C^k, d) Q_a' → a^ℳ(y') s.t. ∀ a ∈{0, 1}: ℙ(Y | a) = ℙ^ℳ(Y | a), where ℙ^ℳ(Y | a) is given by Eq. (<ref>) and Q_a' → a^ℳ(y') by Eq. (<ref>). Hence, the task of partial counterfactual identification in class 𝔅(C^k, d) can be reduced to a constrained variational problem, namely, a constrained optimization of ECOU [ECOT] with respect to f_Y(a, ·) ∈ C^k. Using Lemma <ref>, we see that, , ECOU [ECOT] can be made arbitrarily large (or small) for the same observational distribution in two ways. First, this can be done by changing the factual function, f_Y(a', ·), , if we increase the proportion of the volume of the factual level set E(y', a') that intersects only a certain bundle of counterfactual level sets. Second, this can be done by modifying the counterfactual function, f_Y(a, ·), by “bending” the bundle of counterfactual level sets around the factual level set. The latter is schematically shown in Fig. <ref>. We formalize this important observation in the following theorem. [Non-informative bounds of ECOU (ECOT)]thrmnonid Let the continuous observational distribution ℙ(Y | a) be induced by some SCM of class 𝔅(C^∞, d). Let ℙ(Y | a) have a compact support 𝒴_a = [l_a, u_a] and be of finite density ℙ(Y = y| a) < +∞. Then, the ignorance interval for the partial identification of the ECOU [ECOT] of class 𝔅(C^∞, d), d ≥ 2, has non-informative bounds: Q_a' → a(y') = l_a and Q_a' → a(y') = u_a. Theorem <ref> implies that, no matter how smooth the class of functions is, the partial identification of ECOU [ECOT] will have non-informative bounds. Hence, with the current set of assumptions, there is no utility in considering more general classes. This includes various functions, such as C^0 and the class of all measurable functions f_Y(a, ·): ℝ^d →ℝ, as the latter includes C^∞ functions. In the following, we introduce a sensitivity model which nevertheless allows us to obtain informative bounds. § CURVATURE SENSITIVITY MODEL In the following, we develop our () to restrict the class 𝔅(C^k, d) so that the ECOU [ECOT] obtains informative bounds. Our uses the intuition from the proof of Theorem <ref> in that, to construct the non-informative SCM ℳ_non-inf, we have to “bend” the bundle of the counterfactual level sets. As a result, our provides sufficient conditions for informative bounds in the class 𝔅(C^2, d) by bounding the principal curvatures of level sets globally. κ Let ℳ be of class 𝔅(C^2, d), d ≥ 2. Let E(y,a) be the level sets of functions f_Y(a, u_Y) for a ∈{0, 1}, which are thus d-1-dimensional smooth manifolds. Let us assume that principal curvatures i ∈{1, …, d-1} exist at every point u_Y ∈ E(y,a) for every a ∈{0, 1} and y ∈ (l_a, u_a) ⊂𝒴_a, and let us denote them as κ_i(u_Y). Then, we assume that κ≥ 0 is the upper bound of the maximal absolute principal curvature for every y, a, and u_Y ∈ E(y,a): κ = max_a ∈{0, 1}, y ∈ (l_a, u_a), u_Y ∈ E(y,a) max_i ∈{1, …, d-1}κ_i(u_Y). 
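In practice, the principal curvatures entering this bound can be evaluated by automatic differentiation of f_Y(a, ·). A minimal sketch for d = 2 (with a toy mechanism, purely for illustration) is given below; it uses the standard closed form for the curvature of level curves in ℝ^2, which agrees with the divergence expression for κ_1 stated further below.

```python
# Single principal curvature of a level set of a toy mechanism f: (0,1)^2 -> R,
# kappa_1(u) = -1/2 * div( grad f / ||grad f|| ), via gradient and Hessian.
import torch

def f(u):                                   # toy C^2 mechanism (an assumption)
    return torch.sin(3.0 * u[0]) * u[1] + u[0] ** 2

def kappa_1(f, u):
    g = torch.autograd.functional.jacobian(f, u)       # gradient (f_1, f_2)
    H = torch.autograd.functional.hessian(f, u)        # Hessian
    div_unit_normal = (H[0, 0] * g[1] ** 2 - 2.0 * g[0] * g[1] * H[0, 1]
                       + H[1, 1] * g[0] ** 2) / g.norm() ** 3
    return -0.5 * div_unit_normal

print(kappa_1(f, torch.tensor([0.3, 0.7])))
```

Penalising this quantity along the level set that corresponds to the evaluated ECOU [ECOT] is how the curvature constraint is later enforced in our implementation.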
Principal curvatures can be thought of as a measure of the non-linearity of the level sets, and, when they are all close to zero at some point, the level set manifold can be locally approximated by a flat hyperplane. In brevity, principal curvatures can be defined via the first- and second-order partial derivatives of f_Y(a, ·), so that they describe the degrees of curvature of a manifold in different directions. We refer to Appendix <ref> for a formal definition of the principal curvatures κ_i. An example is in Appendix <ref>. Now, we state the main result of our paper that our allows us to obtain informative bounds for ECOU [ECOT]. [Informative bounds with our ]thrmcsm Let the continuous observational distribution ℙ(Y | a) be induced by some SCM of class 𝔅(C^2, d), d ≥ 2, which satisfies Assumption <ref>. Let ℙ(Y | a) have a compact support 𝒴_a = [l_a, u_a]. Then, the ignorance interval for the partial identification of ECOU [ECOT] of class 𝔅(C^2, d) has informative bounds, dependent on κ and d, which are given by Q_a' → a(y') = l(κ, d) > l_a and Q_a' → a(y') = u(κ, d) < u_a. Theorem <ref> has several important implications. (1) Our is applicable to a wide class of functions 𝔅(C^2, d), for which the principal curvature is well defined. (2) We show the relationships between different classes 𝔅(C^k, d) and our (κ) in terms of identifiability in Fig. <ref>. For example, by increasing κ, we cover a larger class of functions. For infinite curvature, our almost coincides with the entire 𝔅(C^2, d). (3) Our (κ) with κ≥ 0 always contains both (i) identifiable SCMs and (ii) (informative) non-identifiable SCMs. Examples of (i) include SCMs for which the bundles of the level sets coincide for both treatments, , {E(y, a): y ∈𝒴_a} = {E(y, a'): y ∈𝒴_a'}. In this case, it is always possible to find an equivalent BGM when the level sets are flat and thus κ = 0 (see ℳ_flat from Example <ref> in the Appendix <ref>) or curved and thus κ = 1 (see ℳ_curv from Example <ref> in the Appendix <ref>). For (ii), we see that, even when we set κ=0, we can obtain non-identifiability with informative bounds. An example is when we align the bundle of level sets perpendicularly to each other for both treatments, as in ℳ_perp from Example <ref> in the Appendix <ref>. r5cm 5cm! 5cm < g r a p h i c s > < g r a p h i c s > Results for partial counterfactual identification of the ECOU across two datasets. Reported: theoretical bounds of BGMs and mean bounds of over five runs. We make another observation regarding the choice of the latent noise dimensionality d. In practice, it is sufficient to choose d=2 (in the absence of further assumptions on the latent noise space). This choice is practical as we only have to enforce a single principal curvature, which reduces the computational burden, i.e., κ_1(u_Y) = - 1/2∇_u_Y( ∇_u_Y f_Y(a, u_Y)/∇_u_Y f_Y(a, u_Y)_2), u_Y ∈ E(y,a). Importantly, we do not lose generality with d = 2, as we still cover the entire identifiability spectrum by varying κ (see Corollary <ref> in Appendix <ref>). We discuss potential extensions of our in Appendix <ref>. § AUGMENTED PSEUDO-INVERTIBLE DECODER We now introduce an instantiation of our : a novel deep generative model called () to perform partial counterfactual identification under our of class 𝔅(C^2, 2). Architecture: The two main components of our are (1) residual normalizing flows with (2) variational augmentations (see Fig. <ref>). 
The first component are two-dimensional normalizing flows <cit.> to estimate the function f_Y(a, u_Y), u_Y ∈ [0, 1]^2, separately for each treatment a ∈{0, 1}. Specifically, we use residual normalizing flows <cit.> due to their ability to model free-form Jacobians (see the extended discussion about the choice of the normalizing flow in Appendix <ref>). However, two-dimensional normalizing flows can only model invertible transformations, while the function f̂_Y(a, ·): [0, 1]^2 →𝒴_a ⊂ℝ is non-invertible. To address this, we employ an approach of pseudo-invertible flows <cit.>, namely, the variational augmentations <cit.>, as described in the following. The second component in our are variational augmentations <cit.>. Here, we augment the estimated outcome variable Ŷ with the variationally sampled Ŷ_aug∼ N(g^a(Ŷ), ε^2), where g^a(·) is a fully-connected neural network, and ε^2 > 0 is a hyperparameter. Using the variational augmentations, our then models f̂_Y(a, ·) through a two-dimensional transformation F̂_a = (f̂_Y_aug(a, ·), f̂_Y(a, ·)): [0, 1]^2 →ℝ×𝒴_a. We refer to the Appendix <ref> for further details. Inference: Our proceeds in first steps: (P1) it first fits the observational/interventional distribution ℙ(Y | a), given the observed samples; (P2) it then performs counterfactual inference of ECOU [ECOT] in a differentiable fashion; and (P3) it finally penalize functions with large principal curvatures of the level sets, by using automatic differentiation of the estimated functions of the SCMs. We achieve (P1)–(P3) through the help of variational augmentations: ∙ For (P1) we fit the two-dimensional normalizing flow with a negative log-likelihood. ∙ For (P2), we sample points from the level sets of f̂_Y(a, ·). The latter is crucial as it follows an abduction-action-prediction procedure and thus generates estimates of the ECOU [ECOT] in a differential fashion. To evaluate ECOU [ECOT], we first perform the abduction step with the inverse transformation of the factual normalizing flow, i.e., F̂^-1_a'. This transformation maps the variationally augmented evidence, , (y', {Y'_aug}_j=1^b ∼ N(g^a(y'), ε^2)), where b is the number of augmentations, to the latent noise space. Then, the action step selects the counterfactual normalizing flow via the transformation F̂_a. Finally, the prediction step performs a pushforward of the latent noise space, which was inferred during the abduction step. Technical details are in Appendix <ref>. ∙ In (P3), we enforce the curvature constraint of our . Here, we use automatic differentiation as provided by deep learning libraries, which allows us directly evaluate κ_1(u_Y) according to Eq. (<ref>). Training: Our is trained based on observational data 𝒟 = {A_i, Y_i}_i=1^n drawn i.i.d. from some SCM of class 𝔅(C^2, 2). To fit our , we combine several losses in one optimization objective: (1) a negative log-likelihood loss ℒ_NLL with noise regularization <cit.>, which aims to fit the data distribution; (2) a counterfactual query loss ℒ_Q with a coefficient λ_Q, which aims to maximize/minimize the ECOU [ECOT]; and (3) a curvature loss ℒ_κ with coefficient λ_κ, which penalizes the curvature of the level sets. Both coefficients are hyperparameters. We provide details about the losses, the training algorithm, and hyperparameters in Appendix <ref>. Importantly, we incorporated several improvements to stabilize and speed up the training. For example, in addition to the negative log-likelihood, we added the Wasserstein loss ℒ_𝕎 to prevent posterior collapse <cit.>. 
To speed up the training, we enforce the curvature constraint only on the counterfactual function, f̂_Y(a, ·), and only for the level set, which corresponds to the evaluated ECOU [ECOT], u_Y ∈ E(Q̂_a' → a(y'), a). § EXPERIMENTS Datasets. To show the effectiveness of our at partial counterfactual identification, we use two synthetic datasets. This enables us to access the ground truth CDFs and quantile functions. Then, with the help of Corollary <ref>, we can then compare our with the BGM (= a special case of with κ = 0). Both synthetic datasets comprise samples from observational distributions ℙ(Y | a), which we assume to be induced by some (unknown) SCM of class 𝔅(C^2, 2). In the first dataset, ℙ(Y | 0) = ℙ(Y | 1) is the standard normal distribution, and in second, ℙ(Y | 0) and ℙ(Y | 1) are different mixtures of normal distributions. We draw n_a = 1,000 observations from ℙ(Y | a) for each treatment a ∈{0, 1}, so that n = n_0 + n_1 = 2,000. Although both distributions have infinite support, we consider the finite sample minimum and maximum as estimates of the support bounds [l̂_1, û_1]. Further details on our synthetic datasets are in Appendix <ref>. In sum, the estimated bounds from our are consistent with the theoretical values, thus showing the effectiveness of our method. Results. Fig. <ref> shows the results of point/partial counterfactual identification of the ECOU, i.e., Q̂_0 → 1 (y'), for different values of y'. Point identification with BGMs yields two curves, corresponding to strictly increasing and strictly decreasing functions. For partial identification with our , we set λ_Q = 2.0 and vary λ_κ∈{0.5, 1.0, 5.0, 10.0} (higher values correspond to a higher curvature penalization). We observe that, as we increase λ_κ, the bounds are moving closer to the BGM, and, as we decrease it, the bound are becoming non-informative, namely, getting closer to [l̂_1, û_1]. We report additional results in Appendix <ref> (, for with λ_Q = 1.0). Conclusion. Our work is the first to present a sensitivity model for partial counterfactual identification of continuous outcomes in Markovian SCMs. Our work rests on the assumption of the bounded curvature of the level sets, yet which should be sufficiently broad and realistic to cover many models from physics and medicine. As a broader impact, we expect our bounds to be highly relevant for decision-making in safety-critical settings. § EXTENDED RELATED WORK §.§ Counterfactual inference In the following, we explain why existing work does not straightforwardly generalize to the partial counterfactual identification of continuous outcomes. In particular, we do the following: * We pinpoint that symbolic (non-parametric) counterfactual identifiability methods only provide the probabilistic expression suitable for point identification, if a certain query can be expressed via lower layer information. * We note that existing point identification methods do not have any guidance towards partial identification, and often do not even provide identifiability guarantees. In this case, valid point identification can only be achieved via the functional class restrictions in SCMs. * We discuss why methods for partial identification of both (i) discrete counterfactual queries and (ii) continuous interventional queries can not be extended to our setting. Ultimately, we summarize related works on both point and partial identification for two layers of causality, namely interventional and counterfactual, in Table <ref>. 1. 
Symbolic identifiability and point identification with probabilistic expressions. Complete and sound algorithms were proposed for symbolic (non-parametric) identifiability of the counterfactuals based on observational (ℒ_1) or interventional (ℒ_2) information, and causal diagram of a semi-Markovian SCM. For example, <cit.> adapted d-separation for the parallel-worlds networks <cit.> for simple counterfactual queries, and <cit.> extended it to nested counterfactuals with counterfactual unnesting theorem. These methods aim to provide a probabilistic expression, in the case the query is identifiable, which is then used in downstream point identification methods. Rather rare examples of identifiable queries include the treatment effect of the treated (ETT) <cit.> and path-specific effects <cit.> for certain SCMs, , when the treatment is binary. If the query is non-identifiable, like in the case with ECOU [ECOT] as in our paper, no guidance is provided on how to perform partial identification and methods can not be used. 2. Point counterfactual identification with functional class restrictions. Another way to perform a point identification when the counterfactual query is symbolically non-identifiable is to restrict a functional class in SCMs. Examples of restricted Markovian SCMs include nonlinear additive noise models (ANMs) <cit.>, post nonlinear models (PNL) <cit.>, location-scale noise models (LSNMs) <cit.>, and bijective generation mechanisms (BGMs) <cit.>. The latest, BGMs, also work in semi-Markovian settings with additional assumptions. Numerous deep generative models were also proposed for point identification in both Markovian and semi-Markovian settings. Examples are normalizing flows <cit.>; diffusion models <cit.>; variational inference <cit.>; adversarial learning <cit.>, and neural expectation-maximization <cit.>. As it was shown in <cit.>, all the aforementioned deep generative models need to assume the BGM of the data to yield valid counterfactual inference.[In semi-Markovian SCM additional assumptions are needed about the shared latent noise <cit.>.] Alternative definitions of the counterfactuals exist, , transport-based counterfactuals <cit.>, but they were only proven to coincide with standard SCM-based counterfactuals when the latent noise can be deterministically defined with observed data. Therefore, they coincide with BGMs. The assumption of monotonicity of BGMs effectively sets the dimensionality of the latent noise to the same of the observed variables, which is rarely a realistic assumption. In our paper, we relax this assumption, and let the latent noise have arbitrary dimensionality. 3(i). Discrete partial counterfactual identification. Partial counterfactual identification was rigorously studied only for discrete SCMs or for discrete outcomes. For example, bounds under no assumptions or under the monotonicity assumption were derived for counterfactual probabilities <cit.>. The monotonicity assumption was later adopted to general queries <cit.>, but is too restrictive[Informally, monotonicity requires, that one potential outcome is strictly larger than the other for every fixed latent noise value.] for practical application to continuous outcomes. More general counterfactual queries can also be tackled with response functions framework <cit.> and, more generally, canonical partitioning <cit.> in combination with deep neural networks <cit.>. 
The same idea was used for causal marginal problem <cit.> when we want to combine different experimental and observational data with overlapping observed variables. However, response functions framework and canonical partitioning can not be extended to the continuous setting in practice, as their computational complexity grows exponentially with respect to the cardinality of the observed variables. Hence, the aforementioned methods are not relevant baselines in our setting. 3(ii). Partial interventional identification of continuous outcomes. The problem of partial interventional identification arises in semi-Markovian SCMs and usually aims at hidden confounding issues of treatment effect estimation <cit.>. For example, instrumental variable (IV) models always obtain informative bounds under the assumption of instrumental validity <cit.>, so that partial identification is formulated as an optimization problem <cit.>. In other cases, hidden confounding causes non-informative bounds and additional assumptions about its strength are needed, , a sensitivity model. For example, the marginal sensitivity model (MSM) assumes the odds ratio between nominal and true propensity scores and derives informative bounds for average or conditional average treatment effects (ATE and CATE, respectively) <cit.>. The confounding functions framework <cit.> was also introduced for the sensitivity analysis of ATE and CATE. <cit.> develops a sensitivity model for ATE assuming a noise level of proxy variables. All the mentioned sensitivity models operate on conditional distributions and, therefore, do not extend to counterfactual partial identification, as the latter requires assumptions about the SCM. Some methods indeed restrict functional classes in SCMs to achieve informative bounds. For example, linear SCMs are assumed for partial identification of average treatment derivative <cit.>. By developing our in our paper, we discuss what restrictions are required for SCMs to achieve informative bounds for partial counterfactual identification of continuous outcomes. Nevertheless, the methods for partial interventional identification are not aimed at counterfactual inference, and, thus, are not directly applicable in our setting. §.§ Identifiability of latent variable models and disentanglement The question of identifying latent noise variables was also studied from the perspective of nonlinear independent component analysis (ICA) <cit.>.  <cit.> showed that the joint distribution over observed and latent noise variables is in general unidentifiable. Although, nonlinear ICA was applied for interventional inference <cit.>, these works did not consider SCMs or counterfactual queries. § BACKGROUND MATERIALS Geometric measure theory. Let δ(x): ℝ→ℝ be a Dirac delta function, defined as zero everywhere, except for x = 0, where it has a point mass of 1. Dirac delta function induces a Dirac delta measure, so that (with the slight abuse of notation) ∫_ℝ f(x) δ( x) = ∫_ℝ f(x) δ(x) x = f(0), where f is a C^0 function with compact support. Dirac delta function satisfies the following important equality ∫_ℝ f(x) δ(g(x)) x = ∑_i f(x_i)/g'(x_i), where f is a C^0 function with compact support, g is a C^1 function, g'(x_i) ≠ 0, and x_i are roots of the equation g(x) = 0. In addition, we define the s-dimensional Hausdorff measure ℋ^s, as in <cit.>. Let E ⊆ℝ^n be a s-dimensional smooth manifold (s ≤ n) embedded into ℝ^n. 
Then, E is a Borel subset in ℝ^n, is Hausdorff-measurable, and s-dimensional Hausdorff measure ℋ^s(E) is the s-dimensional surface volume of E. For example, if s = 1, the Hausdorff measure coincides with a line integral, and, if s = 2, with surface integral: s=1: E = [ x_1(t); ⋮; x_n(t) ] = 𝐱(t) ⇒ ℋ^1(𝐱) = 𝐱(t)/ t_2 t, s=2: E = [ x_1(s, t); ⋮; x_n(s, t) ] = 𝐱(s, t) ⇒ ℋ^2(𝐱) = 𝐱(s, t)/ s×𝐱(s, t)/ t_2 s t, where × is a vector product, x_i(t) is a t-parametrization of the line, and x_i(s, t) is a (s, t)-parametrization of the surface. Also, ℋ^n(E) = vol^n(E) for Lebesgue measurable subsets E ⊆ℝ^n, where vol^n is a standard n-dimensional volume (Lebesgue measure). In the special case of s = 0, Hausdorff measure is a counting measure: ∫_E f(x) ℋ^0(x) = ∑_x ∈ E f(x). Importantly, the Hausdorff measure is related to the high-dimensional Dirac delta measure via the coarea formula <cit.> (Theorem 6.1.5), <cit.> (Theorem 3.3): ∫_ℝ^n f(𝐱) δ(g(𝐱)) 𝐱 = ∫_E: { g(𝐱) = 0}f(𝐱)/∇_𝐱 g(𝐱)_2ℋ^n-1(𝐱), where f is C^0 function with compact support, and g is a C^1 function with ∇_𝐱 g(𝐱) > 0 for 𝐱∈ E. Functions for which ∇_𝐱 g(𝐱) > 0 for 𝐱∈ℝ^d holds are called regular. We define a bundle of level sets of the function y = f(x) as a set of the level sets, indexed by y, i.e., {E(y); y ∈𝒴∈ℝ}, where E(y) = {y ∈ℝ: f(x) = y } and 𝒴⊆ℝ. A bundle of level sets of regular functions are closely studied in the Morse theory. In particular, we further rely on the fundamental Morse theorem, , the level set bundles of a regular function are diffeomorphic to each other. This means that there exists a continuously differentiable bijection between every pair of the bundles. Differential geometry of manifolds. In the following, we formally define the notion of the curvature for level sets. As the result of the implicit function theorem, level sets E(y) of a regular function f of class C^1 are Riemannian manifolds <cit.>, namely smooth differentiable manifolds. Riemannian manifolds can be locally approximated via Euclidean spaces, , they are equipped with a tangent space T_𝐱(E(y)) at a point 𝐱, and a dot product defined on the tangent space. The tangent space, T_𝐱(E(y)), is orthogonal to the normal of the manifold, ∇_𝐱 f(𝐱). For the level sets of regular functions of the class C^2, we can define the so-called curvature <cit.>. Informally, curvature defines the extent a Riemannian manifold bends in different directions. Convex regions of the manifold correspond to a negative curvature (in all directions), and concave regions to positive curvature, respectively. Saddle points have curvatures of different signs in different directions. Formally, curvature is defined via the rate of change of the unit normal of the manifold, which is parameterized with the orthogonal basis {x̃_1, …, x̃_n-1} of the tangent space, T_𝐱(E(y)). This rate of change, namely a differential, forms a shape operator (or second fundamental form) on the tangent space, T_𝐱(E(y)). Then, eigenvalues of the shape operator are called principal curvatures, κ_i(𝐱) for i ∈{1, …, n-1}. Principal curvatures are a measure of the extrinsic curvature, , the curvature of the manifold with respect to the embedding space, ℝ^n. Principal curvatures for Riemannian manifolds, defined as the level sets of a regular C^2 function f, can be also expressed via the gradient and the Hessian of the following function <cit.>: κ_i(𝐱) = - root_i {[ Hess_𝐱 f(𝐱) - λ I ∇_𝐱 f(𝐱); (∇_𝐱 f(𝐱))^T 0 ] = 0 }/∇_𝐱 f(𝐱)_2, i ∈{1, …, n-1}, where root_i are roots of the equation with respect to λ∈ℝ. 
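As a quick symbolic sanity check of this formula (an illustration only), consider f(𝐱) = x_1^2 + x_2^2 + x_3^2, whose level sets are spheres of radius r: the bordered determinant has a double root in λ, and both principal curvatures have absolute value 1/r, as expected.

```python
# Principal curvatures of the level sets of f = x1^2 + x2^2 + x3^2 (spheres of radius r).
import sympy as sp

x1, x2, x3, lam = sp.symbols("x1 x2 x3 lambda", real=True)
f = x1 ** 2 + x2 ** 2 + x3 ** 2
grad = sp.Matrix([sp.diff(f, v) for v in (x1, x2, x3)])
hess = sp.hessian(f, (x1, x2, x3))
gnorm = sp.sqrt(sum(g ** 2 for g in grad))

bordered = sp.BlockMatrix([[hess - lam * sp.eye(3), grad],
                           [grad.T, sp.zeros(1, 1)]]).as_explicit()
roots = sp.solve(sp.det(bordered), lam)                  # [2]: a double root lambda = 2
print([sp.simplify(-r_ / gnorm) for r_ in roots])        # [-1/sqrt(x1**2 + x2**2 + x3**2)]
```

so that κ_1 = κ_2 = -1/r in this convention.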
For 𝐱∈ℝ^2, the level sets E(y) are curves, and there is only one principal curvature κ_1(𝐱) = - 1/2∇_𝐱( ∇_𝐱 f(𝐱)/∇_𝐱 f(𝐱)_2). One of the important properties of the principal curvatures is that we can locally parameterize the manifold as the second-order hypersurface, , y = 1/2 (κ_1 x̃_1^2 + … + κ_n-1x̃_n-1^2) + O(𝐱̃_2^3), where {x̃_1, …, x̃_n-1} is the orthogonal basis of the tangent space T_𝐱(E(y)). § EXAMPLES exmpl:markov-non-idCounterfactual non-identifiability in Markovian-SCMs Here, we continue the example and provide the inference of observational (interventional) and counterfactual distributions for the SCMs ℳ_1 and ℳ_2. Let us consider ℳ_1 first. It is easy to see that a pushforward of uniform distribution in a unit square with f_Y (A, U_Y^1, U_Y^2) = A (U_Y^1 - U_Y^2 + 1) + (1 - A) (U_Y^1 + U_Y^2 - 1) induces triangular distributions. For example, for f_Y (0, U_Y^1, U_Y^2), a cumulative distribution function (CDF) of ℙ^ℳ_1(Y | A = 0) = ℙ^ℳ_1(Y_a = 0) will have the form 𝔽^ℳ_1(y | A = 0) = ℙ^ℳ_1(Y ≤ y | A = 0) = ℙ(U_Y^1 + U_Y^2 - 1 ≤ y) = 0, (y ≤ -1) ∨ (y > 1), ∫_{u_Y^1 + u_Y^2 - 1 ≤ y } u_Y^1 u_Y^2, otherwise = 0, (y ≤ -1) ∨ (y > 1), (y + 1)^2/2, y ∈ (-1, 0], 1 - (1 - y)^2/2, y ∈ (0, 1], which is the CDF of a triangular distribution. Analogously, a CDF of ℙ^ℳ_1(Y | A = 1) = ℙ^ℳ_1(Y_a = 1) is 𝔽^ℳ_1(y | A = 1) = ℙ^ℳ_1(Y ≤ y | A = 1) = ℙ(U_Y^1 - U_Y^2 + 1 ≤ y) = 0, (y ≤ 0) ∨ (y > 2), ∫_{u_Y^1 - u_Y^2 + 1 ≤ y } u_Y^1 u_Y^2, otherwise = 0, (y ≤ 0) ∨ (y > 2), y^2/2, y ∈ (0, 1], 1 - (2 - y)^2/2, y ∈ (1, 2]. To infer the counterfactual outcome distribution of the untreated, ℙ^ℳ_1(Y_a=1| A'=0, Y'=0), we make use of Lemma <ref> and properties of the Dirac delta function (, Eq. (<ref>)). We yield ℙ^ℳ_1(Y_a = 1 = y | A'=0, Y'=0) = ∫_{u_Y^1 + u_Y^2 - 1 = 0}δ(u_Y^1 - u_Y^2 + 1 - y)/∇_u_Y (u_Y^1 + u_Y^2 - 1)_2ℋ^1(u_Y) (*)=1/√(2)∫_0^2 √((1/2)^2 + (1/2)^2)δ(t - y) t = 1/2∫_0^2 δ(t - y) t = 1/2, y ∈ [0, 2], 0, otherwise, where (*) introduces a parametrization of the line {u_Y^1 + u_Y^2 - 1 = 0} with t, namely [ u_Y^1; u_Y^2 ] = [ 1/2 t; 1 - 1/2 t ], t ∈ [0, 2] ⇒ ℋ^1(u_Y) = √((1/2)^2 + (1/2)^2) t. Therefore, the ECOU for ℳ_1 is Q^ℳ_1_0 → 1(0) = ∫_0^2 1/2 y y = 1. Now, let us consider ℳ_2. Hence, f_Y(0, U_Y^1, U_Y^2) is the same for both ℳ_1 and ℳ_2, so are the observational (interventional) distributions, , ℙ^ℳ_1(Y | A = 0) = ℙ^ℳ_2(Y | A = 0). The same is true for f_Y(1, U_Y^1, U_Y^2) with (0 ≤ U_Y^1≤ 1) ∧ (U_Y^1≤ U_Y^2≤ 1) or, equivalently, ℙ^ℳ_1(Y = y | A = 1) = ℙ^ℳ_2(Y = y | A = 1) with y ∈ (0, 1]. Now, it is left to check whether ℙ^ℳ_2(Y = y | A = 1) is the density of a triangular distribution for y ∈ (1, 2]. For that, we define level sets of the function f_Y(1, U_Y^1, U_Y^2) in the remaining part of the unit square in a specific way. Formally, they are a family of “bent” lines in the transformed two-dimensional space ũ_Y^1, ũ_Y^2 with ũ_Y^2 = 8t^2 ũ_Y^1 + b(t), t ∈ [0, 1]; ũ_Y^1∈ [-1, 1]; ũ_Y^2∈ [0, 1 - ũ_Y^1], where b(t) is a bias, depending on t. The area under the line should change with quadratic speed, as it will further define the CDF of the induced observational distribution, 𝔽^ℳ_2(y | A = 1), y ∈ (1, 2], S(t) = 2t - t^2, so that S(0) = 0 and S(1) = 1. On the other hand, the area under the line from the family is S(t) = 2 ∫_0^ũ_* (8t^2 ũ_Y^1 + b(t)) ũ_Y^1 + (1 - ũ_*)^2 = 8t^2ũ_*^2 + 2ũ_*b(t) + (1 - ũ_*)^2, where ũ_* = 1 - b(t)/8t^2 + 1. 
Therefore, we can find the dependence of b(t) on t, so that the area S(t) is preserved: 2t - t^2 = 8t^2ũ_*^2 + 2ũ_*b(t) + (1 - ũ_*)^2 ⟺ b(t) = 1 - √((t - 1)^2 (8t^2 + 1)). Let us reparametrize t with t = y - 1, so that the line from a family with t = 0 corresponds to y = 1 and t = 1 to y = 2, respectively. We then yield ũ_Y^2 = 2 (y - 1) ũ_Y^1 + 1 - √((y - 2)^2 (8(y - 1)^2 + 1)). In order to obtain the function f_Y(1, U_Y^1, U_Y^2) in the original coordinates u_Y^1, u_Y^2, we use a linear transformation T given by T: [ ũ_Y^1; ũ_Y^2 ]→[ - u_Y^1 - u_Y^2 + 1; u_Y^1 - u_Y^2 ]. Hence, the family of “bent” lines can be represented as the following implicit equation: F(y, u_Y^1, u_Y^2) = u_Y^1-u_Y^2-2 (y-1) -u_Y^1-u_Y^2+1-1+√((y-2)^2 (8 (y-1)^2+1)) = 0. Importantly, the determinant of the Jacobian of the transformation T is equal to 2; therefore, the area under the last line S(1) shrinks from 1 (in space ũ_Y^1, ũ_Y^2) to 0.5 (in space u_Y^1, u_Y^2). Thus, we can also easily verify with Eq. (<ref>) that the CDF of the induced observational distribution coincides with the CDF of the triangular distribution for y ∈ (1, 2]: 𝔽^ℳ_2(y | A = 1) = 1/2 + ℙ^ℳ_2(1 ≤ Y ≤ y | A = 1) = 1/2 + 1/2 S(y - 1) = 1 - (2 - y)^2/2. To infer the counterfactual outcome distribution of the untreated, i.e., ℙ^ℳ_2(Y_a=1| A'=0, Y'=0), we again use the Lemma <ref> and properties of Dirac delta function (, Eq. (<ref>)). We yield ℙ^ℳ_2(Y_a=1 = y | A'=0, Y'=0) = 1/2, y ∈ (0, 1], ℙ^ℳ_2(Y_a=1 = y | A'=0, Y'=0) = ∫_{u_Y^1 + u_Y^2 - 1 = 0}δ(F(y, u_Y^1, u_Y^2))/∇_u_Y (u_Y^1 + u_Y^2 - 1)_2ℋ^1(u_Y) (*)=1/√(2)∫_0^1 √((1/2)^2 + (1/2)^2)δ(t - 1 + √((y-2)^2 (8 (y-1)^2+1))) t = 1/2- (√((y-2)^2 (8 (y-1)^2+1)))' = (5 - 4y)^2 (2 - y)/2 √((y - 2)^2 (8(y -1)^2 + 1)), y ∈ (1, 2], where (*) introduces a parametrization of the line {u_Y^1 + u_Y^2 - 1 = 0} with t, namely [ u_Y^1; u_Y^2 ] = [ 1/2 t + 1/2; 1/2 - 1/2 t ], t ∈ [0, 1] ⇒ ℋ^1(u_Y) = √((1/2)^2 + (1/2)^2) t. Finally, the ECOU for ℳ_2 can be calculated numerically via Q^ℳ_2_0 → 1(0) = ∫_0^1 1/2 y y + ∫_1^2 (5 - 4y)^2 (2 - y)/2 √((y - 2)^2 (8(y -1)^2 + 1)) y ≈1/4 + 0.864 ≈ 1.114. [Box-Müller transformation] Here, we demonstrate the application of the Lemma <ref> to infer the standard normal distribution with the Box-Müller transformation. The Box-Müller transformation is a well-established approach to sample from the standard normal distribution, which omits the usage of the inverse CDF of the normal distribution. Formally, the Box-Müller transformation is described by an SCM ℳ_ of class 𝔅(C^1, 2)[Formally, f_Y ∈ C^1 only for u_Y ∈ (0, 1)^2.] with the following function for Y: ℳ_: f_Y(A, U_Y^1, U_Y^2) = f_Y(U_Y^1, U_Y^2) = √(-2 log(U_Y^1)) cos(π U_Y^2). We will now use the Lemma <ref> to verify that ℙ^ℳ_(Y | a) = N(0, 1). We yield ℙ^ℳ_(Y = y | a) = ∫_{√(-2 log(u_Y^1)) cos(π u_Y^2) = y}1/∇_u_Y (√(-2 log(u_Y^1)) cos(π u_Y^2))_2ℋ^1(u_Y) = ∫_{√(-2 log(u_Y^1)) cos(π u_Y^2) = y}1/√(-cos^2(π u_Y^2)/2 u_Y^1^2 log(u_Y^1) - 2 π^2 log(u_Y^1) sin^2(π u_Y^2))ℋ^1(u_Y) (*)=∫_0^1/2√(1 + (- π y^2 sin(π t)/cos^3(π t)exp( -y^2/2 cos^2(π t)) )^2)/√(cos^4(π t)/y^2exp(y^2/cos^2(π t)) + π^2 sin^2(π t) y^2/cos^2(π t)) t = ∫_0^1/2yexp(- y^2/2 cos^2(π t))/cos^2(π t) t = 1/√(2 π)exp(- y^2/2) = N(y; 0, 1), where (*) introduces a parametrization of the level set {√(-2 log(u_Y^1)) cos(π u_Y^2) = y}, y > 0 with t, namely [ u_Y^1; u_Y^2 ] = [ exp( -y^2/2 cos^2(π t)); t ], t ∈ [0, 1/2) ⇒ ℋ^1(u_Y) = √((- π y^2 sin(π t)/cos^3(π t)exp( -y^2/2 cos^2(π t)) )^2 + 1) t. This parametrization is also valid for y < 0 and t ∈ (1/2, 1]. 
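As a quick empirical cross-check of this calculation (a sketch added for illustration, not part of the original derivation), one can also sample the Box-Müller map directly and compare the result with N(0, 1):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
u1, u2 = rng.uniform(size=(2, 200_000))
y = np.sqrt(-2.0 * np.log(u1)) * np.cos(np.pi * u2)   # f_Y of the SCM above

print(y.mean(), y.std())                  # close to 0 and 1
print(stats.kstest(y, "norm").statistic)  # small KS distance to N(0, 1)
```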
Due to the symmetry of the function f_Y, we only consider one of the cases (see Fig. <ref> (left) with the level sets). [Connected components of the factual level sets] Here, we construct the function with the level sets consisting of multiple connected components. For that, we extend Example <ref> with the Box-Müller transformation. We define a so-called oscillating Box-Müller transformation f_Y(U_Y^1, U_Y^2) = √(-2 log(U_Y^1)) cos(2^-log_2(U_Y^2)π U_Y^2). We plot the level set for y = -0.5 in Fig. <ref>, namely E(y) = {u_Y ∈ [0, 1]^2: f_Y(u_Y^1, u_Y^2) = y} . Here, we see that the level sets consist of an infinite number of the connected components with an infinite total length. [Curvature of level sets] Let us consider three SCMs of class 𝔅(C^2, 2), which satisfy the Assumption <ref> with κ = 50. Then, the curvature of the level sets is properly defined for the function f_Y (see Eq. (<ref>)). Fig. <ref> provides a heatmap with the absolute curvature, κ_1(u_Y), for the counterfactual level sets, {E(y, a): y ∈𝒴_a}. Here, all three SCMs are instances of our . [ℳ_] There exist BGMs in class 𝔅(C^1, 1), for which the curvature of level sets are not properly defined at certain points. For example, an SCM ℳ_, which induces triangular distributions, like in Example <ref>. For this example, ℳ_ will have the following function for Y: ℳ_: f_Y(A, U_Y) = A √(2 U_Y) + (A - 1) (√(2 U_Y) - 1), if U_Y ∈ [0.0, 0.5], A (2 - √(2 (U_Y + 1))) + (A - 1) (1 - √(2 (U_Y + 1))) , if U_Y ∈ (0.5, 1.0]. [ℳ_] Here, we provide examples of SCMs in the class 𝔅(C^∞, d), for which the bundles of the level sets coincide for both treatments, , {E(y, a): y ∈𝒴_a} = {E(y, a'): y ∈𝒴_a'}, and all the level sets are flat hypersurfaces (κ = 0). For example, SCMs with the following functions for Y: ℳ_: f_Y(A, U_Y) = g(A, w_1 U_Y^1 + … + w_d U_Y^d), where g(a, ·) is an invertible function in class C^∞ and w_1, …, w_d are coefficients from the linear combination. After a reparametrization, it is always possible to find an equivalent BGM with the function: g(A, w_1 U_Y^1 + … + w_d U_Y^d) = g(A, Ũ_Y) = g(A, 𝔽^-1_Ũ_Y(U_Y^1)), where 𝔽^-1_Ũ_Y(·) is an inverse CDF of Ũ_Y = w_1 U_Y^1 + … + w_d U_Y^d. The smoothness of the inverse CDF then defines the smoothness of the BGM. [ℳ_] This example is the generalization of the Example <ref>. Here, we also use the same bundles of the level sets for both treatments, but allow for the curvature of κ > 0: ℳ_: f_Y(A, U_Y) = g(A, h(U_Y)), where g(a, ·) is an invertible function in class C^∞, and h(·): [0, 1]^d →ℝ is a function in class C^∞ with bounded by κ curvature of the level sets. In this case, we also can reparametrize the latent space with a non-linear transformation and find an equivalent BGM: g(A, h(U_Y)) = g(A, Ũ_Y) = g(A, 𝔽^-1_Ũ_Y(U_Y^1)), where 𝔽^-1_Ũ_Y(·) is an inverse CDF of Ũ_Y = h(U_Y). The smoothness of the inverse CDF then analogously defines the smoothness of the BGM. [ℳ_] We can construct an SCM of class 𝔅(C^∞, 2), so that the level sets are all straight lines (κ = 0) and perpendicular to each other for different a ∈{0, 1}, , ℳ_: f_Y(A, U_Y^1, U_Y^2) = A g_1(U_Y^1) + (A - 1) g_2(U_Y^2), where g_1(·), g_2(·) are invertible functions in class C^∞. In this case, the ECOU for y' ∈𝒴_a' always evaluates to the conditional expectation Q_0 → 1^ℳ_(y') = 1/ℙ^ℳ_(Y = y' | A = 0)∫_{u_Y^2 = g_2^-1(y')}g_1(u_Y^1)/∇_u_Y^2 g_2(u_Y^2)ℋ^1(u_Y) = ℙ^ℳ_(Y = g_2(g_2^-1(y')) | A = 0)/ℙ^ℳ_(Y = y' | A = 0)∫_0^1 g_1(u_Y^1) u_Y^1 = 𝔼^ℳ_(Y | A = 1). Similarly, the ECOT is 𝔼^ℳ_(Y | A = 0). 
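As a numerical illustration of this collapse to the conditional expectation (a sketch with the hypothetical choices g_1(u) = u^2 and g_2(u) = u, which are ours and not from the paper), the ECOU can be approximated by rejection sampling the abduction step and compared with the point-identified value of a monotone BGM fitted to the same observational distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
u1, u2 = rng.uniform(size=(2, 1_000_000))

# Hypothetical instance: f_Y(1, u) = g1(u1) = u1^2 and f_Y(0, u) = g2(u2) = u2.
y_treated, y_untreated = u1 ** 2, u2

# ECOU Q_{0->1}(y'): abduction by rejection sampling (keep draws close to the
# factual level set {g2(u2) = y'}), then push forward with the counterfactual
# function g1; the result matches E(Y | A = 1) regardless of y'.
y_prime, eps = 0.5, 1e-3
on_level_set = np.abs(y_untreated - y_prime) < eps
print("ECOU of the perpendicular SCM:", y_treated[on_level_set].mean())  # ~ 1/3

# Point-identified BGM value for the same observational distributions.
q = np.mean(y_untreated <= y_prime)
print("ECOU of the monotone BGM:     ", np.quantile(y_treated, q))       # ~ 1/4
```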
This result is, in general, different from the result of BGMs, which, , would yield Q_a' → a^ℳ(y') = 𝔽_a(0.5) for y' = 𝔽^-1_a'(0.5). Thus, point identification is not guaranteed with κ = 0. § PROOFS Uniformity of the latent noise. Classes of SCMs 𝔅(C^k, d) assume independent uniform latent noise variables, , U_Y ∼Unif(0, 1)^d. This assumption does not restrict the distribution of the latent noise. Namely, for a bivariate SCM ℳ_non-unif with d-dimensional non-uniform continuous latent noise, it is always possible to find an equivalent[Equivalence of the SCMs is defined as the almost surely equality of all the functions in two SCMs <cit.>.] SCM ℳ_unif of class 𝔅(C^k, d). This follows from the change of variables formula and the inverse probability transformation <cit.>: ℳ_non-unif: Y = f̃_Y(A, Ũ_Y), Ũ_Y ∼ℙ( Ũ_Y); f̃_Y(a, ·) ∈ C^k, ⟺ ℳ_unif: Y = f̃_Y(A, T(U_Y)) = f_Y(A, U_Y), U_Y ∼Unif(0, 1)^d; f_Y(a, ·) ∈ C^min(1, k), where Ũ_Y = T(U_Y), and T(·): (0, 1)^d →ℝ^d is a diffeomorphism (bijective C^1 transformation) and a solution to the following equation: ℙ(Ũ_Y = ũ_Y)_∈ C^0 = |(T^-1(ũ_Y)^∈ C^1/ũ_Y)_∈ C^0| For further details on the existence and explicit construction of T(·), see <cit.>. Thus, f_Y(a, ·) is of class C^min(1, k), as the composition of T(·) and f̃_Y(a, ·). In our we require the functions of the SCMs to be of class C^2. In this case, 𝔅(C^2, d) also includes all the SCMs with non-uniform continuous latent noise with densities of class C^1. This also follows from the change of variables theorem, see Eq. (<ref>). More general classes of functions, , all measurable functions, or more general classes of distributions, , distributions with atoms, fall out of the scope of this paper, even though we can find bijective (but non-continuous) transformations between probability spaces of the same cardinality (see the isomorphism of Polish spaces, Theorem 9.2.2 in <cit.>). * Lemma is a result of the coarea formula (see Eq. (<ref>)): ℙ^ℳ(Y = y | a) = ∫_[0, 1]^dℙ^ℳ(Y = y, U_Y = u_Y | a) u_Y (*)=∫_[0, 1]^dℙ^ℳ(Y = y | u_Y, a) ℙ^ℳ(U_Y = u_Y) u_Y = ∫_[0, 1]^dδ(f_Y(a, u_Y) - y) 1 u_Y = ∫_E(y, a)1/∇_u_Y f_Y(a, u_Y)_2ℋ^d-1(u_Y), where (*) holds, as U_Y is independent of A. In a special case, when we set d=1, we obtain a change of variables formula ℙ^ℳ(Y = y | a) = ∫_E(y, a)1/∇_u_Y f_Y(a, u_Y)ℋ^0(u_Y) = ∑_u_Y ∈ E(y,a)∇_u_Y f_Y(a, u_Y)^-1. The function f_Y(a, u_Y) can be identified (up to a sign) given the observational distribution ℙ(Y | a), induced by some SCM ℳ of class 𝔅(C^1, 1) if it is strictly monotonous in u_Y. In this case, the function is f_Y(a, u_Y) = 𝔽^-1_a(± u_Y ∓ 0.5 + 0.5) where 𝔽^-1_a is an inverse CDF of the observational distribution, and the sign switch corresponds to the strictly monotonically increasing and decreasing functions. As d=1, we can apply the change of variables formula from Eq. (<ref>): ℙ(Y = y | a) = ∑_u_Y ∈ E(y,a)∇_u_Y f_Y(a, u_Y)^-1 = ∇_u_Y f_Y(a, u_Y)^-1, u_Y = f^-1_Y(a, y) where f^-1_Y(a, y) is an inverse function with respect to u_Y (it is properly defined, as the function is strictly monotonous). Thus, to find f_Y, we have to solve the following differential equation <cit.>: ℙ(Y = f_Y(a, u_Y) | a) = ∇_u_Y f_Y(a, u_Y)^-1. By the properties of derivatives, this equation is equivalent to ℙ(Y = y | a) = ∇_y f_Y^-1(a, y). 
Since the latter holds for every y, we can integrate it from -∞ to f_Y(a, u_Y): ∫_-∞^f_Y(a, u_Y)ℙ(Y = y | a) y = ∫_-∞^f_Y(a, u_Y)∇_y f_Y^-1(a, y) y = ∫_f_Y^-1(a,-∞)^u_Y t = ∫_0^u_Y t = u_Y, f_Y is strictly increasing, - ∫_f_Y^-1(a,-∞)^u_Y t = ∫_u_Y^1 t = 1 - u_Y, f_Y is strictly decreasing. Therefore, 𝔽_a(f_Y(a, u_Y)) = ± u_Y ∓ 0.5 + 0.5. Lemma <ref> also implies that critical points of f_Y(a, u_Y), i.e., {u_Y ∈ [0, 1]^d: ∇_u_Y f_Y(a, u_Y)_2 = 0 }, are mapped onto points y with infinite density. Therefore, the assumption, that the function is regular, namely ∇_u_Y f_Y(a, u_Y) > 0, is equivalent to the assumption of the continuous observational density with the finite density. * Both counterfactual queries can be inferred with the abduction-action-prediction procedure. (1) Abduction infers the posterior distribution of the latent noise variables, conditioned on the evidence: ℙ^ℳ(U_Y = u_Y | a',y') = ℙ^ℳ(U_Y = u_Y, A = a', Y = y')/ℙ^ℳ(A = a', Y = y')(*)=ℙ^ℳ(Y = y' | a', u_Y) ℙ^ℳ(U_Y = u_Y)/ℙ^ℳ(Y = y' | a') = δ(f_Y(a', u_Y) - y')/ℙ^ℳ(Y = y' | a'), where (*) holds, as U_Y is independent of A. (2)-(3) Action and prediction are then pushing forward the posterior distribution with the counterfactual function. For example, the density of the counterfactual outcome distribution of the [un]treated is ℙ^ℳ(Y_a = y | a', y') = ∫_[0, 1]^dℙ^ℳ(Y_a = y, U_Y = u_Y | a', y') u_Y = ∫_[0, 1]^dℙ^ℳ(Y_a = y | a', y', u_Y) ℙ^ℳ(U_Y = u_Y | a',y') u_Y (*)=1/ℙ^ℳ(Y = y' | a')∫_[0, 1]^dδ(f_Y(a, u_Y) - y) δ(f_Y(a', u_Y) - y') u_Y = 1/ℙ^ℳ(Y = y' | a')∫_E(y',a')δ(f_Y(a, u_Y) - y)/∇_u_Y f_Y(a', u_Y)_2ℋ^d-1(u_Y), where (*) holds due to the independence of Y from Y' and A', conditional on U_Y (see the parallel worlds network in Fig. <ref>). The inference of the ECOU [ECOT] is analogous Q_a' → a^ℳ(y') = 𝔼^ℳ(Y_a | a', y') = ∫_𝒴_a∫_[0, 1]^dℙ^ℳ(Y_a = y, U_Y = u_Y | a', y') u_Y y = ∫_[0, 1]^d𝔼^ℳ(Y_a | a', y', u_Y) ℙ^ℳ(U_Y = u_Y | a',y') u_Y (*)=1/ℙ^ℳ(Y = y' | a')∫_[0, 1]^d f_Y(a, u_Y) δ(f_Y(a', u_Y) - y') u_Y = 1/ℙ^ℳ(Y = y' | a')∫_E(y',a')f_Y(a, u_Y)/∇_u_Y f_Y(a', u_Y)_2ℋ^d-1(u_Y), where (*) holds due to the independence of Y from Y' and A', conditional on U_Y (see the parallel worlds network in Fig. <ref>), and due to the law of the unconscious statistician. The ECOU [ECOT] can be identified (up to a sign) given the observational distribution ℙ(Y | a), induced by some SCM ℳ of class 𝔅(C^1, 1) if it is strictly monotonous in u_Y. In this case, the ECOU [ECOT] is Q_a' → a^ℳ(y') = 𝔽^-1_a(±𝔽_a'(y') ∓ 0.5 + 0.5). The corollary is a result of the the Lemma <ref> and the Corollary <ref>: Q_a' → a^ℳ(y') = 1/ℙ^ℳ(Y = y' | a')∫_E(y',a')f_Y(a, u_Y)/∇_u_Y f_Y(a', u_Y)ℋ^0(u_Y) = ∇_u_Y f_Y(a', u_Y) f_Y(a, u_Y)/∇_u_Y f_Y(a', u_Y) = f_Y(a, u_Y), u_Y = f^-1_Y(a', y'), Therefore, the ECOU [ECOT] is Q_a' → a^ℳ(y') = 𝔽^-1_a(± u_Y ∓ 0.5 + 0.5) = 𝔽^-1_a(±𝔽_a'(y') ∓ 0.5 + 0.5). * Without the loss of generality, let us consider the lower bound of the ECOU [ECOT], namely, Q_a' → a(y') = inf_ℳ∈𝔅(C^k, d) Q_a' → a^ℳ(y') s.t. ∀ a ∈{0, 1}: ℙ(Y | a) = ℙ^ℳ(Y | a). The proof then proceeds in two steps. First, we prove the statement of the theorem for d=2, , when latent noise is two-dimensional. Then, we extend it to arbitrary dimensionality. Step 1 (d=2). 
Lemma <ref> suggests that to minimize the ECOU [ECOT], we have to either increase the proportion of the length (one-dimensional volume) of the factual level set, which intersects the bundle of counterfactual level sets, or change the counterfactual functions, by “bending” the bundle of counterfactual level sets around a factual level set. Here, we focus on the second case, which is sufficient to construct an SCM with non-informative bounds. Formally, we can construct a sequence of SCMs {ℳ_non-inf^ε(y): y ∈𝒴_a, 0 < ε(y) < 1} of class 𝔅(C^∞, 2), for which Q_a' → a^ℳ_non-inf^ε(y)(y') gets arbitrarily close to l_a as y → l_a and ε(y) → 0. For all {ℳ_non-inf^ε(y): y ∈𝒴_a, 0 < ε(y) < 1}, we choose the same factual function f_Y(a', U_Y^1, U_Y^2) = 𝔽^-1_a'(U_Y^2), where 𝔽^-1_a'(·) is the inverse CDF of the observational distribution ℙ(Y | a'). This is always possible as a result of the Corollaries <ref> and <ref>. Hence, all the level sets of the factual function are horizontal straight lines of length 1 (see Fig. <ref>), due to E(y', a') = {𝔽^-1_a'(u_Y^2) = y'}. Now, we construct the counterfactual functions in the following way. For a fixed ε(y), we choose the level sets of the function f_Y(a, ·), { E^ε(y)(y, a), y ∈𝒴_a}, so that they satisfies the following properties. (1) Each E^ε(y)(y, a) intersects the factual level set E(y', a') at a certain point once and only once and ends at the boundary of the boundaries of the unit square. (2) Each E^ε(y)(y, a) splits the unit square on two parts, an interior with the area 𝔽_a(y) and an exterior, with the area 1 - 𝔽_a(y), respectively (where 𝔽_a(·) is the CDF of the counterfactual distribution). This property ensures that the induced CDF coincides with the observational CDF at every point, namely, ∀ y ∈𝒴_a: 𝔽^ℳ_non-inf^ε(y)_a(y) = 𝔽_a(y). (3) All the level sets have the nested structure, namely, all the points of the interior of E^ε(y)(y, a) are mapped to y < y, and points of the exterior, to y > y. Additionally, for some fixed y ∈𝒴_a, we set ε = ε(y) as the proportion of the factual level set, E(y', a'), fully contained in the exterior of the counterfactual level set, E(y, a). Thus, the interior of the counterfactual level set covers 1 - ε(y) of the factual level set. Therefore, the ECOU [ECOT] for ℳ_non-inf^ε(y) for some fixed y ∈𝒴_a is Q_a' → a^ℳ_non-inf^ε(y)(y') = (1-ε(y)) y + ε(y) y, y∈ [l_a, y], y∈ [y, u_a], which follows from the mean value theorem for integrals, since, for every y ∈𝒴_a, we can choose ε(y) arbitrarily close to zero, and, thus inf_y ∈𝒴_a, 0 < ε(y) < 1 Q_a' → a^ℳ_non-inf^ε(y)(y') = inf_y ∈𝒴_a, 0 < ε(y) < 1 (1-ε(y)) y + ε(y) y = l_a. Step 2 (d>2). The construction of the factual and counterfactual level sets extends straightforwardly to the higher dimensional case. In this case, the factual level sets are straight hyperplanes and the counterfactual level sets are hypercylinders, “bent” around the factual hyperplanes. * We will now show that ECOU [ECOT] always have informative bounds for every possible SCM satisfying the assumptions of the theorem. Without the loss of generality, let us consider the lower bound, , Q_a' → a(y') = inf_ℳ∈𝔅(C^k, d) Q_a' → a^ℳ(y') s.t. ∀ a ∈{0, 1}: ℙ(Y | a) = ℙ^ℳ(Y | a). The proof contains three steps. First, we show that the level sets of C^2 functions with compact support consist of the countable number of connected components, each with the finite Hausdorff measure. Each connected component of almost every level set is thus a d-1-dimensional Riemannian manifold. 
Second, we demonstrate that, under Assumption <ref>, almost all the level set bundles have a nested structure, are diffeomorphic to each other, have boundaries, and their boundaries always coincide with the boundary of the unit hypercube [0, 1]^d. Third, we arrive at a contradiction in that, to obtain non-informative bounds, we have to fit a d-ball of arbitrarily small radius to the interior of the counterfactual level set (which is not possible with bounded absolute principal curvature). Therefore, with Assumption <ref> has informative bounds. Step 1. The structure of the level sets of C^2 functions with the compact support (or more generally, Lipschitz functions) were studied in <cit.>. We employ Theorem 2.5 from <cit.>, which is a result of Sard's theorem. Specifically, the level sets of C^2 functions with the compact support consist of the countable number of connected components, each with the finite Hausdorff measure (surface volume), namely, ℋ^d-1(E(y, a)) < ∞ for a ∈{0, 1}. In Example <ref>, we provide an SCM with the level sets with indefinitely many connected components. Furthermore, connected components of almost every level set are (d-1)-dimensional Riemannian manifolds of class C^2. Some points y have (d-2)- and lower dimensional manifolds, but their probability measure is zero (, the level sets of the bounds of the support, E(l_a, a) and E(u_a, a)). Step 2. The Assumption <ref> assumes the existence of the principal curvatures at every point of the space (except for the boundaries of the unit hypercube [0, 1]^d), for both the factual and the counterfactual function. Thus, the functions of the SCMs are regular, as otherwise the principal curvatures would not be defined at the critical points (see Eq. (<ref>)). As a result of the fundamental Morse theorem (see Appendix <ref>), almost all the level set bundles are nested and diffeomorphic to each other. Specifically, the level set bundles {E(y, a): y ∈ (l_a, u_a)} for a ∈{0, 1}. The latter exclude the level sets of the bounds of the 𝒴_a, , E(l_a, a) and E(u_a, a) for a ∈{0, 1}, which in turn are laying completely in the boundary of the unit hypercube [0, 1]^d. Another important property is that all the level sets have a boundary (otherwise, due to the diffeomorphism, the critical point would exist in their interior) and this boundary lies at the boundary of the unit hypercube [0, 1]^d. Step 3. Let us fix the factual level set, E(y', a'). By the observational distribution constraint, we know that the interior occupies 𝔽_a'(y') of the total volume of the unit hypercube [0, 1]^d, where 𝔽_a'(·) is the CDF of the observational distribution. Also, its absolute principal curvatures are bounded by κ. Then, let us assume there exists a counterfactual function, which obtains non-informative bounds of ECOU [ECOT]. For this, a counterfactual level set for some y close to l_a has to contain exactly F_a(y) volume in the interior. At the same time, this level set has to contain as much of the surface volume of the factual level set, ℋ^d-1(E(y', a')), as possible in its interior. This is only possible by “bending” the counterfactual level set along one of the directions, so that the boundaries of the counterfactual level set lay at the boundaries of the unit hypercube, [0, 1]^d. Due to the bound on the maximal absolute principal curvature, we have to be able to fully contain a d-ball with radius 1/κ inside the interior of the counterfactual level set. 
This d-ball occupies the volume Vol^d(d-ball) = π^n/2/Γ(d/2 + 1)1/κ^d, where Γ(·) is a Gamma function. Therefore, the volume of the interior of the counterfactual level set has to be at the same time arbitrarily close to zero, as F_a(y) → 0 as y → l_a, and at least of the volume of the d-ball with radius 1/κ. Thus, we arrived at the contradiction, which proves the theorem. In general, upper and lower bounds under assumed (κ) are different for different d ≥ 2, , l(κ, d_1) ≠ l(κ, d_2) for d_1 ≠ d_2. This follows from the properties of the high-dimensional spaces, , we one can fit more d-balls of the fixed radius to the hypercube [0, 1]^d with the increase of d. Nevertheless, without the loss of generality and in the absence of the information on the dimensionality, we can set d = 2, as we still cover the entire identifiability spectrum by varying κ. § DISCUSSION OF EXTENSIONS AND LIMITATIONS Sharp bounds. To obtain the sharp bounds under (κ), one has to exactly solve the constrained variational problem task formulated in the Definition <ref> in the space of the SCMs, which satisfy the Assumption <ref>. This is a very non-trivial task, the distributional constraint and the constraint of level sets of bounded curvature both include the partial differential equation with the Hausdorff integrals. For this task, in general, an explicit solution does not exist in a closed form (unlike, , the solution of the marginal sensitivity model <cit.>). Extension of to discrete treatments and continuous covariates. Our (κ) naturally scales to bivariate SCMs with categorical treatments, A ∈{0, …, K}; and multivariate Markovian SCMs with additional covariates, X ∈ℝ^m, which are all predecessors of Y. These extensions, nevertheless, bring additional challenges from estimating ℙ(Y | A, X) from the observational (interventional) data. Hence, additional smoothness assumptions are required during modeling. Extension of to semi-Markovian SCMs. (κ) is limited to the Markovian SCMs. For semi-Markovian SCMs, , a setting of the potential outcomes framework <cit.>, additional assumptions are needed. Specifically, in semi-Markovian SCMs, the latent noise variables could be shared for X and Y, and this complicates counterfactual inference. Future work may consider incorporating other sensitivity models to our (κ), , marginal sensitivity model, MSM (Γ) <cit.>, as described in Fig. <ref>. § DETAILS ON ARCHITECTURE AND INFERENCE OF Our provides an implementation of (κ) for SCMs of class 𝔅(C^2, 2). In the following, we list several important requirements, that the underlying probabilistic model has to satisfy. Then, we explain how deep normalizing flows <cit.> meet these requirements. * A probabilistic model has to explicitly model an arbitrary function f̂_Y(a, u_Y), u_Y ∈ [0, 1]^2, of class C^2. Normalizing flows with free-form Jacobians can implement arbitrary invertible C^2 transformations from F̂_a: ℝ^2 →ℝ^2. Then, by omitting one of the outputs, we can model functions f_Y(a, ·): [0, 1]^2 →𝒴_a ⊂ℝ. We discuss normalizing flows with free-form Jacobians in Sec. <ref>. * A model has to fit the observational (interventional) distribution, given a sample from it, 𝒟 = {A_i, Y_i}_i=1^n ∼ℙ(A, Y). This is possible for the normalizing flows, as they are maximizing the log-likelihood of the data, ℙ̂(Y=Y_i | A = A_i), directly with the gradient-based methods. Importantly, we also employ variational augmentations <cit.> to evaluate the log-likelihood. We discuss this in detail in Sec. <ref>. 
* A probabilistic model should be able to perform the estimation of ECOU [ECOT], through abduction-action-prediction steps. In normalizing flows, this can be achieved with the help of the variational augmentations <cit.>. Specifically, for evidence, y' and a', our proposed can infer arbitrarily many points from the estimated factual level set Ê(y', a'), which follow the estimated posterior distribution of the latent noise, , ℙ̂(U_Y | y', a'). We discuss this in detail in Sec. <ref>. * All the level sets of the modeled function need to have a bounded curvature. This is possible for normalizing flows as a deep learning model. Namely, the curvature κ_1(u_Y) can be evaluated via automatic differentiation tools and added the loss of the model. We discuss this in detail in Sec. <ref>. §.§ Choice of a normalizing flow with free-form Jacobians Normalizing flows (NFs) <cit.> differ in how flexible the modeled transformations in high dimensions are. Some models, like planar NF <cit.> or Sylvester NF <cit.>, only allow for transformations with low-rank Jacobians. Other models employ masked auto-regressive networks <cit.> or coupling blocks <cit.> to construct transformations with lower-triangular (structured) Jacobians. Recently, several models were proposed for modeling free-form Jacobian transformations, , (i) continuous NFs <cit.> or (ii) residual NFs <cit.>. However, this flexibility comes at a computational cost. For (i) continuous NFs, we have to solve a system of ordinary differential equations (ODEs) for every forward and reverse transformation. As noted by <cit.>, numerical ODE solvers only work well for the non-stiff differential equations (=for non-steep transformations). For (ii) residual NFs, the computational complexity stems from the evaluation of the determinant of the Jacobian, which is required for the log-likelihood, and, from the reverse transformation, where we have to employ fixed point iterations. We experimented with both continuous NFs and residual NFs but we found the latter to be more stable. The drawbacks of residual NFs can be partially fixed under our (κ). First, the determinant of the Jacobian can be evaluated exactly, as we set d=2. Second, the numerical complexity of the reverse transformation[Note that, in our , forward transformation corresponds to the inverse function, , f̂_Y(a, ·).] can be lowered by adding more residual layers, so that each layer models less steep transformation and the total Lipschitz constant is larger for the whole transformation. Hence, in our work, we resorted to residual normalizing flows <cit.>. §.§ Variational augmentations and pseudo-invertibility Variational augmentations were proposed for increasing the expressiveness of normalizing flows <cit.>. They augment the input to a higher dimension and then employ the invertible transformation of the flow. Hence, normalizing flows with variational augmentations can be seen as pseudo-invertible probabilistic models <cit.>. We use variational augmentations to model the inverse function, f̂_Y(a, )̇, , sample points from the level sets. For this, we augment the (estimated) outcome Y ∈𝒴 with Y_aug∼ N(g^a(Y), ε^2) ∈ℝ, where g^a(·) is a fully-connected neural network with one hidden layer and parameters θ_a, and ε^2 is a hyperparameter. As such, our proposed models f̂_Y(a, ·) through a two-dimensional transformation F̂_a = (f̂_Y_aug(a, ·), f̂_Y(a, ·)): [0, 1]^2 →ℝ×𝒴_a. Variational augmentations facilitate our task in two ways. 
(1) They allow evaluating the log-likelihood of the data via logℙ̂_β_a, θ_a(Y = Y_i | a) = 𝔼_Y_aug, i[logℙ̂_β_a(Y_aug = Y_aug, i, Y = Y_i) - log N(Y_aug, i; g^a(Y_i), ε^2)], where β_a are the parameters of the residual normalizing flow, and N(·; g^a(Y_i), ε^2) is the density of the normal distribution. Here, we see that, by increasing the ε^2, the variance of the sample estimate of the log-likelihood of the data increases. On the other hand, for ε^2 → 0, the transformations of the residual flow are becoming steeper, as we have to transport the point mass to the unit square, [0, 1]^2. (ii) Variational augmentations enable the abduction-action-prediction to estimate ECOU [ECOT] in a differential fashion. At the abduction step, we infer the sample from the posterior distribution of the latent noise, defined at the estimated factual level set Ê(y', a'). For that, we variationally augment the evidence y' with b samples from (y', {Y'_aug, j}_j=1^b ∼ N(g^a'(y'), ε^2)), and, then, transform them to the latent noise space with the factual flow {(ũ_Y^1,j, ũ_Y^2, j)}_j=1^b = F̂^-1_a'({(y', Y'_aug, j)}_j=1^b). Then, the action step selects the counterfactual flow, F̂_a(·), and the prediction step transform the abducted latent noise with it: {(Ŷ_aug,j, Ŷ_j)}_j=1^b = F̂_a({(ũ_Y^1,j, ũ_Y^2, j)}_j=1^b ). In the end, ECOU [ECOT] is estimated by averaging Ŷ_j: Q̂_a' → a(y') = 1/b∑_j=1^b Ŷ_j. In our , the parameters of the residual normalizing flow, β_a, and the parameters of the variational augmentations, θ_a, are always optimized jointly. We use the reparametrization trick, to back-propagate through sampling of the augmentations. §.§ Penalizing curvatures of the level sets Although there exist deep learning models which explicitly bound the curvature of the modeled function (, <cit.>), none of the works (to the best of our knowledge) did enforce the curvature of the level sets of the modeled function. In our , the curvature κ_1(u_Y) is evaluated using automatic differentiation exactly via Eq. (<ref>) and then incorporated into the loss. As the calculation of the second derivatives is a costly operation, we heuristically evaluate the curvature only at the points of the counterfactual level set, which corresponds to the estimated ECOU [ECOT], namely (u_Y^1,u_Y^2) ∈Ê(Q̂_a' → a(y'), a). For this, we again use the variationally augmented sample of size b for ŷ = Q̂_a' → a(y'): (ŷ, {Ŷ_aug, j}_j=1^b ∼ N(g^a(ŷ), ε^2)). Then, we use the counterfactual flow (such as in Eq. (<ref>)) {(û_Y^1,j, û_Y^2, j)}_j=1^b = F̂^-1_a({(ŷ, Ŷ_aug, j)}_j=1^b), while the curvature has to be only evaluated at b points given by {κ_1(û_Y^1,j, û_Y^2, j)}_j=1^b. § DETAILS ON TRAINING OF §.§ Training objective Our satisfies all the requirements set by (κ), by combining several losses with different coefficients (see the overview in Fig. <ref>). Given observational data 𝒟 = {A_i, Y_i}_i=1^n drawn i.i.d. and a counterfactual query, Q_a' → a(y'), for the partial identification, minimizes the following losses: (1) Negative log-likelihood loss aims to fit the observational data distribution by minimizing the negative log-likelihood of the data, 𝒟_a = {A_i = a, Y_i}, modeled by . To prevent the overfitting, we use noise regularization <cit.>, which adds a normally distributed noise: Ỹ_i = Y_i + ξ_i; ξ_i ∼ N(0, σ^2), where σ^2 > 0 is a hyperparameter. Then, the negative log-likelihood loss for a ∈{0, 1} is ℒ_NLL(β_a, θ_a) = - 1/n∑_i=1^n logℙ̂_β_a, θ_a(Y = Ỹ_i | a), where the log-likelihood is evaluated according to the Eq. (<ref>). 
(2) Wasserstein loss prevents the posterior collapse <cit.> of the , as the sample {Y_i} is not guaranteed to cover the full latent noise space when mapped with the estimated inverse function. We thus sample b points from the latent noise space, {(U_Y^1,j, U_Y^2, j)}_j=1^b ∼Unif(0, 1)^2 and map with the forward transformation F_a(·) for a ∈{0, 1} via {(Ŷ_aug,a,j, Ŷ_a,j)}_j=1^b = F̂_a({(U_Y^1,j, U_Y^2, j)}_j=1^b ). Then, we evaluate the empirical Wasserstein distance ℒ_𝕎(β_a) = ∫_0^1 |𝔽̂_Ŷ_a^-1(q) - 𝔽̂_Y_a^-1(q) | q, where 𝔽̂_Ŷ^-1(·) and 𝔽̂_Y^-1(·) are empirical quantile functions, based on samples {A_i = a, Y_i} and {Ŷ_a,j}, respectively. (3) Counterfactual query loss aims to maximize/minimize ECOU [ECOT]: ℒ_Q(β_a', θ_a', β_a) = Softplus(∓Q̂_a' → a(y')), where ∓ changes for maximization/minimization correspondingly, Q̂_a' → a(y') is evaluated as described in the Eq. (<ref>), and Softplus(x) = log(1 + exp(x)). We use the softplus transformation to scale the counterfactual query logarithmically so that it matches the scale of the negative log-likelihood. We also block the gradients and fit only the counterfactual flow, when maximizing/minimizing ECOU [ECOT], to speed up the training. (4) Curvature loss penalizes the curvature of the level sets of the modeled counterfactual function f̂_Y(a, ·): ℒ_κ(β_a, θ_a) = 1/b∑_j=1^b κ_1(û_Y^1,j, û_Y^2, j), where {κ_1(û_Y^1,j, û_Y^2, j)}_j=1^b are defined in Sec. <ref>. All the losses are summed up to a single training objective ℒ(β_a', θ_a', β_a, θ_a) = ∑_a ∈{0, 1}[ ℒ_NLL(β_a, θ_a) + ℒ_𝕎(β_a) ] + λ_Q ℒ_Q(β_a', θ_a', β_a) + λ_κ ℒ_κ(β_a, θ_a). §.§ Training algorithm and hyperparameters Training algorithm. The training of proceeds in three stages; see the pseudocode in Algorithm <ref>. At a burn-in stage (n_B = 500 training iterations), we only fit two residual normalizing flows (one for each treatment) with ℒ_NLL and ℒ_𝕎. Then, we copy the counterfactual flow and variational augmentation parameters twice, for the task of maximization/minimization, respectively. Also, we freeze the factual flow parameters. At a query stage (n_Q = 100 training iterations), we, additionally to previous losses, enable the counterfactual query loss, λ_Q ℒ_Q. During this stage, two counterfactual flows are able to realign and start “bending” their level sets around the frozen factual level set. Ultimately, at the last curvature-query stage (n_CQ = 500 training iterations), all the parts of the loss in Eq. (<ref>) are enabled. After training is over, we report the upper and lower bounds for the counterfactual query as an output of the with exponential moving average (EMA) of the model parameters <cit.>. EMA of the model parameters allows us to reduce the variance during training and is controlled via the smoothness hyperparameter γ = 0.99. Hyperparameters. We use the Adam optimizer <cit.> with a learning rate η = 0.01 and a minibatch size of b=32 to fit our . This b=32 is also used for all the sampling routines at other losses. For the residual normalizing flows, we use t = 15 residual transformations, each with h_t = 5 units of the hidden layers. We set the relative and absolute tolerance of the fixed point iterations to 0.0001 and a maximal number of iterations to 200. For variational augmentations, we set the number of units in the hidden layer to h_g = 5, and the variance of the augmentation to ε^2 = 0.5^2. For the noise regularization, we chose σ^2 = 0.001^2. 
This configuration resulted in a good fit (with respect to the test log-likelihood) for all the experiments, and, thus, we did not tune hyperparameters. We varied the coefficients of the losses, λ_Q and λ_κ, and report the results in Sec. <ref> and Appendix <ref>.

§ EXPERIMENTS

§.§ Synthetic datasets

We conduct the experiments with data drawn from two synthetic datasets. In the first dataset, we use Y | 0 ∼ℙ(Y | 0) = N(0, 1), Y | 1 ∼ℙ(Y | 1) = N(0, 1), and, in the second, Y | 0 ∼ℙ(Y | 0) = Mixture(0.7 N(-0.5, 1.5^2) + 0.3 N(1.5, 0.5^2)), Y | 1 ∼ℙ(Y | 1) = Mixture(0.3 N(-2.5, 0.35^2) + 0.4 N(0.5, 0.75^2) + 0.3 N(2.0, 0.5^2)). Importantly, the ground-truth SCMs (and their κ) for both datasets remain unknown, and we only have access to the observational distributions. This is consistent with the task of partial counterfactual identification.

§.§ Additional results

In Fig. <ref>, we provide additional results for the synthetic experiments. For our model, we set λ_Q = 1.0 and vary λ_κ∈{0.0, 0.5, 1.0, 5.0}. Here, we see that the estimated bounds are located closer both (i) to the theoretical upper and lower bounds of BGMs and (ii) to each other.

§.§ Runtime

Table <ref> provides the duration of one training iteration of our model for the different training stages. Namely, we report the mean and standard deviation across all the experiments (720 runs in total) for the burn-in, the query, and the curvature-query stages. In our experiments, the long training times are attributed to the reverse transformations in the residual normalizing flows (as they require fixed-point iterations) and to the computation of the Hessian to evaluate the curvatures.
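Returning to the synthetic experiments above, the BGM reference values against which the estimated bounds are compared can be reproduced in a few lines. The sketch below (our illustration; it uses empirical CDFs and quantiles in place of closed-form ones, and the evaluation point y' = 0.5 is an arbitrary choice) applies the corollary for monotone BGMs to the second synthetic dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Samples from the observational distributions of the second synthetic dataset.
c0 = rng.choice(2, size=n, p=[0.7, 0.3])
y0 = np.where(c0 == 0, rng.normal(-0.5, 1.5, n), rng.normal(1.5, 0.5, n))
c1 = rng.choice(3, size=n, p=[0.3, 0.4, 0.3])
y1 = np.select([c1 == 0, c1 == 1, c1 == 2],
               [rng.normal(-2.5, 0.35, n), rng.normal(0.5, 0.75, n),
                rng.normal(2.0, 0.5, n)])

def ecou_bgm(y_prime, y_factual, y_counterfactual):
    """Point-identified ECOU of a monotone BGM: F_a^{-1}(F_a'(y')) for the
    increasing branch and F_a^{-1}(1 - F_a'(y')) for the decreasing one,
    with empirical CDFs and quantiles."""
    q = np.mean(y_factual <= y_prime)
    return np.quantile(y_counterfactual, q), np.quantile(y_counterfactual, 1.0 - q)

print(ecou_bgm(0.5, y0, y1))   # the two BGM reference values for Q_{0 -> 1}(0.5)
```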
http://arxiv.org/abs/2306.05095v1
20230608105452
Recurrent mini-outbursts and a magnetic white dwarf in the symbiotic system FN Sgr
[ "J. Magdolen", "A. Dobrotka", "M. Orio", "J. Mikołajewska", "A. Vanderburg", "B. Monard", "R. Aloisi", "P. Bezák" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.HE" ]
FN Sgr with magnetic WD Magdolen et al. Advanced Technologies Research Institute, Faculty of Materials Science and Technology in Trnava, Slovak University of Technology in Bratislava, Bottova 25, 917 24 Trnava, Slovakia Department of Astronomy, University of Wisconsin 475 N. Charter Str. Madison, WI 53706 INAF - Astronomical Observatory Padova, vicolo dell'Osservatorio 5, 35122 Padova, Italy Nicolaus Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18, 00-716 Warsaw, Poland Kleinkaroo Observatory, Calitzdorp, Western Cape, South Africa Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA We investigated the optical variability of the symbiotic binary FN Sgr, with photometric monitoring during ≃55 years and with a high-cadence light curve lasting 81 days. The data obtained in the V and I bands were reduced with standard photometric methods. The data were divided into subsamples and analyses with the Lomb-Scargle algorithm. The V and I band light curves showed a phenomenon never before observed with such recurrence in any symbiotic system, namely short outbursts, starting between orbital phase 0.3 and 0.5 and lasting about a month, with a fast rise and a slower decline, and amplitude of 0.5-1 mag. In the light curve we discovered three frequencies with sidebands. We attribute a stable frequency of 127.5 d^-1 (corresponding to an 11.3 minutes period) to the white dwarf rotation. We suggest that this detection probably implies that the white dwarf accretes through a magnetic stream, like in intermediate polars. The small outbursts may be ascribed to the stream-disc interaction. Another possibility is that they are due to localized thermonuclear burning, perhaps confined by the magnetic field, like recently inferred in intermediate polars, albeit on different timescales. We measured also a second frequency around 116.9 d^-1 (corresponding to about 137 minutes), which is much less stable and has a drift. It may be due to rocky detritus around the white dwarf, but it is more likely to be caused by an inhomogeneity in the accretion disk. Finally, there is a third frequency close to the first one that appears to correspond to the beating between the rotation and the second frequency. Recurrent mini-outbursts and a magnetic white dwarf in the symbiotic system FN Sgr J. Magdolen 1, A. Dobrotka 1, M. Orio 2,3, J. Mikołajewska 4, A. Vanderburg 6, B. Monard 5, R. Aloisi 2 and P. Bezák 1 Received / Accepted ========================================================================================================================== § INTRODUCTION Symbiotic stars are binary systems, usually comprised of a white dwarf (WD), main sequence, or a neutron star and an evolved companion, namely a red giant (S-type systems), or an asymptotic giant branch star or a Mira variable (D-type systems). The matter from the companion is accreted by the compact object via a stellar wind or Roche lobe overflow, in many cases creating an accretion disk. The orbital periods are ∼ 200-1000 days for S-type systems and ≳ 50 years for D-type systems (). FN Sgr has been mainly studied spectroscopically <cit.>. <cit.> found some evidence that the WD may be steadily burning accreted hydrogen near its surface. <cit.> presented long-term photometric data over 30 years and derived the orbital period, 568.3 days, supported also by the spectroscopic analysis. 
As we discuss below, we were able to slightly revise this period in this paper. <cit.> determined the interstellar extinction (E(B-V)=0.2±0.1), a distance of about 7^+1_-2 kpc assuming that the giant fills or almost fills the Roche lobe, and orbital inclination of 80^ o. These authors found that FN Sgr is an S-type symbiotic, composed of an M 5-type giant of 1.5 M_⊙ and radius 140 R_⊙ and a hot WD of 0.7 M_⊙, with binary separation 1.6 AU. They also concluded that the mass transfer process is most likely caused by the Roche lobe overflow of the red giant. An accretion disk is likely to be present and it is most likely the source of the spectral features. <cit.> discuss the double-temperature structure of the hot component, as already observed in other symbiotics, possibly due to a geometrically and optically thick accretion disk. <cit.> discovered a 1996 optical outburst with a 2.5 mag amplitude that lasted until early 2001. Even if outbursts are not unusual in symbiotics, the temperature and luminosity evolution of the hot component during this outburst, were not consistent with an accretion disk instability and were difficult to explain <cit.>. Since the only high-time resolution light curve lasting only 2.8 hours () is not suitable for detailed timing analysis, we propose to obtain a high cadence light curve, which was measured over 81 days, between 2015 October and December, in K2 field 7. § OBSERVATIONS The optical data in Fig. <ref> includes data from <cit.>, from the All-Sky-Automated Survey <cit.> and new photometry we obtained with the 35cm Meade RCX400 telescope at the Kleinkaroo Observatory using a SBIG ST8-XME CCD camera and V and Ic filters. The new V light curve we obtained starts at MJD[MJD = JD - 2400000] 53308, but the Ic data cover a shorter period, starting at MJD 56233. Each observation was the result of several individual exposures, calibrated (dark-subtraction and flat-fielding) and stacked. The magnitudes were derived from differential photometry, with nearby reference stars, using the single image mode of the AIP4 image processing software. The photometric accuracy of the derived magnitudes is better than 0.1 mag. The light curve was measured on 2015-10-5 (EPIC 218331937). It is shown in Fig. <ref> in red. Before flux normalization, systematic corrections were applied following <cit.> and <cit.> [The team evaluates the contribution of scattered background light and subtracts it at the pixel level. The background is a large source of photon noise. For faint stars, where the background is much larger than the star flux, the estimated brightness may be negative] namely background and barycentric correction. We transformed the flux to magnitudes using the equation: m = -2.5log(f/f_0), where f corresponds to the flux value and f_0 was set to 1 (not standardized 's response function). A vertical offset of 12 was used to compare the and V magnitudes. § LONG-TERM LIGHT CURVE The optical V light curve, shown in Fig. <ref>, with all the data we could gather at this stage, covers almost 55 years. The minima represent the inferior spectroscopic conjunction of the red giant <cit.>. We used all available photometric measurements, including the additional data shown in Fig. 2 <cit.> and revised the ephemeris of the minimum as follows: JD( MIN) = 2450260+(567.3 ± 0.3) × E, where E is the number of orbital cycles, implying that the orbital period is one day shorter than calculated earlier, 567.3 days. The blue dashed lines in Fig. 
<ref> show the minima with a reference minimum determined using a 6th order polynomial (indicated as a blue thick line). The corresponding superior conjunctions are indicated by red dashed lines. In Fig. <ref> and in the selected interval in Fig. <ref> it is clear that small amplitude flares, of amplitude 0.5-1 mag, which we will call here mini-outbursts to differentiate them from major ones like the 1997-1998 event, occurred between 2001 and 2019 and seem to have ceased in 2020-2021. The ASAS light curve <cit.> indicates that the mini-outbursts occurred since 2001. Fig. <ref> shows that these outbursts have a sharp rise (∼ 10 days) and slower decline. They also seem to be orbitally phase-locked. The rise usually starts near phase 0.3 and never after phase 0.5. The bottom panel of Fig. <ref> shows that the peak wavelength during the mini-outbursts was shifted towards higher energies since V-I clearly decreases in magnitude in the flares. Since 2019 the mini-outburst activity has ceased, and the light curves in addition to the deep eclipses show secondary minima, which are most likely due to ellipsoidal variability. This is particularly evident in the I light curve, in which the red giant significantly contributes to the continuum and confirms the conclusion by <cit.> that the red giant fills, or almost fills, its Roche lobe, as these authors inferred from the analysis of the shape and duration of the well-defined eclipses during the large outburst in 1996-2001. If the Roche lobe is filled, a persistent accretion disk should be present in this system. § TIMING ANALYSIS OF THE KEPLER LIGHT CURVE Our light curve was observed over 81 days with a cadence of almost 1 minute. Such a high-quality and long light curve is ideal for period analysis. The 81 days Kepler run, shown in detail in Fig. <ref>, occurred during the decay from a mini-outburst, as shown in Fig. <ref> and <ref>. Thus, in addition to orbital variability a decay after the flare is observed, with a few outlier points due to cosmic rays. To eliminate these effects, first the Hampel filter[Python hampel library <https://github.com/MichaelisTrofficus/hampel_filter>.] was used to detect and remove outliers. By specifying the size of the window as 61 (30 points on each side + 1 central point), the central point inside this window was replaced by the window's median value, if it differed more than 5σ from the window's median. Next, in order to detrend the light curve, we used a moving window median[SciPy Python's package <https://scipy.org/>.] with window size 201 (100 points each side + 1 central point) because a polynomial detrend performed poorly for such a long observation. The window slides over the entire light curve, point by point, where the central point is replaced with a median value calculated over the window. Subsequently, we applied the Lomb-Scargle (LS) method by <cit.> to the processed light curve[Astropy Python's package <https://docs.astropy.org/en/stable/index.html> was calculated with normalization set to "standard".]. The resulting periodogram shows several peaks above the 90-% confidence level (Fig. <ref>). We show in Figure <ref> that zooming at the most significant frequencies, namely 10.5 d^-1 (f_0), 116.9 d^-1 (f_1) and 127.5 d^-1 (f_2) we observe a multi-peak pattern for f_0 and f_1. Such a drift in frequency is typical for a quasi-periodic signal, therefore these frequencies may not be stable. On the contrary, f_2 exhibits only a single dominant peak, suggesting a stable frequency. 
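To make the procedure concrete, the following self-contained toy sketch (synthetic data only, not the actual K2 photometry; the amplitudes, noise level, and frequency grid are arbitrary choices of ours) shows how a Lomb-Scargle periodogram recovers such a triplet of frequencies and their beat relation:

```python
import numpy as np
from astropy.timeseries import LombScargle

# Toy stand-in for the detrended K2 light curve: 81 d at ~1 min cadence, with
# sinusoids at the three frequencies discussed above (in d^-1) plus white noise.
rng = np.random.default_rng(1)
t = np.arange(0.0, 81.0, 1.0 / 1440.0)          # time in days
f0, f1, f2 = 10.6, 116.9, 127.5
y = (0.004 * np.sin(2 * np.pi * f0 * t)
     + 0.003 * np.sin(2 * np.pi * f1 * t)
     + 0.003 * np.sin(2 * np.pi * f2 * t)
     + 0.010 * rng.standard_normal(t.size))

freq, power = LombScargle(t, y, normalization="standard").autopower(
    minimum_frequency=1.0, maximum_frequency=300.0)
for f in (f0, f1, f2):
    near = np.abs(freq - f) < 0.5
    print(f"input {f:6.1f} d^-1 -> peak at {freq[near][np.argmax(power[near])]:.2f} d^-1")
print("f2 - f1 =", round(f2 - f1, 2), "d^-1 (close to f0, i.e. the beat)")
```

With the real photometry, the detrended residuals of the light curve would simply take the place of the toy signal y.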
To determine the origin and the exact behaviour of these frequencies, we split the corrected light curve into 20 portions: 2 equally large parts between days 0 and 10, and 18 equally large parts observed between days 10 and 80. This selection was made in order to separate out the U-shaped trend at the beginning of the light curve. Subsequently, LS periodograms were created for each portion. First, we inspected the f_0, f_1, and f_2 frequencies. By fitting a Gaussian (or multiple Gaussians where needed), we estimated the frequencies and their uncertainties, given in Table <ref>. The missing values in the f_0 column indicate that the peak confidence fell below 90%. Two frequency values are listed instead when two close peaks were present. Then, we calculated the mean (over the whole light curve) of the two main frequencies f_1 and f_2. By subtracting them, we found the difference Δ(f) ≈ 10.66 d^-1, which is very close to the f_0 peak. The LS periodogram for each portion of the light curve, with mean frequencies indicated by vertical lines, is shown for f_1 and f_2 in Figure <ref>, and the Δ value for f_0 is also indicated. By examining Table <ref> and Figure <ref>, we observe the trends of the frequencies. The lower frequency f_0 exhibits a visible drift around the Δ value. Moreover, the frequency disappeared at certain times, as its power fell below the 90% confidence level. A similar behaviour is observed for f_1, where the peaks drift around the mean value. On the other hand, there is an obvious decline in power in the second half of the light curve, although it is still above the selected confidence level. Only the f_2 frequency appears to be stable: its drift around the mean is minimal, and so is the decrease in power. In Sect. <ref> we discuss the interpretation of the results in more detail. To confirm that the detrending using a median filter did not suppress any hidden variability within the light curve, we performed a polynomial detrend for each light curve portion. A polynomial fitted the short sub-samples much better than it did the whole light curve. Comparing the results with the previous ones, no significant differences were noted. By inspecting the higher frequencies around 250 d^-1, we realize that they may represent higher harmonics. To evaluate this premise, we calculated the higher harmonics as 2 f_1 ≈ 234 d^-1 and 2 f_2 ≈ 255 d^-1. We subtracted the Δ value from 2 f_2 (or added it to 2 f_1) and obtained a value of 244.5 d^-1. By plotting the frequencies together with the calculated values (as the red vertical lines, see Figure <ref>), we observe that these values match the peaks, supporting our assumption. Fig. <ref> shows changes in modulation during the observation. The empty blocks for f_0 correspond to light curve portions in which a double peak in the LS periodogram was present and an exact measurement was not possible, or in which the confidence of the peak fell below 90% (see Table <ref>).

§ DISCUSSION

The mini-outbursts occurred semi-regularly during the orbital period, for more than 16 years. These flares were peculiar in that they occurred once per orbital period, at about the same orbital phase, and kept on recurring. This phenomenon is quite different from other bursting activity of larger amplitude observed in symbiotics <cit.>, which does not occur with such clear periodicity.
We note that only AG Dra has flares with characteristics somewhat similar to those of FN Sgr (), and in that case they recur with a time scale that is close to the pulsation of the red giant. However, in FN Sgr we do not have high-quality radial velocity data to detect such a pulsation. The timescale and amplitude are similar to those of the first flaring event observed in the symbiotic system Z And, with 1.5 mag amplitude, a sharp rise and a decay over ≈300 days <cit.>. However, in Z And and other systems with repeated flares (e.g. AX Per, CI Cyg, BF Cyg) the recurrence times were always shorter than the orbital period, usually by about 10-20% <cit.>. A new outburst in Z And had a larger amplitude (2 mag) and lasted longer; in fact, there were three separate flare episodes over almost 3 years. <cit.> attributed this outburst to a “combination nova”, namely nuclear shell burning triggered by a disk instability <cit.>. An interesting fact is that a secondary period of ≃355 days, found in the radial velocity data of the giant in Z And and attributed to the rotation of the giant <cit.>, was close to the recurrence semi-period of the outburst. A combination nova has also been invoked to explain the unusual recent outburst of a CV-like system, V1047 Cen <cit.>. We also note that in Z And a stable oscillation with a 28 min period was detected before and during the flare, which <cit.> attributed to the rotation of a magnetic WD (with B≥ 10^5 Gauss). The high cadence Kepler light curve of FN Sgr over 81 days at the end of 2015 offers new clues and new “puzzles”. We found a complex periodogram with a low frequency f_0 and a group of higher frequencies (f_1, f_2 and other sidebands), resembling the short-orbital-period WD binaries classified as intermediate polars (IPs), where the lower frequency is the orbital frequency of the binary, and the higher frequencies are mainly due to the WD spin and the beating between the spin and the orbital frequency (see e.g. ). Other sidebands with higher harmonics can also be present in IPs. By analogy, we suggest that the f_2 frequency, which is stable, is likely to be due to the WD rotation. The WD spin frequency in IPs is observable because the polar caps are heated by accretion funneled to the poles by the magnetic field. Even if an accretion disk is formed, it is disrupted at the magnetospheric radius. <cit.> analysed the high-time-resolution photometry of FN Sgr taken in 1998 and detected a possible variability in the frequency range of 25.5-900 d^-1. Even if the uncertainty is very large, our frequency f_1 agrees with this detection, indicating that the spin-related variability was observable also in 1998. While in IPs there is often a third frequency, corresponding to the beat between the orbital and the rotational period, for FN Sgr the f_1 frequency is the beat between the proposed rotation frequency and the f_0 frequency, which is generated by a structure that is not synchronized with the orbital motion. It is likely to be around the WD and to be illuminated by the WD as it rotates. Both f_0 and f_1 are unstable frequencies, and this is consistent with f_1 being the beat, because if f_0 is unstable, the beating must be unstable too. The instability, or quasi-periodicity, of f_0 is a crucial characteristic for constraining a physical model. In the periodograms of IPs, either the WD spin frequency or the beat frequency is dominant (). The beat is usually detected when there is an accretion disk, even if it is disrupted at a certain radius, namely in IPs and not in polars.
In fact, the WD rotation in polars, which accrete directly via the magnetic stream, tends to be synchronized. We propose two possible explanations for the f_0 frequency. Both imply the presence of a “structure” or a denser element in the accretion disk. If this has negligible mass compared with the WD, and it orbits a WD of 0.7 M_ <cit.> with Keplerian angular velocity derived from the period associated with f_0 (≈135.5 minutes), it must be localized in the very inner disk at ≃0.76 R_⊙ from the center. One scenario is that the structure in the disk is not fixed, but is due to rocky detritus around the WD, captured in the accretion disk. This would of course explain why the period is not perfectly stable. Rocky detritus so far has been detected only in 1 out of 3000 WDs, so it is a fairly rare phenomenon <cit.>. We also examined the possibility of a “dark spot” like in <cit.>, but that event was recurrent with the period of the WD rotation. Another possibility is that there is a vertical thickening of the accretion disk, causing variable irradiation and inhomogeneities in the disk itself. In dwarf novae, the observed quasi-periodic oscillations (QPOs) may be caused by vertical thickening of the disc that moves as a travelling wave near the inner edge of the disk, alternately obscuring and reflecting radiation from the disk <cit.>. Applied to FN Sgr, this model means that irradiation by the rotating WD would cause the QPO with beat frequency f_1. However, the QPOs in dwarf novae have semi-periods of hundreds of seconds <cit.> and in FN Sgr the much longer period would imply inhomogeneities considerably farther from the disk centre compared to dwarf novae. In the model by <cit.>, winding up and reconnection of magnetic field lines cause inhomogeneities. The magnetic field responsible for the phenomenon may be either that of the WD or that of an equatorial belt on the WD surface (low inertia magnetic accretor model, ). Since the inner disc radius where the inhomogeneities form depends on the magnetic field strength, the WD in FN Sgr would have a stronger magnetic field than dwarf novae, consistently with magnetically channeled accretion that allows detection of the rotation period. §.§ The possibility of rocky bodies around the WD Recent photometry has revealed that WDs have periodic optical variations very similar to the detected 2.2 h periodicity found in FN Sgr. <cit.> analysed light curves of 14 hot WDs and detected periodic variations with periodicities from 2 hours to 10 days. Possible explanations include transits of objects of dimensions of the order of ∼ 50-200 km. The periodicity may arise from UV metal-line opacity, due to the accretion of rocky material as debris of former planetary systems, a phenomenon observed in many WDs <cit.>. WD 2359-434, for instance, shows variability with a period of 2.7 h <cit.>. Some characteristics so far observed for rocky detritus differ significantly from what we detected in FN Sgr: * The quasi-periodic signals change in shape and amplitude quite significantly over the course of tens of days, while in FN Sgr the changes in amplitude, and phase observed over 81 days were much smaller; * The shape of the dip is very sharp and a-symmetric. We also note that in WD 1145+017 b, for instance, the small exoplanet orbiting the WD causes a very sharp feature in the lightcurve <cit.>. * There are multiple periodic signals with similar periods. * Typical timescales of the observed features in the lightcurves are around 1 minute only. 
Thus, the dips in luminosity caused by the detritus seem to be always sharp, short in duration, and variable, and often there are multiple periods. Looking at these characteristics, we conclude that our observations do not match the rocky detritus scenario, which thus remains a relatively remote possibility. §.§ Interpretation as inhomogeneity in the accretion disk Disk inhomogeneities can be generated by the stream disk overflow, causing vertical disk thickening like in GU Mus <cit.>. These inhomogeneities may liberate blobs of matter rotating with Keplerian angular velocity at the radius where the vertical thickening occurs. However, in GU Mus the stream overflow generates the thickening at the circularisation radius[The circularisation radius is the distance from the centre where the angular momentum from the L_1 point equals the local specific angular momentum of a Keplerian disc.], which in FN Sgr is approximately 35.2 R_⊙, while we inferred a distance of only 0.76 R_⊙ from the center. <cit.> estimated the minimum distance of the stream from the centre to be r_ min = 0.0488 a q^-0.464, where a is the binary separation and q is the binary mass ratio (donor mass divided by the WD mass). Thus, the minimum distance is 11.8 R_⊙ in FN Sgr. These estimates are based on the assumption of a Keplerian disc, but <cit.> suggested that the disk around the WD is geometrically thick. This may mean that in FN Sgr there is a sub-Keplerian, advection-dominated disk, meaning that the radial velocity of the matter is much larger than in a thin disk <cit.>. The frequency f_0 in thick disks is defined as η/(2π) Ω_ K, where η is the sub-Keplerian factor, between approximately 0.2 and 1 (middle panel of Fig. 1 in <cit.>). Thus, the structure or inhomogeneity does not need to be located as far from the WD as in the Keplerian, thin-disk case. While the spin and beat modulations have almost sinusoidal shapes, the QPOs with frequency f_0 have a steep rise and slow dissipation (Fig. <ref>). Such asymmetry can be understood as rapid generation of an inhomogeneity that slowly dissipates while orbiting the WD. We note that, while f_2 is clearly detected in almost all sub-samples, the beating frequency f_1 is strong only in the first half of the light curve (Fig. <ref>). If the beat is caused by a body rotating around the WD, the weak power implies that this body is not well irradiated by the rotating WD in the second half of the light curve, or that the irradiated surface is not well visible anymore, supporting the idea that this body forms and dissipates slowly during the rotation around the WD. Since the duration of the light curve is 81 days, the angle of view changed by 0.1428 of the orbital cycle (51.4 degrees). The clear visibility of the f_1 frequency during the first half of the light curve suggests that a change of 0.0714 of the orbital cycle (25.7 degrees) modifies the angle of view in such a way that the beating region no longer shows the irradiated part clearly. Our observation started approximately 29 days before the superior conjunction of the red giant (Fig. <ref>), which is 0.0511 of the orbital cycle (approximately 18.4 degrees). After half of the observation (25.7 degrees), f_1 becomes weaker. Fig. <ref> shows a model of this configuration. The line with angle -18.4^∘ represents the viewing angle at the start of the observation, while 7.3^∘ corresponds to the half of the observation where f_1 becomes significantly weaker. 
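As a cross-check of the numbers used here and in the previous subsection, the short Python sketch below re-derives the Keplerian radius corresponding to the f_0 period around a 0.7 M_⊙ WD and the viewing-angle bookkeeping of the 81-day Kepler window. The script and its helper names are ours and only illustrate the arithmetic quoted in the text.

```python
# Minimal check of the numbers quoted above (assumed inputs: 0.7 M_sun WD,
# P(f_0) = 135.5 min, orbital period 567.3 d, 81 d of Kepler data starting
# 29 d before superior conjunction). CGS units throughout.
import math

G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_sun = 1.989e33      # solar mass [g]
R_sun = 6.957e10      # solar radius [cm]

# Keplerian radius of a test particle orbiting a 0.7 M_sun WD with P = 135.5 min
m_wd = 0.7 * M_sun
P0 = 135.5 * 60.0     # s
r_kep = (G * m_wd * P0**2 / (4.0 * math.pi**2))**(1.0 / 3.0)
print(f"Keplerian radius for P(f_0): {r_kep / R_sun:.2f} R_sun")  # ~0.77, consistent with the ~0.76 quoted

# Viewing-angle bookkeeping for the 81-d light curve (orbital period 567.3 d)
P_orb = 567.3
span_deg = 81.0 / P_orb * 360.0        # ~51.4 deg swept during the observation
start_deg = -29.0 / P_orb * 360.0      # ~-18.4 deg (29 d before superior conjunction)
mid_deg = start_deg + span_deg / 2.0   # ~7.3 deg, where f_1 weakens
print(f"swept: {span_deg:.1f} deg, start: {start_deg:.1f} deg, midpoint: {mid_deg:.1f} deg")
```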
Since at the viewing angle of 7.3^∘ the f_1 variability starts to decline, if we suppose that there are inhomogeneous blobs visible up to 90^∘ from the viewing direction, the small circle in Fig. <ref> shows the region where these blobs are generated. After sweeping this region, the blobs are not visible enough, or they dissipate slowly and become too small to generate a strong beat. Following <cit.>, the region of blob generation can be associated with the stream disk overflow or simply with the trajectory from the L_1 point. Stream disk overflow requires a thin disk geometry; if the disk is geometrically thick, there is not an actual overflow of the accretion stream, but from the L_1 point the stream may impinge on the disk, penetrate it and generate blobs at its inner edge. §.§ The mini-outbursts Since the outbursts seem to be almost phase-locked, episodic mass accretion would be a plausible interpretation. An eccentric orbit could cause mass accretion events when the secondary approaches the primary WD. However, <cit.> concluded that the orbit is almost circular. An alternative based on the phase-locked behaviour is that a bright region appears at specific angles of view. Fig. <ref> shows the viewing angles at phases 0.3 and 0.5. The mini-outburst ends approximately at phase 0.5, which is very close to the viewing angle where the f_1 frequency starts to disappear. This implies a related physical origin of the two events, and the mini-outburst should be connected to the stream disc overflow or to the stream-disc impact region. Even if the stream-disc scenario seems to be a plausible explanation, we investigated several other possibilities. An explanation in terms of a thermonuclear runaway (a recurrent “non-ejecting nova” as in <cit.>) implies an accretion rate of order 10^-6 M_⊙ yr^-1 and a recurrence time that would decrease from ≃10 years to ≃2 years for increasing WD mass from 0.65 to 1 M_⊙ <cit.>. However, such an outburst would cause an increase by ≃4 magnitudes in V. Another possibility is that the outbursts are related to the magnetic field. In the old nova and IP GK Per, with an orbital period of almost 2 days and a subgiant K2 secondary, dwarf-nova-like outbursts with a recurrence time of 400±40 days and amplitude of 1-3 mag have been observed for decades <cit.>. Although the exact mechanism powering these outbursts is a matter of debate, it seems certain that the maximum temperature of the disk and therefore its inner radius decrease in outburst, and more matter is suddenly accreted <cit.>. The time scale of the FN Sgr mini-flares is similar, although the much higher luminosity of the system and the larger orbit make it difficult to draw a comparison. Assuming that the WD is strongly magnetized (B≥10^5 Gauss), as our Kepler timing analysis indicates, there are two other alternative explanations. The magnetosphere may cause “magnetically gated accretion” <cit.>: unstable, magnetically regulated accretion causing quasi-periodic bursts. In this model, the disc material builds up around the magnetospheric boundary and, on reaching a critical amount, the matter accretes onto the WD, causing an optical flare. <cit.> studied this phenomenon in the cataclysmic variable MV Lyr, where quasi-periodic bursts of ∼ 30 min appeared every ∼ 2 hours. The main question is how the timescale would vary in a symbiotic, a binary with a much longer orbital period. 
The recurrence time of these bursts is typically close to the viscous time-scale t_ visc in the region where the instability occurs (the inner disc); t_ visc = r_ in^2/ν, where r_ in is the inner disc radius and ν is the viscosity. Assuming that the magnetospheric radius is 0.76 R_⊙ (derived in the previous section), and expressing the viscosity in terms of the dimensionless α parameter following <cit.>, the viscous time becomes t_ visc = α^-1 (h/r)^-2 (r_ in^3/(G m_ WD))^1/2, where h/r is the ratio of the scale height h of the disc to the radial distance r, G is the gravitational constant and m_ WD is the WD mass. The mini-outbursts seem to be almost phase-locked, therefore t_ visc would have to be close to the orbital period of 567.3 days. If the disc is geometrically thick, with h/r = 0.1, the observed recurrence time of FN Sgr is obtained only with α = 0.0026. However, we know that a realistic value of 0.1 for α in advective disks <cit.> yields a much larger magnetospheric radius, namely 8.7 R_⊙. Moreover, the h/r ratio may even be as high as 0.4 (e.g. <cit.>), shortening t_ visc considerably. On the basis of these considerations, we rule this model out. An interesting possibility that we would like to consider is that of a localized thermonuclear runaway (LTNR), namely a thermonuclear runaway that does not spread all over the surface, like helium burning on neutron stars. <cit.> proposed that dwarf-nova-like, “vulcanic” eruptions of small amplitude occur on the surface of massive WDs and may recur on time scales of months, due to LTNRs. In Shara's model, the eruption causes a sort of volcano to reach the WD's surface, without mass ejection. <cit.> modelled the LTNR with the formalism used for helium burning on neutron stars and calculated that temperature differences as small as 10^4-10^5 K at the bottom of the envelope accreted by a WD can lead to a LTNR that remains confined and is extinguished before spreading to the whole surface. If accretion funneled by the magnetic field to the WD poles produces such a temperature gradient, a LTNR may occur. The higher the WD mass and the larger the mass accretion rate, the more likely a LTNR would occur. The LTNR may remain confined only for a certain time: if it is not extinguished, it would later spread to the whole surface, causing a classical nova with a slow rise (a day or more). More recently, the LTNR scenario has been revisited. A LTNR model involving the magnetic field has been studied for highly magnetized WDs, B > 10^6 G, to explain very small amplitude flashes recently observed in CVs, called micronovae by the authors <cit.>. Such flashes cause luminosity increases of a few to ≃30 times, occurring within hours and recurring on timescales of months. The strength of the magnetic field that allows accretion to be confined to a region at the poles is estimated assuming that the accretion disk is truncated by the magnetosphere, with the magnetic pressure balancing the ram pressure of the accretion flow. This defines a magnetospheric radius (see e.g. <cit.>) r_ m = 9.8 × 10^8 ( ṁ_ acc/10^15 g s^-1)^-2/7 ( m_ 1/ M_⊙)^-1/7 ( μ/10^30 G cm^3)^4/7 cm, where ṁ_ acc is the mass accretion rate, m_ 1 is the WD mass and μ = B R_ WD^3 is the magnetic moment of the WD. Fig. <ref> shows the B values calculated for various mass accretion rates. As truncation radius r_ m we assumed 0.76 R_⊙. The likely radius of the 0.7 M_⊙ WD may be R_1 = 0.013 R_⊙ (from the mass-radius relation by <cit.>). 
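The two order-of-magnitude estimates above are easy to reproduce numerically. The following sketch (our own illustration, not code from any of the cited works) solves the viscous-time relation for α and inverts the magnetospheric-radius formula for B = μ/R_ WD^3 at a few assumed accretion rates; the ṁ grid and the helper names are ours.

```python
# Numerical sketch of the two estimates above (assumptions: r_in = 0.76 R_sun,
# m_WD = 0.7 M_sun, recurrence time = 567.3 d, R_WD = 0.013 or 0.02 R_sun).
import math

G, M_sun, R_sun = 6.674e-8, 1.989e33, 6.957e10    # CGS
m_wd  = 0.7 * M_sun
r_in  = 0.76 * R_sun
t_rec = 567.3 * 86400.0                           # target recurrence time [s]

# 1) viscous time t_visc = alpha^-1 (h/r)^-2 sqrt(r_in^3 / (G m_WD));
#    solve for the alpha that gives t_visc = t_rec when h/r = 0.1
t_dyn = math.sqrt(r_in**3 / (G * m_wd))           # Keplerian (dynamical) time at r_in
alpha = t_dyn / (0.1**2 * t_rec)
print(f"alpha needed for a 567.3 d recurrence: {alpha:.4f}")   # ~0.0026, as in the text

# 2) magnetospheric radius r_m = 9.8e8 (mdot/1e15)^(-2/7) (m1/Msun)^(-1/7) (mu/1e30)^(4/7) cm,
#    inverted for the field B = mu / R_WD^3 that places r_m at 0.76 R_sun (m1 = 0.7 Msun)
def b_field(mdot_msun_yr, r_wd_rsun):
    mdot = mdot_msun_yr * M_sun / 3.156e7         # accretion rate [g/s]
    mu = 1e30 * (r_in / (9.8e8 * (mdot / 1e15)**(-2.0 / 7.0) * 0.7**(-1.0 / 7.0)))**(7.0 / 4.0)
    return mu / (r_wd_rsun * R_sun)**3            # Gauss

for mdot in (1e-10, 1e-9, 1e-8):                  # assumed accretion-rate grid [Msun/yr]
    print(f"mdot = {mdot:.0e} Msun/yr -> B ~ {b_field(mdot, 0.013):.1e} G (R = 0.013 Rsun), "
          f"{b_field(mdot, 0.02):.1e} G (R = 0.02 Rsun)")
```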
The WD radius may be larger if the WD atmosphere is inflated, but this usually occurs because of hydrogen burning all over the surface <cit.>. However, we note two issues: a) there is an apparent typo in <cit.>, since from the values in their Table 5 a hot, hydrogen-burning WD with T_ eff ≃ 150,000-180,000 K and L ≃ 1000-2000 L_⊙ should have R ≃ 0.02 R_⊙ instead of 0.2 R_⊙, and b) if nuclear burning is localized, the large UV luminosity should be ascribed to the accretion disk and not to the WD. In any case, we take into account the possibility of a larger WD radius in Fig. <ref>. Allowing for a radius R ≃ 0.02 R_⊙, the micronova scenario is acceptable with a magnetic field higher than a few megaGauss with ṁ of a few 10^-10 M_⊙/yr. However, for values of ṁ that are closer to what has been inferred in many symbiotics, the magnetic field should be at least of the order of hundreds of megaGauss, which is unusual for WDs. If the matter orbits more slowly than with the Keplerian velocity because of the geometrically thick disk, the inner disk radius can be smaller, but this would affect the result only slightly, as shown by the dashed line in the Figure. A detection in supersoft X-rays would be extremely interesting to understand whether the WD atmospheric temperature and bolometric luminosity are as high as derived by <cit.>, in which case the burning is unlikely to be localized. Very constraining upper limits on the bolometric luminosity, however, may be difficult to obtain given the distance and the possible large intrinsic absorption in the symbiotic nebula. § SUMMARY AND CONCLUSIONS The V and I light curves observed over decades revealed that for many years, from before 2001 until 2019, the symbiotic underwent recurrent optical flares with amplitude 0.5-1 mag, at orbital phase 0.3-0.5, with a sharp rise over 10 days and a decay during many weeks. The amplitude and timescales of the outbursts cannot be explained with a disk instability triggered by a burst of mass transfer, nor with a non-ejecting thermonuclear flash, nor with a “combination nova”. Because of the detection of a likely rotation period in the light curve, we found that the WD is likely to have a strong magnetic field, ≥ 10^5 Gauss. We examined two alternatives that have recently been proposed for small amplitude flares of magnetic cataclysmic variables, and how they may be relevant for a symbiotic system with a magnetic WD. A thermonuclear runaway in a “non-ejecting nova” would occur with a higher luminosity amplitude than observed in FN Sgr. We found that localized thermonuclear burning confined by the magnetic field may explain the observed phenomena with B ≥ 10^6 Gauss and accretion rate ≥ 10^-10 M_⊙ yr^-1. However, localized burning is difficult to reconcile with the high bolometric luminosity and temperature of the ionizing source, inferred from the UV range and from the flux of the He II λ 4686 emission line by <cit.>. If the WD is indeed the ionizing source, as suggested by the above authors, it is so hot and luminous that thermonuclear burning is ongoing all over the surface. An interesting comparison is the one with the periodic outbursts of the IP and old nova GK Per, although the difference in luminosity and spatial dimensions makes a rigorous comparison very difficult. The most promising interpretation is based on the phase-locked appearance of the mini-outbursts. 
It points to the visibility of the stream disc overflow, which also coincides with the conclusions derived from our detailed timing analysis. The timing analysis revealed three dominant frequencies with sidebands. The lowest frequency f_0 is unstable and probably represents an inhomogeneity generated at the inner disk edge. The higher frequency f_2 (11.3 min periodicity) is very stable and we attribute it to a magnetic rotating WD. We also measured a frequency f_1, corresponding to the beat between f_0 and f_2. The f_0 and f_2 frequencies are present during the whole light curve, suggesting permanent visibility of the corresponding sources. The stability of f_2 is consistent with the idea that it is the frequency generated by the rotating WD. We interpret f_0 as a frequency generated by an inner disk inhomogeneity, indicating the presence of blobs or rigid bodies during the whole orbital period. The light curve folded with the period corresponding to f_0 implies that these may be blobs that form and dissipate slowly. The beat frequency f_1 is strong only during the first half of the observation. This is consistent with the region of strongest irradiation of the blobs being associated with the trajectory of an accretion stream from the L_1 point, impinging on the geometrically thick disk of FN Sgr, penetrating it and generating inhomogeneity blobs at the inner disk edge. We examined an alternative explanation for the f_0 frequency, namely rocky detritus around the WD, but the characteristics of the modulation are very different from what has so far been observed in nearby WDs. § ACKNOWLEDGEMENT JMag, AD, and PB were supported by the European Regional Development Fund, project No. ITMS2014+: 313011W085. JMik was supported by the Polish National Science Centre (NCN) grant OPUS 2017/27/B/ST9/01940.
http://arxiv.org/abs/2306.17834v2
20230630175419
The Detected Stochastic Gravitational Waves and Subsolar-Mass Primordial Black Holes
[ "Keisuke Inomata", "Kazunori Kohri", "Takahiro Terada" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc", "hep-ph" ]
KEK-TH-2535, KEK-Cosmo-0317, KEK-QUP-2023-0016, CTPU-PTC-23-28 Kavli Institute for Cosmological Physics, The University of Chicago, Chicago, IL 60637, USA Division of Science, National Astronomical Observatory of Japan (NAOJ), and SOKENDAI, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, JAPAN Theory Center, IPNS, and QUP (WPI), KEK, 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan Particle Theory and Cosmology Group, Center for Theoretical Physics of the Universe, Institute for Basic Science (IBS), Daejeon, 34126, Korea Multiple pulsar timing array (PTA) collaborations recently announced the evidence of common-spectral processes caused by gravitational waves (GWs). These can be the stochastic GW background and its origin may be astrophysical and/or cosmological. We interpret it as the GWs induced by the primordial curvature perturbations and discuss their implications on primordial black holes (PBHs). We show that the newly released data suggest PBHs much lighter than the Sun (𝒪(10^-4) M_⊙) in contrast to what was expected from the previous PTA data releases. The Detected Stochastic Gravitational Waves and Subsolar-Mass Primordial Black Holes Takahiro Terada July 31, 2023 ======================================================================================== § INTRODUCTION Recently, the evidence of the Hellings-Downs curve <cit.>, a smoking-gun signal of the isotropic stochastic gravitational waves (GWs) representing a particular pattern of angular correlations, has been reported by pulsar timing array (PTA) experiments, in particular, by NANOGrav <cit.> and by EPTA and InPTA <cit.> (see also the results of PPTA <cit.> and CPTA <cit.>). The GWs are consistent with the stochastic GW background (SGWB) as there have not been strong hints for continuous GW signals or anisotropy <cit.>. A natural astrophysical interpretation of the origin of such SGWB is the superposed GW signals from binary mergers of supermassive black holes as discussed in the above PTA papers. Alternatively, the observed GWs may have a cosmological origin. This could be a great observational window to study the early-Universe cosmology. The common spectrum process was already observed in the NANOGrav 12.5-year data <cit.> and IPTA data release 2 <cit.>, though the evidence of the Hellings-Downs curve was not observed at that time. Since then, the cosmological GW sources for the PTA experiments have been enthusiastically studied, e.g., in the context of topological defects such as cosmic strings and domain walls <cit.>, cosmological first-order phase transitions <cit.>, scalar-induced GWs associated with primordial black holes (PBHs) <cit.>, and inflationary GWs <cit.> (see also Ref. <cit.> for a comprehensive study on the cosmological sources). After the recent announcements, a variety of explanations of the SGWB was proposed: cosmic strings <cit.>, domain walls <cit.>, a first-order phase transition <cit.>, inflation (first-order GWs) <cit.>, second-order (scalar-induced) GWs and PBHs <cit.>, parametric resonance in the early dark energy model <cit.>, turbulence due to the primordial magnetic field <cit.>, axion-like particles and gravitational atoms <cit.>. In addition, the importance of softening of the equation-of-state by the QCD crossover was pointed out <cit.>, the effects of the SGWB on neutrino oscillations were discussed <cit.>, and probing dark matter density was discussed in the context of supermassive binary BH mergers <cit.>. 
In this work, we discuss the implications for PBHs of the recent observation of the stochastic GWs by the PTA experiments. PBHs have been attracting a lot of attention because of their potential to explain dark matter and/or the binary BH merger signals detected by the LIGO-Virgo-KAGRA collaborations <cit.> (see Refs. <cit.> for reviews). PBHs can be produced when large density perturbations enter the Hubble horizon. In general, large density perturbations can produce not only PBHs but also GWs through the nonlinear interaction <cit.>. These scalar-induced GWs are an important probe of PBHs <cit.> (see also Refs. <cit.> and references therein). In particular, after the first detection of GWs from the merger of ∼ 30 M_⊙ BHs, the connection between the PTA experiments and the scalar-induced GW signals associated with 𝒪(10) M_⊙ PBHs has been focused on in Refs. <cit.>. § THE GRAVITATIONAL-WAVE SIGNALS AND INDUCED GRAVITATIONAL WAVES The SGWB detected by the PTA collaborations may be explained by cosmological (New Physics) GWs. The New Physics interpretations of the SGWB were studied by the NANOGrav collaboration <cit.> and by the EPTA/InPTA collaboration <cit.>. In particular, they studied the interpretation of the data by the induced GWs. We first review their results. The cosmological abundance of the GWs is conventionally parametrized by Ω_GW(f) = ρ_GW(f)/ρ_total, where ρ_GW(f) is related to the total energy density of the GWs by ρ_GW = ∫ dln f ρ_GW(f). The GW spectrum in the PTA experiments is parametrized as <cit.> Ω_GW = (2π^2 f_*^2/(3 H_0^2)) A_GWB^2 (f/f_*)^(5-γ), where H_0 is the Hubble parameter, A_GWB the amplitude of the GWs, f_* the pivot-scale frequency often adopted as f_* = f_yr = (1 yr)^-1 ≈ 32 nHz, and γ the power index of the spectrum. Fig. 1 (b) of Ref. <cit.> shows that the fitting results with f_* = 1 yr^-1 and f_* = 0.1 yr^-1 are most consistent with each other at γ ≈ 3. This implies that the power-law ansatz is consistent with the power-law index 5-γ ≈ 2. There are several ways to interpret this power law in terms of the induced GWs. * The power-law GWs can be explained by power-law curvature perturbations 𝒫_ζ(k) ∝ k in the relevant range of frequencies. With such 𝒫_ζ(k), the induced GWs approximately behave as Ω_GW ∼ 𝒫_ζ^2 ∝ f^2. This requires a nontrivial condition on the underlying inflation model so that 𝒫_ζ ∝ k. Since this is not a generic consequence of inflation models, we do not focus on this case in this letter. A recent discussion based on inflation models can be found in Ref. <cit.>. * The spectral slope can be interpreted as the so-called universal infrared (IR) tail when the curvature perturbations are sufficiently steep. For the induced GWs produced during the radiation-dominated (RD) era, the universal IR tail has the Ω_GW ∝ f^3 slope <cit.> for generic underlying curvature perturbations, up to logarithmic corrections <cit.>. However, this slope becomes f^2 under some conditions: * When the induced GWs are produced in a cosmological era with the equation-of-state parameter w, the power-law index of the universal IR tail was worked out to be Ω_GW ∝ f^(3 - 2|(1-3w)/(1+3w)|) <cit.>. In particular, the power is 2, as desired, if w = 1 or 1/9. For example, the w=1 case is realized when the Universe is dominated by the kinetic energy of a scalar field. In the remainder of this letter, we focus on the standard RD era, so we do not consider this option. 
* Even in the RD era, the Ω_GW ∼ f^2 scaling can be realized for an extended range of frequencies when the spectrum is narrow <cit.>. In particular, the limit of a delta-function 𝒫_ζ leads to the Ω_GW ∼ f^2 scaling without the restriction on the frequency range. The delta-function peak is, of course, not physical nor realistic, but it would approximate sufficiently peaked spectra. The last item above, i.e., the delta-function power spectrum, was studied in Refs. <cit.> along with other example spectra such as the log-normal function and the top-hat box-shaped function <cit.>. Because of the simplicity and the guaranteed f^2 scaling, we discuss the delta-function power spectrum 𝒫_ζ as a first step to interpret the PTA GW signals in terms of the induced GWs. Let us here summarize the equations for the induced GWs. In the Newtonian gauge, the metric perturbations are given by d s^2 = a^2 [ -(1+2Φ) dη^2 + ((1 - 2Ψ)δ_ij + h_ij/2) dx^i dx^j ], where Φ and Ψ are the scalar perturbations and h_ij is the tensor perturbation, which describes GWs. We have neglected vector perturbations because we focus on the GWs induced by the scalar perturbations throughout this work. In the following, we consider the perfect fluid, which enables us to take Ψ = Φ. Then, from the Einstein equation, the equation of motion of the tensor perturbations is given by h^λ_k''(η) + 2ℋ h^λ_k'(η) + k^2 h^λ_k(η) = 4 S^λ_k(η), where k (k = |k|) and λ denote the wavenumber and the polarization of the tensor perturbations, the prime denotes ∂/∂η, and ℋ = a'/a is the conformal Hubble parameter. The source term S^λ_k is given by S^λ_k = ∫ d^3 q/(2π)^3 e^λ_ij(k̂) q^i q^j [ 2 Φ_q Φ_k-q + (4/(3(1+w))) (ℋ^-1Φ'_q + Φ_q)(ℋ^-1Φ'_k-q + Φ_k-q) ], where e^λ_ij(k̂) is the polarization tensor, and w = 1/3 in an RD era. By solving the equation of motion and taking the late-time limit during an RD era, we obtain the energy density parameter of the induced GWs during an RD era as Ω̃_GW(f) = ∫_0^∞ dv ∫_|1-v|^1+v du 𝒦(u,v) 𝒫_ζ(uk) 𝒫_ζ(vk), where f = k/(2π) is the frequency of the GW and 𝒫_ζ is the power spectrum of the curvature perturbation, and the integration kernel 𝒦 is given by <cit.> 𝒦(u,v) = (3 (4v^2-(1+v^2-u^2)^2)^2 (u^2+v^2-3)^4/(1024 u^8 v^8)) [ ( ln|(3-(u+v)^2)/(3-(u-v)^2)| - 4uv/(u^2+v^2-3) )^2 + π^2 Θ(u+v-√(3)) ]. Note that the energy density parameter of the induced GWs during an RD era asymptotes to Ω̃_GW after the peak-scale perturbations enter the horizon. Taking into account the following matter-dominated and dark-energy-dominated eras, the current energy density of the induced GWs is given by Ω_GW h^2 = 0.43 (g_*/80) (g_*,s/80)^-4/3 Ω_r h^2 Ω̃_GW, where g_* and g_*,s are the effective relativistic degrees of freedom for the energy and entropy densities, respectively, when the GW energy density parameter becomes constant during the RD era, and Ω_r is the current energy density parameter of radiation (Ω_r h^2 ≃ 4.2 × 10^-5). See Ref. <cit.> for the temperature dependence of g_*(T) and g_*,s(T). As motivated above, we consider the monochromatic (delta-function) power spectrum given by 𝒫_ζ = A_ζ δ(ln(k/k_*)), where A_ζ governs the overall normalization and k_* is the wavenumber at which there is a spike in the power spectrum. In this case, the energy density parameter of the induced GWs [Eq. (<ref>)] can be expressed as Ω̃_GW(f) = (3 A_ζ^2/64) ((4-f̃^2)/4)^2 f̃^2 (3f̃^2-2)^2 ( π^2 (3f̃^2-2)^2 Θ(2√(3)-3f̃) + ( 4 + (3f̃^2-2) log|1 - 4/(3f̃^2)| )^2 ) Θ(2-f̃), where the dimensionless wavenumber f̃ ≡ f/f_* is introduced for notational simplicity. 
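To make the shape of this spectrum easy to reproduce, the sketch below evaluates the closed-form expression above and rescales it to the present epoch with the Ω_GW h^2 relation just quoted; the chosen A_ζ, f_*, and g_* = g_*,s = 80 are illustrative assumptions rather than fitted values.

```python
# Sketch: present-day induced-GW spectrum for a delta-function P_zeta,
# using the closed-form RD-era expression above. Inputs are illustrative.
import numpy as np

def omega_gw_rd(f_tilde, a_zeta):
    """RD-era Omega_GW for P_zeta = A_zeta delta(ln(k/k_*)); f_tilde = f/f_*."""
    ft = np.atleast_1d(np.asarray(f_tilde, dtype=float))
    out = np.zeros_like(ft)
    m = (ft > 0.0) & (ft < 2.0)                              # Theta(2 - f~)
    x = ft[m]
    pref = 3.0 * a_zeta**2 / 64.0 * ((4.0 - x**2) / 4.0)**2 * x**2 * (3.0 * x**2 - 2.0)**2
    log_sq = (4.0 + (3.0 * x**2 - 2.0) * np.log(np.abs(1.0 - 4.0 / (3.0 * x**2))))**2
    pi_sq = np.pi**2 * (3.0 * x**2 - 2.0)**2 * (3.0 * x < 2.0 * np.sqrt(3.0))
    out[m] = pref * (pi_sq + log_sq)
    return out

def omega_gw_h2_today(f_hz, f_star_hz, a_zeta, g=80.0, omega_r_h2=4.2e-5):
    """Rescale to today; with g_* = g_*,s = g, (g/80)(g/80)^(-4/3) = (g/80)^(-1/3)."""
    return 0.43 * (g / 80.0)**(-1.0 / 3.0) * omega_r_h2 * omega_gw_rd(f_hz / f_star_hz, a_zeta)

f = np.array([3e-9, 1e-8, 3e-8])                             # Hz, in the PTA band
for fi, oi in zip(f, omega_gw_h2_today(f, f_star_hz=1e-7, a_zeta=1e-2)):
    print(f"f = {fi:.0e} Hz: Omega_GW h^2 ~ {oi:.2e}")
```

The logarithmic spike of the closed-form expression at f̃ = 2/√3 is the well-known resonance of the RD-era kernel and is integrable; it is a feature of the formula, not an artifact of the sketch.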
In the IR limit f̃ ≪ 1, the spectrum reduces to Ω̃_GW ≃ (3 A_ζ^2/4) f̃^2 ( π^2 + ( 2 + log(3f̃^2/4) )^2 ). Indeed, this scales as f^2 up to the logarithmic correction that becomes more and more important in the limit f̃ ≪ 1. Note that the above spectrum depends on the combination A_ζ/f_* up to the logarithmic correction, so one must expect the parameter degeneracy in the direction A_ζ ∝ f_*. This is consistent with the analyses by the PTA collaborations <cit.>. The NANOGrav result <cit.> is shown by blue contours in Fig. <ref>. In this figure, the orange line shows a roughly linear relation A_ζ = 10^-2 (f_*/10^-7 Hz). As the characteristic frequency approaches the NANOGrav frequency range [2 × 10^-9 Hz, 6 × 10^-8 Hz], the relevant part of the GW spectrum ceases to be the IR tail, which has the f^2 scaling. This is why the deviation of the orange straight line from the blue contours becomes larger toward the left part of the figure. The vermilion-shaded region is excluded by the dark radiation constraint Ω_GW h^2 < 1.8 × 10^-6 from the big-bang nucleosynthesis <cit.> because of the overproduction of GWs. See also Ref. <cit.> for a slightly stronger constraint Ω_GW h^2 < 1.7 × 10^-6 from the cosmic microwave background. Already at this stage, the degeneracy allows the parameter space to extend to much higher frequencies than the nanohertz ballpark. This indicates that the new data analyses by the PTA collaborations prefer smaller-mass PBHs than expected so far. § IMPLICATIONS FOR PRIMORDIAL BLACK HOLES In this section, we show that sub-solar mass PBHs are favored by the new data. To this end, we summarize the formulas for PBHs. The formulas are basically the same as those in Ref. <cit.>.[ The abundance of PBHs is subject to a huge uncertainty depending on the calculation scheme. For example, our prescription here is based on Carr's formula (a.k.a. the Press-Schechter formalism) <cit.>, but the result changes significantly when one adopts the peaks theory, within which there are varieties of methods with varying results <cit.>. ] PBHs are formed shortly after an extremely enhanced curvature perturbation enters the Hubble horizon <cit.>. Therefore, the mass of PBHs is related to the wavenumber of the perturbations that produce PBHs: M = γ M_H ≃ 6.1 × 10^-4 M_⊙ (γ/0.2) (g_*(T)/80)^1/2 (g_*,s(T)/80)^-2/3 (6.5 × 10^7 Mpc^-1/k)^2 ≃ 6.1 × 10^-4 M_⊙ (γ/0.2) (g_*(T)/80)^1/2 (g_*,s(T)/80)^-2/3 (1.0 × 10^-7 Hz/f)^2, where γ is the ratio between the PBH mass and the horizon mass, for which we take γ = 0.2 as a fiducial value <cit.>, and T is the temperature at the PBH production. The PBH abundance per log bin in M is given by <cit.> [The tilde on f̃_PBH(M) is introduced to distinguish it from its integrated quantity f_PBH. ] f̃_PBH(M) ≃ γ^3/2 (β(M)/2.6×10^-9) (80/g_*(T))^1/4 (0.12/Ω_DM h^2) (M_⊙/M)^1/2. The production rate β is given by <cit.> β(M) = ∫_δ_c dδ/(√(2π) σ(M)) exp(-δ^2/(2σ^2(M))) ≃ (σ(M)/(√(2π) δ_c)) exp(-δ_c^2/(2σ^2(M))), where δ_c is the threshold value of the overdensity for PBH production. As a fiducial value, we take δ_c = 0.45 <cit.>. The σ is the coarse-grained density contrast, given by σ^2(k) = (16/81) ∫ dq/q (q/k)^4 W^2(q/k) 𝒯^2(q,k^-1) 𝒫_ζ(q). Here, W(x) is a window function, which we take W(x) = e^-x^2/2, and 𝒯 is the transfer function of the density perturbations during an RD era: 𝒯(q,k^-1) = 3 [sin(q/(√(3)k)) - (q/(√(3)k)) cos(q/(√(3)k))]/(q/(√(3)k))^3. The total abundance of the PBHs can be expressed as f_PBH = ∫ dM/M f̃_PBH(M). 
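A companion sketch chains these formulas for the monochromatic spectrum; the delta function removes the dq integral in σ^2(k) (an equivalent analytic form is given just below). The fiducial γ, δ_c, g_*, and Ω_DM h^2 follow the text, while A_ζ and k_* are illustrative, and the exponential in β makes the output move by many orders of magnitude under small changes of the inputs.

```python
# Sketch: PBH mass and abundance for P_zeta = A_zeta * delta(ln(q/k_*)),
# chaining the formulas above. A_zeta and k_* are illustrative assumptions.
import math

GAMMA, DELTA_C, G_STAR, OMEGA_DM_H2 = 0.2, 0.45, 80.0, 0.12

def mass_msun(k_mpc):
    """M = gamma * M_H in M_sun for a mode k in Mpc^-1 (g_* = g_*,s = 80 assumed)."""
    return 6.1e-4 * (GAMMA / 0.2) * (6.5e7 / k_mpc)**2

def sigma2(k_over_kstar, a_zeta):
    """Coarse-grained variance; the delta-function P_zeta collapses the integral."""
    x = (1.0 / k_over_kstar) / math.sqrt(3.0)          # q/(sqrt(3) k) with q = k_*
    window2 = math.exp(-(1.0 / k_over_kstar)**2)       # W^2(q/k) for W(x) = exp(-x^2/2)
    transfer = 3.0 * (math.sin(x) - x * math.cos(x)) / x**3
    return 16.0 / 81.0 * (1.0 / k_over_kstar)**4 * window2 * transfer**2 * a_zeta

def f_pbh_tilde(k_over_kstar, k_star_mpc, a_zeta):
    s = math.sqrt(sigma2(k_over_kstar, a_zeta))
    beta = s / (math.sqrt(2.0 * math.pi) * DELTA_C) * math.exp(-DELTA_C**2 / (2.0 * s**2))
    m = mass_msun(k_over_kstar * k_star_mpc)
    return m, GAMMA**1.5 * (beta / 2.6e-9) * (80.0 / G_STAR)**0.25 * (0.12 / OMEGA_DM_H2) / math.sqrt(m)

k_star = 6.5e7                                         # Mpc^-1, illustrative
for a_zeta in (0.02, 0.05):                            # illustrative amplitudes
    grid = [0.3 + 0.01 * i for i in range(270)]        # scan k/k_* around the peak of sigma^2
    m_best, f_best = max((f_pbh_tilde(kk, k_star, a_zeta) for kk in grid), key=lambda t: t[1])
    print(f"A_zeta = {a_zeta}: peak f_PBH ~ {f_best:.1e} at M ~ {m_best:.1e} Msun")
```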
In the case of the monochromatic power spectrum, σ^2(k) can be integrated analytically σ^2(k) = 16 A_ζ e^-1/k̃^2( cos^2 (1/√(3)k̃) + k̃( 3 k̃sin^2 ( 1/√(3)k̃) - √(3)sin( 2/√(3)k̃) ) ), where k̃≡ k / k_*. Combining these equations and the parameter degeneracy relation Eq. (<ref>), we can map the degeneracy relation onto the M-f_PBH plane: f_PBH∼f̃_PBH(M) ∼ 5.6 × 10^10ν^1/2/δ_c( M/6.1 × 10^-8M_⊙)^-3/4exp( - δ_c^2/2 ν( M/6.1 × 10^-8M_⊙)^1/2), where we have approximated σ^2 as σ^2 (k(M)) ∼ν A_ζ. The coefficient ν is introduced to show the sensitivity of f_PBH on the overall normalization of σ^2. In Fig. <ref>, we set ν = 0.2. The slope does not fit perfectly because the GW spectrum does not have a perfect f^2 scaling. This equation is not a rigorous fit but just a rough guide to the location of the contours mapped from Fig. <ref> into Fig. <ref>. The blue contours in Fig. <ref> are the main result of this paper. This shows that the PBHs much lighter than the Sun are favored by the new PTA data analysis results. If the PBH abundance is significant, they have masses of order 10^-4 M_⊙. More generally, the contours show the range [5 × 10^-5≤ M/M_⊙≤ 2 × 10^-3]. It should be emphasized that this is based on the assumption of the delta function 𝒫_ζ, but our conclusion would not qualitatively change even if we take into account the finite width of the realistic power spectrum. The shaded regions in the upper part of Fig. <ref> show the existing observational constraints on the abundance of PBHs. We see that there is a lower as well as an upper bound on the mass of the PBHs. In other words, the extension to the degeneracy direction is limited by the overproduction of PBHs. Again, the quantitative values of these bounds will change when we drop off the assumption of the delta function curvature spectrum, but our conclusion will be intact at least when the power spectrum has a narrow peak. See the analyses of NANOGrav <cit.> and EPTA/InPTA <cit.> for the curvature spectra with a finite width. § CONCLUSION AND DISCUSSION In this work, we have discussed the implications of the recent PTA observations of stochastic GWs on PBHs. The large scalar perturbations can produce not only PBHs but also strong GWs through nonlinear interactions. If the detected stochastic GWs originate from scalar-induced GWs, we can obtain implications on the PBH mass distribution. We have found that, if PBHs are produced by the monochromatic curvature power spectrum, the stochastic GW signals can be explained by the large-scale tail of the induced GW spectrum. It is interesting to note that the favored region on the M-f_PBH plane to explain the excess events of OGLE <cit.> (the yellow shaded region in Fig. <ref>) <cit.>, has a small overlap with the blue contours in Fig. <ref>.[ The relation between the 12.5-year NANOGrav data and the OGLE events has been studied in the context of PBHs produced during an era with a general w in Ref. <cit.>. ] It is remarkable that the OGLE microlensing events and the SGWB signals from the PTA data can be simultaneously explained by PBHs of mass M≈ 6 × 10^-5 M_⊙. Remember that our analyses are based on the delta-function curvature spectrum for simplicity. It is interesting if the overlap becomes larger when we consider a more realistic curvature perturbation spectrum. We leave this possibility for future work. Examples of the scalar-induced GW spectrum are shown in Fig. <ref> in comparison with the 14 lowest-frequency bins (see Appendix C of Ref. <cit.>) of the NANOGrav 15-year data <cit.>. 
The black solid line corresponds to M = 1.2 × 10^-4 M_⊙ and f_PBH = 2 × 10^-2, which is inside the 68% credible region (dark blue contour) in Fig. <ref>. The black dot-dashed line corresponds to M = 6 × 10^-5 M_⊙ and f_PBH = 2 × 10^-2, which is inside the 95% credible region (light blue contour) and the yellow shaded region to explain the OGLE events. In our previous work <cit.>, we discussed that the PBH interpretation of the common-spectral processes in the NANOGrav data can be tested by future observations of GWs at a different frequency range originating from the merger events of the binary PBHs. This also constitutes the SGWB because of the superposition of many binary mergers in the Universe. In this letter, we have emphasized that the preferred mass range of PBHs associated with the induced GWs that explain the new PTA data is shifted to a smaller mass range. It is then natural to ask about the corresponding change of the observational prospects to test the PBH scenario. To calculate the merger-based GW spectrum, we have adopted the methods concisely summarized in Appendices of Ref. <cit.>, essentially based on the merger rate calculations in Refs. <cit.> and the source-frame spectrum emitted at a single merger event studied in Refs. <cit.>. For simplicity, we take the representing mass M corresponding to k_* and neglect the mass distribution of PBHs in the calculation of the merger-based GW spectrum. The extension of the f^2/3 spectrum to the far IR may break down at some frequency as discussed in Appendix B of Ref. <cit.>. The comparison of the merger-based GWs (shown by black lines) and the sensitivity curves of future GW detectors is shown in Fig. <ref>. Compared to Fig. 5 in Ref. <cit.>, the peaks of the merger-based GWs are shifted to the high-frequency side. This is because the corresponding binary PBH masses have become lighter in view of the new PTA data. Unfortunately, the observational prospects by the future GW detectors become worse due to this frequency shift. Nevertheless, Fig. <ref> shows that the SGWB originating from 𝒪(10^-4) M_⊙ PBH binary mergers can be tested by future GW observations.[ We adopted different sensitivity curves from those adopted in the previous work <cit.>. See the caption of Fig. <ref>. ] To achieve this goal, foreground subtraction is essential <cit.>.[ We thank Marek Lewicki for his comment on this point. ] An alternative route is the development of MHz–GHz GW detectors. See, e.g., Refs. <cit.> and references therein. Our results are consistent with those in Ref. <cit.> because we basically use the same equations and take similar values of the parameters as in the reference. However, as mentioned in Ref. <cit.>, our results and those in Ref. <cit.> on the PBH abundance are different from those in Ref. <cit.>, in which they show that PBHs are overproduced if we do not take into account the non-Gaussianity that decreases the PBH abundance with the amplitude of the induced GWs fixed.[ The possibility of the PBH overproduction was also discussed with the IPTA Data Release 2 and the NANOGrav 12.5-year datasets in Ref. <cit.>. ] Although we do not take into account the non-Gaussianity that decreases the PBH abundance, we have still found that PBHs are not overproduced. The main discrepancy comes from the difference in the window function and/or the threshold value, δ_c, as mentioned in Ref. <cit.>. 
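Since the window function and δ_c enter the abundance only through β ∝ exp(-δ_c^2/(2σ^2)), the size of this uncertainty is easy to illustrate; the numbers below are assumptions chosen for illustration and are not taken from any of the cited analyses.

```python
# Illustration of the exponential sensitivity of the PBH abundance to delta_c
# and to the normalization of sigma (window-function choice). Values are
# illustrative assumptions only.
import math

def beta(sigma, delta_c):
    return sigma / (math.sqrt(2.0 * math.pi) * delta_c) * math.exp(-delta_c**2 / (2.0 * sigma**2))

sigma_ref = 0.06                        # assumed coarse-grained amplitude
for delta_c in (0.4, 0.45, 0.5):
    print(f"delta_c = {delta_c}: beta = {beta(sigma_ref, delta_c):.1e}")
for factor in (0.8, 1.0, 1.2):          # mimics a different window normalization of sigma
    print(f"sigma x {factor}: beta = {beta(sigma_ref * factor, 0.45):.1e}")
```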
Our results show that whether the recent detection of the stochastic GWs is associated with PBHs or not significantly depends on the uncertainties on the choice of window function and/or δ_c. One of the reasons why we focus on the delta function curvature spectrum in this paper even though it is unrealistic is the f^2 scaling suggested by Fig. 1 (b) of the NANOGrav paper <cit.> as discussed above. It is tempting to argue that naively combining the new data () analysis and the full data () analysis in Fig. 1 of EPTA/InPTA paper <cit.>, γ≈ 3 would be obtained. However, one should be careful about the combination of data sets in mild tension. It will be interesting to discuss how well the narrow peak PBH scenarios can also explain the data of other PTA collaborations. We will leave such comprehensive analyses for future work. K.I. and T.T. thank Satoshi Shirai for stimulating discussions. K.I. was supported by the Kavli Institute for Cosmological Physics at the University of Chicago through an endowment from the Kavli Foundation and its founder Fred Kavli. The work of T.T. was supported by IBS under the project code, IBS-R018-D1. The work of K.K. was in part supported by MEXT KAKENHI Grant Number JP22H05270. apsrev4-1
http://arxiv.org/abs/2306.03158v1
20230605181209
Task-Oriented Metaverse Design in the 6G Era
[ "Zhen Meng", "Changyang She", "Guodong Zhao", "Muhammad A. Imran", "Mischa Dohler", "Yonghui Li", "Branka Vucetic" ]
cs.NI
[ "cs.NI", "eess.SP" ]
Task-Oriented Metaverse Design in the 6G Era Zhen Meng, Student Member, IEEE, Changyang She, Senior Member, IEEE, Guodong Zhao, Senior Member, IEEE, Muhammad A. Imran, Fellow, IEEE, Mischa Dohler, Fellow, IEEE, Yonghui Li, Fellow, IEEE, and Branka Vucetic, Life Fellow, IEEE July 31, 2023 =============================================================================== As an emerging concept, the Metaverse has the potential to revolutionize the social interaction in the post-pandemic era by establishing a digital world for online education, remote healthcare, immersive business, intelligent transportation, and advanced manufacturing. The goal is ambitious, yet the methodologies and technologies to achieve the full vision of the Metaverse remain unclear. In this paper, we first introduce the three infrastructure pillars that lay the foundation of the Metaverse, i.e., human-computer interfaces, sensing and communication systems, and network architectures. Then, we depict the roadmap towards the Metaverse that consists of four stages with different applications. To support diverse applications in the Metaverse, we put forward a novel design methodology: task-oriented design, and further review the challenges and the potential solutions. In the case study, we develop a prototype to illustrate how to synchronize a real-world device and its digital model in the Metaverse by task-oriented design, where a deep reinforcement learning algorithm is adopted to minimize the required communication throughput by optimizing the sampling and prediction systems subject to a synchronization error constraint. Metaverse, 6G, task-oriented design § INTRODUCTION The Metaverse is a digital world that will revolutionize the interactions among humans, machines, and environments by providing a shared, unified, perpetual, and inter-operable realm for participants from all over the world <cit.>. The digital world could be a pure virtual space or a digital mirror of the physical world that has the ability to reprogram the physical world in real time. 
It lays the foundation for the evolution of different vertical industries including education, entertainment, healthcare, manufacturing, transportation, and immersive business. This ambitious vision brings significant challenges to the development of next-generation communication networks. It is natural to raise the following questions: Q1: Is the available infrastructure sufficient for the Metaverse? To support an application in the Metaverse, the system needs to execute a sequence of interdependent tasks. A task is an activity that needs to be completed within a period of time or by a deadline, such as the pose and eye tracking, positioning, haptic control and feedback, and semantic segmentation. State-of-the-art infrastructure cannot meet the requirements of diverse emerging applications and tasks in the Metaverse. Specifically, existing input/output systems, like the touch screen, keyboards, and mouses, are inconvenient in supporting new tasks. Thus, new Human-Computer Interface (HCI), including Virtual/Augmented Reality (VR/AR), Tactile Internet, and brain-computer interface, will lay the foundation for the Metaverse. Sensing and communication technologies play critical roles in providing timely feedback and seamless connections in the Metaverse with a real-world counterpart. To reduce the infrastructure cost, one promising approach is to exploit widely deployed mobile networks for both sensing and communications. Furthermore, the Sixth Generation (6G) networks will bridge new HCI and sensing & communication systems. Due to the long propagation delay, executing all tasks on a global server cannot meet the latency requirements of tasks. A new multi-tier network architecture that can coordinate computing, communication, and storage resources at the end-user devices, edge/local servers, and global servers efficiently is essential for supporting interdependent tasks in the Metaverse <cit.>. In summary, HCI, sensing and communication technologies, as well as network architectures will serve as the three pillars of the Metaverse. Even with the above infrastructure, supporting emerging applications in the Metaverse is not straightforward. Q2: How to guarantee the Key Performance Indicators (KPIs) of diverse applications/tasks in the Metaverse? The highly integrated and multifaceted demands of applications in the Metaverse impose stringent requirements on KPIs that are much more diverse than the KPIs defined in the three typical scenarios in the Fifth Generation (5G) mobile communication standard, i.e., Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (uRLLC), and Massive Machine Type Communications (mMTC) <cit.>. Further considering that one application consists of multiple tasks, to meet the specific requirements of the application in the Metaverse, we should analyze the KPIs at task level, referred to as task-oriented KPIs. For example, to generate haptic feedback, the system should meet the Just Noticeable Difference (JND) constraint, which is the minimum difference between two force signals that is noticeable to users. The network functions and communication KPIs in 5G networks are task agnostic and hence cannot guarantee task level KPIs. The existing communication network design approach divides the whole system into multiple sub-modules for separate optimization and cannot break the barriers among the sub-modules. As a result, it is difficult to provide End-to-End (E2E) performance guarantees. 
To support the Metaverse in 6G mobile networks, we should revisit the following questions: Q3: What are the issues with the existing design approaches? Do we need new design methodologies in 6G? To improve E2E performance, cross-system design has been investigated in the existing literature <cit.>. To guarantee the control performance with stochastic wireless channels and limited communication resources in mission-critical applications, a predictive control and communication co-design system was introduced in <cit.>, where scheduling policy and transmission power are jointly optimized. To achieve substantial gains in spectral, energy, hardware and cost efficiency with mMTC, Integrated Sensing and Communication (ISAC) was developed in <cit.> to support sensing and communications simultaneously. Considering that end-user devices have limited computing, communication, and storage resources, a cloud-edge-end computing framework-driven solution was introduced in <cit.>. However, the cross-system design problems are in general non-convex or NP-hard, and novel design methodologies for real-time implementation are in urgent need. In this work, we introduce the task-oriented design for the Metaverse in the 6G era. The major contributions of this paper are summarized as follows: 1) We holistically illustrate the three infrastructure pillars that the Metaverse will be built upon, and depict the roadmap toward the full vision of the Metaverse. 2) We comprehensively review the challenges of task-oriented design in the Metaverse; To tackle these challenges, we put forward potential solutions from a system-level perspective. 3) We build a prototype to demonstrate the task-oriented design. The goal of the task is to synchronize a real-world device and its digital model in the Metaverse. § PILLARS OF THE METAVERSE In this section, we review the Metaverse infrastructure and its connections to the task-oriented design. As shown in Fig. <ref>, it consists of three pillars: the human-computer interface (HCI), sensing & communications, and network architecture. §.§ Human-Computer Interface Different from existing input/output systems that are designed to process video and audio signals, future HCI should be carefully designed to support new tasks in the Metaverse. §.§.§ XR Head-Mounted Devices The development of XR devices has greatly improved the user experience by identifying the mobility of the head-mounted device and rendering the three-dimensional (3D) video accordingly. Existing XR systems mainly focus on downlink video streaming. To further enable eye contact and expression reconstruction in the Metaverse, eye tracking and 3D modeling techniques should be integrated into XR systems. By predicting the moving direction of eyes, the XR system can render and transmit the field-of-view to be requested by users. Thus, we can improve the trade-off between data rate and latency in wireless XR. §.§.§ Tactile Devices Tactile devices are essential for supporting haptic feedback in the Metaverse. With a large number of tactile sensors and actuators, it is possible to recognize users' poses and gestures. Once the user hits a virtual item in the Metaverse, the tactile devices generate feedback to users via vibrations and resistance. Most existing tactile devices cannot provide tactile feedback for the entire human body. 
Several issues remain open in the development of whole-body tactile devices: 1) battery-life time of wearable devices is limited; 2) low-complexity graph signal processing that takes the topology of the sensors/actuators is not available; 3) the actuators should be controlled by engines and algorithms to mimic the tactile experience, which remains an open challenge. §.§.§ Brain-Computer Interface The brain-computer interface can be used for emotion recognition and reconstruction in the Metaverse. Existing brain-computer interfaces suffer from low classification accuracy and long processing delay. Due to these issues, the brain-computer interface may not be able to work as the stand-alone human-computer interface in the near future, but it may assist VR devices or tactile devices to improve the users' experience, as demonstrated in early trials by Meta. §.§.§ Combination of Different Human-Computer Interfaces Different human-computer interfaces have different data structures, generate responses in different time scales, and may support different tasks in one application. Developing a system that manages multiple human-computer interfaces brings unprecedented challenges, and is crucial for improving the users' experience in the Metaverse. To enable the interactions among users with different types of devices, new standards are needed. §.§ Sensing and Communications Sensing and communication technologies enable timely state updates of real-world devices/environments in the Metaverse. Thus, they are critical to the establishment of the digital world. §.§.§ Devices with Communication Modules Smart devices equipped with communication modules can update their states to the Metaverse. For example, a real-world robotic arm measures the angles, speeds, forces, and torques of the joints and sends the states to a server for reconstructing the digital robotic arm. As the number of devices increases, the communication resources become the bottleneck of the Metaverse. Improving the trade-off between the communication resource utilization efficiency and the synchronization accuracy/information freshness is a challenging problem. §.§.§ Environments without Communication Modules Some entities in real-world environments do not have communication modules, such as trees, buildings, pedestrians, etc. To collect their states in the digitally twinned Metaverse, we need a large number of external sensors or cameras. For example, Instant-Nerf is a neural rendering model developed by NVIDIA that can render 2D photos into 3D scenes in a few milliseconds <cit.>. To further understand the environment, semantic segmentation is crucial <cit.>. Nevertheless, most of the existing segmentation algorithms require a considerable amount of computation resources, and the processing time remains the bottleneck of real-time interactions. §.§.§ Joint Sensing-Communication Systems The cost of deploying and operating a large number of sensors and cameras could be extremely high. By integrating communication and sensing functionalities into widely deployed mobile networks, it is possible to reduce the cost. Thus, the Integrated Sensing and Communication (ISAC) system is a practical approach that collects real-world information for the Metaverse <cit.>. Note that there are tradeoffs between KPIs of different tasks and the resource utilization efficiency of ISAC systems, but a universal design framework for different tasks is still missing. 
§.§ Network Architecture 6G networks will bridge HCI and sensing & communication devices, and the Metaverse. §.§.§ Multi-tier Computing Architecture Developing the Metaverse on a global server for all the users and devices around the world would be very challenging due to long communication delay and the limited communication throughput. Multi-tier computing is believed to be a promising architecture that can coordinate interdependent tasks in the Metaverse by exploiting distributed computing, storage, and communication resources in central servers, local servers, and end-user devices <cit.>. Meanwhile, the multi-tier computing raises new challenges in 6G core networks and radio access networks (RANs). §.§.§ Core Networks 5G core networks manage resources and quality-of-service at the application level. The session management function will create a new protocol data unit session when there is a new service request. How to coordinate multiple tasks for one application remains unclear. To address this issue, several promising techniques have been investigated in the existing literate: 1) The authors of <cit.> developed a semantic-effectiveness plane task-level information processing. 2) In <cit.>, the authors built a knowledge pool for reasoning-driven AI-native systems that enable online learning and fast inference of different network functions. §.§.§ Radio Access Networks Most new HCI and sensing & communication devices will access to RANs for better user experience and flexible deployment. As a result, 6G RANs should support massive devices with diverse KPI requirements. To improve user experience in real-time interactions, ISAC is a promising technology that exploits a shared multi-antenna system and advanced signal processing algorithms for data transmission and environment sensing <cit.>. In addition, a space-air-ground-sea integrated network is promising for enabling seamless connectivity for global interactions in the Metaverse <cit.>. As the tasks and applications in the Metaverse evolve over time, Open-Radio Access Networks (O-RAN) with programmable network functions can reduce the cost for network deployment and upgrades significantly <cit.>. § ROAD MAP TOWARDS THE METAVERSE In this section, we discuss the road map towards the full vision of the Metaverse as illustrated in Fig. <ref>. §.§ Establish Multi-tier Metaverse in Multi-tier Architecture The first step toward the Metaverse is to build digital worlds. There are three types of digital worlds: 1) an imaginary environment that does not have a real-world counterpart; 2) a digital twin of a real-world environment; and 3) a digital world overlays on the physical world or even has the ability to reprogram the physical world in real time. To provide quick responses from the Metaverse to users, we need to use the computing, storage, and communication resources of a local server or the end-user device. Then, the states of the user or its digital model are updated to the global sever for synchronization. The multi-tier network architecture lays the foundation for building multi-tier digital worlds that support real-time interactions among users from all over the world. §.§ Single-User Activities in the Metaverse For single-user activities, all the virtual objects and environments can be built on the end-user device, e.g., a personal computer. With the help of a variety of HCI, the user can interact with everything in the digital world, where several applications in education, entertainment, designing, and planning become possible. 
For example, by synchronizing users' actions with their digital models in the Metaverse, the users can operate virtual objects and create virtual content (e.g., driving a vehicle or painting). Nevertheless, establishing a digital world on the end-user device is not easy, as the device has limited computing and storage resources. Thus, the processing delay could be the bottleneck for real-time interactions. To address this issue, low-complexity 3D reconstruction and segmentation algorithms are in urgent need. §.§ Local Interactions in the Metaverse In some private networks or local area networks, information is exchanged among devices and users in a small area. In these scenarios, all the devices and human users can interact with each other via a local Metaverse. For example, in a smart factory, sensors monitor manufacturing processes and update their states to the local sever, where a digital twin of the factory is built <cit.>. In the digital factory, it is possible to simulate the outcomes of different actions. If an accident is detected in the simulation, the local server sends commands to actuators to stop the processes in an anticipatory manner. In addition, the data is stored and processed in the local server that is not connected to the Internet. Therefore, this approach can protect users' privacy and avoid security issues. §.§ Global Interactions in the Metaverse The ultimate goal of the Metaverse is to support global interactions for a large number of users using different types of HCI. In this stage, remote healthcare, immersive business, and online education will be possible. Latency remains one of the major issues for global interactions. Specifically, the propagation delay is inevitable in long-distance communications. Stochastic network congestion leads to long queueing delay. In addition, the re-transmission scheme in the existing Transmission Control Protocol/Internet Protocol brings significant latency. Although some interesting ideas have been put forward to achieve real-time interactions <cit.>, the implementation in large-scale networks remains a challenging goal. § TASK-ORIENTED DESIGN FRAMEWORK In this section, we propose a task-oriented design framework. In general, there are three types of tasks in the Metaverse: 1) environment sensing or measurements with HCI for constructing the digital world, 2) data/signal processing for understanding, prediction, inference, and generating feedback, and 3) communications for information exchange among human users, machine-type devices, and servers. Let us take a virtual conference as an example to illustrate the tasks. As body language plays an important role in virtual conference, the HCI executes the task of identifying the human pose and eye tracking. After that, the computer/sever processes the measured/estimated data. In this stage, typical tasks include segmentation, object detection, and rendering of 3D scenes. Finally, based on the interaction between the avatars in the Metaverse (such as shaking hands), the feedback is generated and sent back to users by the Tactile Internet. §.§ Challenges of Task-Oriented Design §.§.§ Data Structures The data structure of a task depends on the HCI or environment sensing technologies. Traditional speech and image signals are represented by time-series data and the red-green-blue (RGB) model, respectively. Nevertheless, spatial correlation is critical for the tactile signals and brainwave signals, and relies on the topology of the sensors. 
The topology information is useful in signal processing, and may facilitate the execution of tasks. For example, the signals generated by a radar system or depth-sensing camera, such as point-cloud data, are converted into 3D tensors in the Euclidean space before they can be processed by convolutional neural networks. This procedure causes additional computational overhead and processing delay. To reduce overhead and improve the performance of a task, the authors of <cit.> developed PointNet to handle a range of tasks in environment sensing, such as 3D shape classification and segmentation. Nevertheless, a widely accepted standard for data storage, processing, or communications is still missing in the Metaverse, and it will lay the foundation for immersive interactions among human users, machine-type devices, and environments. §.§.§ Task-Oriented KPIs Diverse tasks in the Metaverse have stringent requirements on a range of KPIs, which are still difficult to fulfill. For example, in applications that require low-latency feedback, the user-experienced delay should be close to zero, but the propagation delay could be up to dozens of milliseconds when the communication distance is hundreds or thousands of kilometers. Furthermore, the KPIs defined in the 5G standard, such as throughput, latency, and reliability, are not the same as the task-oriented KPIs, as illustrated in Fig. <ref>. For instance, in haptic communications, it is natural to raise a question: Do we really need to guarantee 99.999% reliability in the communication system in order to achieve the target just-noticeable difference (JND)? The impact of network resource allocation on the task-oriented KPIs remains unclear, and there is no theoretical model or closed-form expression that can quantify their relationships. To overcome this difficulty, we need novel design methodologies. §.§.§ Multi-Task Processing and Coordination With the multi-tier network architecture, tasks of an application may be executed by the end-user device, an edge/local server, or the cloud server. The offloading and coordination of multiple tasks are not trivial since the tasks are interdependent. For some highly interactive applications, the end-user device senses the behavior of the user, and then communicates with the local server, where the feedback is generated. Finally, the local servers synchronize the states of users in a cloud server. Delays or packet losses in any of the tasks will have a serious impact on the overall performance of the application. To provide satisfactory user experience in the Metaverse, we need to break the barriers among sensing, communication, and computing systems, and jointly design the whole network. §.§ Potential Solutions §.§.§ Cross-System Design Existing HCI, sensing, communication, and computing systems are developed separately. This design approach leads to sub-optimal solutions, brings extra communication overhead for coordinating multiple tasks, and can hardly meet the task-oriented KPIs. To address these issues, a cross-system design has been investigated in the existing literature. There are several existing cross-system design approaches. (1) As shown in <cit.>, for reconstruction tasks involving text sentences, sounds, images, and point-cloud data, joint source and channel coding makes it possible to achieve a better quality of service at low signal-to-noise ratios. Nevertheless, complicated coding schemes may bring extra processing delay, which remains an issue in ultra-low latency communications. 
(2) Considering the cost of deploying a large number of sensors, integrating sensing into communication systems is a promising approach, as cellular networks have been widely deployed <cit.>. By utilizing communication signals in environmental sensing, cellular networks can support a variety of tasks, such as localization, object detection, and health monitoring. (3) Given that state observations are outdated in some tasks, the Metaverse needs to respond to users' actions in an anticipatory manner. To achieve this goal, prediction and communication co-design is promising, especially for applications in the Tactile Internet that require ultra-low latency <cit.>. It is worth noting that cross-system problems are in general very complicated and may not have well-established models. As a result, most of the existing analytical tools and optimization algorithms are not applicable. §.§.§ Domain-Knowledge-Assisted Deep Learning To solve the above cross-system design problems, data-driven deep learning methods are promising, as they do not rely on theoretical models or assumptions. However, straightforward applications of deep learning may not generalize well across diverse task-oriented KPIs and data structures <cit.>. To address this issue, one should exploit domain knowledge in feature engineering, sample selection, value function design, etc. When deep learning is adopted in task-oriented design, there are three major issues. (1) Most of the existing deep neural networks work well on small-scale problems. As the scale of the problem increases, the training/inference time increases rapidly. In the Metaverse, there could be millions or billions of users and devices, and thus scalability remains an open issue. (2) To support various tasks in different sensing and communication environments, deep learning algorithms trained on a data set should achieve good performance in different use cases after a few steps of fine-tuning. This generalization ability is critical for using deep learning in the Metaverse. (3) Most deep learning algorithms do not offer a performance guarantee in terms of classification or regression accuracy. However, the KPIs required by some mission-critical tasks are sensitive to the outcomes of the learning algorithms. Improving the safety of deep/reinforcement learning algorithms by exploiting domain knowledge is a promising and vital approach. §.§.§ Universal Design The Metaverse aims to provide better interactions among users with different cultural backgrounds and health conditions (e.g., careers, nationalities, and abilities or disabilities). Different users may have different preferences, habits, and cognition. Meanwhile, they may use different types of HCI devices with different data structures. The diversity of users brings significant challenges in the design of the Metaverse, and universal design, which considers the diverse needs and abilities of all users throughout the design process, standardization, and government regulation, is essential for the success of the Metaverse. For example, the universal design platform Omniverse can meet the demands of users from different backgrounds (e.g., artists, developers, and enterprises), where Universal Scene Description is promising as an open and extensible standard language for the 3D Internet that eliminates the barriers among different user communities <cit.>. Nevertheless, a lot of effort is still needed in universal design. 
§ A CASE STUDY Timely and accurate synchronization between the real-world device and its digital model is the foundation of the Metaverse. In this section, we show how to jointly optimize the sampling, communication, and prediction modules for the synchronization task shown in Fig. <ref>. We use domain knowledge to design a deep reinforcement learning (DRL) algorithm to minimize the communication load subject to an average tracking error constraint. The state is defined as the mean square error (MSE) between the trajectories of the real-world robotic arm and its digital model in the Metaverse. The action includes the prediction horizon and sampling rate. The reward is the communication load. Unlike the latency and reliability constraints in communication system design, our task-oriented design approach aims to guarantee a task-oriented KPI, i.e., the average tracking error. More details about the experiment and the DRL algorithm can be found in <cit.>. §.§ Prototype Setup To validate the algorithm, we built a prototype as shown in Fig. <ref>, where a virtual robotic arm is synchronized with a physical robotic arm in the real world. Specifically, the sensor attached to the robotic arm measures its trajectory (i.e., the angle of the first joint) at a frequency of 1 kHz. Then, the measured trajectory is sampled, i.e., decimated, and transmitted to the Metaverse, where the server predicts and reconstructs the future trajectory to reduce the latency experienced by the user. Then, the digital model of the robotic arm follows the predicted trajectory and feeds back the prediction results to the real-world robotic arm. Finally, the real robotic arm computes the MSE between the measured trajectory and the predicted trajectory, and the DRL algorithm is applied to adjust the sampling rate and the prediction horizon. For data collection, we consider the motion-controlled robotic arm application, where the real robotic arm is controlled by a human operator. Other details of the system setup and the DRL algorithm can be found in <cit.>. §.§ Performance Evaluation There are two performance metrics: the average communication load and the average tracking error. In the case without sampling, the packet rate in the communication system is 1,000 packets/s, which is used to normalize the communication load. For example, if the average packet rate is 150 packets/s, then the normalized average communication load is 15%. The average tracking error is measured by the average MSE between the real-world trajectory and the reconstructed trajectory in the Metaverse. The results in Fig. <ref> show the average tracking error and the normalized average communication load in the training stage of the DRL algorithm, where the average tracking error constraint is 0.007^∘. The results show that the task-oriented design approach can meet the average tracking error constraint and can reduce the average communication load to 13% of that in the system without sampling. In Fig. <ref>, we evaluate the trade-off between the normalized average communication load and the average tracking error, where different packet loss probabilities in the communication system are considered, i.e., p_loss = 0 and 10%. The results show that with a smaller packet loss probability, it is possible to achieve a better trade-off between the normalized average communication load and the average tracking error. 
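To make the task-oriented formulation of this case study concrete, the sketch below gives a minimal, hypothetical rendering of the environment-agent interface it implies (state = tracking MSE, action = sampling rate and prediction horizon, reward = communication load penalized when the average tracking-error constraint is violated). The class name, the placeholder error model, and the penalty-based constraint handling are illustrative assumptions and not the implementation reported in <cit.>.

```python
import numpy as np

class ArmSyncEnv:
    """Minimal sketch of the synchronization task (hypothetical dynamics).

    State  : recent tracking MSE between the real and virtual arm trajectories.
    Action : (sampling rate in packets/s, prediction horizon in ms).
    Reward : negative normalized communication load, penalized when the
             average tracking error budget is exceeded.
    """

    def __init__(self, full_rate=1000.0, error_budget=0.007, packet_loss=0.0):
        self.full_rate = full_rate        # 1 kHz sensor rate (no decimation)
        self.error_budget = error_budget  # average tracking error constraint (deg)
        self.packet_loss = packet_loss
        self.mse = 0.0

    def _tracking_error(self, rate, horizon):
        # Placeholder model: the error grows when fewer packets arrive and when
        # the predictor must look further ahead. In the prototype, this quantity
        # is measured rather than modeled.
        delivered = rate * (1.0 - self.packet_loss)
        return 0.5 / max(delivered, 1.0) + 1e-5 * horizon

    def step(self, action):
        rate, horizon = action
        self.mse = self._tracking_error(rate, horizon)
        load = rate / self.full_rate                    # normalized communication load
        penalty = 10.0 * max(0.0, self.mse - self.error_budget)
        reward = -load - penalty                        # low load while meeting the constraint
        return np.array([self.mse]), reward, False, {}

env = ArmSyncEnv(packet_loss=0.1)
obs, reward, done, _ = env.step((150.0, 20.0))  # 150 packets/s ~ 15 % normalized load
print(obs, reward)
```

In practice, the tracking error would come from the measured trajectories rather than a closed-form model, and the constraint could be handled with, e.g., a Lagrangian or safe-RL formulation instead of a fixed penalty; a standard actor-critic agent would then learn the mapping from the observed MSE to the sampling rate and prediction horizon.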
In a communication system with a packet loss probability of 10%, our task-oriented design approach can reduce the normalized average communication load to 27% when the average tracking error constraint is 0.002^∘. This observation indicates that by adjusting the sampling rate (i.e., the communication load in Fig. <ref>), it is possible to meet the requirement of a task in communication systems with high packet loss probabilities, e.g., 10%. § CONCLUSION AND FUTURE DIRECTIONS In this paper, we introduced the three infrastructure pillars and depicted the road map toward the full vision of the Metaverse. Then, we proposed a task-oriented design approach followed by a prototype in a case study. In future 6G standards, we need new network functions for task-level resource management. As the tasks may evolve according to the road map of the Metaverse, an O-RAN interface could be a promising direction, as it allows network operators to update network functions according to new applications and tasks in the Metaverse. Although machine learning has been adopted in 3GPP as a promising tool for developing network functions, improving the scalability and generalization ability of learning-based network functions remains an open issue. Since the training of learning-based network functions may lead to huge energy consumption, we also need to reconsider the energy efficiency of the whole network. Finally, privacy, security, and trust are critical for the Metaverse, where data is shared among different network functions. Zhen Meng ([email protected]) received his B.Eng. degree from the School of Engineering, University of Glasgow, UK, in 2019. He is currently pursuing his Ph.D. degree at the University of Glasgow, UK. His research interests include ultra-reliable and low-latency communications, communication-robotics co-design, the Metaverse, and cyber-physical systems. Changyang She ([email protected]) is a lecturer-level research fellow at the University of Sydney. He is a recipient of the Australian Research Council Discovery Early Career Researcher Award 2021. His research interests lie in the areas of ultra-reliable and low-latency communications, wireless artificial intelligence, and the Metaverse. Guodong Zhao ([email protected]) is a Senior Lecturer in the James Watt School of Engineering at the University of Glasgow, UK. He is an IEEE Senior Member and the senior academic lead of the Scotland 5G Centre. His research interests are in the areas of artificial intelligence, communications, robotics, and the Metaverse. Muhammad Ali Imran ([email protected]) is a full professor in communication systems and the head of the Autonomous System and Connectivity (ASC) Research Division in the James Watt School of Engineering at the University of Glasgow, UK. He is a founding member of the Scotland 5G Centre with expertise in 5G technologies for industrial and robotics applications. Mischa Dohler ([email protected]) is now Chief Architect at Ericsson Inc. in Silicon Valley, working on cutting-edge topics of 6G, the Metaverse, XR, Quantum, and Blockchain. He serves on the Technical Advisory Committee of the FCC and on the Spectrum Advisory Board of Ofcom. He is a Fellow of the IEEE, the Royal Academy of Engineering, the Royal Society of Arts (RSA), and the Institution of Engineering and Technology (IET). He is a Top-1% Cited Innovator across all science fields globally. Yonghui Li ([email protected]) is a Professor and Director of the Wireless Engineering Laboratory at the University of Sydney. 
He is the recipient of the Australian Queen Elizabeth II Fellowship in 2008 and the Australian Future Fellowship in 2012. He is a Fellow of IEEE. His research interests are in the areas of millimeter wave communications, machine to machine communications, coding techniques and cooperative communications. Branka Vucetic ([email protected]) is an ARC Laureate Fellow and Director of the Centre of IoT and Telecommunications at the University of Sydney. Her current research work is in wireless networks and IoT. She is a Life Fellow of IEEE, the Australian Academy of Technological Sciences and Engineering and the Australian Academy of Science.
http://arxiv.org/abs/2306.02145v1
20230603161829
Origami-Inspired Composite Springs with Bi-directional Translational-Rotational Functionalities
[ "Ravindra Masana", "Mohammed F. Daqaq" ]
physics.app-ph
[ "physics.app-ph" ]
Origami-Inspired Composite Springs with Bi-directional Translational-Rotational Functionalities Ravindra Masana and Mohammed F. Daqaq July 31, 2023 ==================================================================== § INTRODUCTION Origami is the art of folding paper to create aesthetically pleasing three-dimensional designs. Long before its practice as a craft <cit.>, forms of origami existed in nature <cit.> and were recently uncovered by researchers who turned to the biology and physiology of plants and animals to gain further insights into building multi-functional engineering systems <cit.>. The appearance of origami patterns in nature inspired such researchers to explore origami as a platform for building functional engineering systems with versatile characteristics that cater to niche applications in various technological fields <cit.>. This includes the design and construction of structures with auxeticity <cit.>, multi-stability <cit.>, and programmable stiffness <cit.>. Such structures have already found their way into the design of solar arrays <cit.>, inflatable booms <cit.>, vascular stents <cit.>, viral traps <cit.>, wave guides <cit.>, and robotic manipulators <cit.>. Among the many different available origami patterns, some designs have attracted more attention in engineering applications. For example, the Miura-Ori, a rigid origami pattern[In rigid origami structures, only the creases exhibit deformation during deployment.], has been used to construct three-dimensional deployable structures that have been studied and utilized in applications including space exploration <cit.>, deformable electronics <cit.>, artificial muscles <cit.>, and reprogrammable mechanical metamaterials <cit.>. The Ron Resch pattern, which is a non-periodic rigid origami with unusually high buckling strength, has been used for energy absorption <cit.>. The Yoshimura pattern <cit.> and the Kresling pattern <cit.> are leading examples of non-rigid origami patterns[Non-rigid origami structures exhibit deformation of the panels between the creases during deployment. This incorporates additional (hidden) degrees of freedom that are free from the kinematic constraints that govern rigid folding <cit.>.] which have been utilized to engineer structures with unique properties. For instance, the Kresling pattern has inspired the design of flexible tunable antennas <cit.>, robot manipulators <cit.>, wave guides <cit.>, selectively-collapsible structures <cit.>, vibration isolators <cit.>, fluidic muscles <cit.>, mechanical bit memory switches <cit.>, reconfigurable antennas <cit.>, and crawling and peristaltic robots <cit.>. The Kresling pattern has also been used to build coupled linear-torsional springs coined Kresling Origami Springs (KOSs) <cit.>. Such springs, which take the shape of a cylindrical bellow-type structure, are created by tessellating similar triangles in cyclic symmetry and connecting them as shown in Figure <ref>(a). The triangles in the KOS are connected in a circular arrangement, with each triangle connected to two other triangles along two of its edges. One edge forms a mountain fold, b_0, and the other a valley fold, c_0. The third edges, a_0, of the connected triangles form two parallel polygonal end planes (top and bottom planes). The design of the KOS is characterized by geometric parameters that include the number of sides, n, of the parallel polygons, the radius, R, of the circle that encloses them, and the preloading height, u_0, and rotation angle, ϕ_0, between the end planes. 
Figure <ref>(a) illustrates these parameters. When a Kresling Origami Spring (KOS) is subjected to an axial load or a torque, it undergoes compression or expansion, depending on the direction of the load. As a result, the two parallel polygon planes, while staying rigid, translate along and rotate about a common centroidal axis, as shown in Fig. <ref>(b). This motion causes the triangular panels to deform and store energy in the form of strain energy. Upon removal of the external load, the KOS springs back to its initial configuration, releasing the stored energy. The behavior of this origami-inspired KOS is that of a unique restoring element with axial and torsional functionalities, which can form the foundation for many exciting engineering structures, especially in the fields of rotating machinery <cit.>, haptics <cit.>, and soft robotics <cit.>. However, because of the nature of its coupled kinematics, a single KOS always results in a coupled translational-rotational motion regardless of the type of load applied to it. In other words, the two coordinates, u and ϕ, are always kinematically constrained, resulting in a single degree of freedom. Thus, when one end of the KOS is fixed while the other (free end) is subject to a load, either axial or torsional, the free end undergoes coupled translational-rotational motion. This coupled motion of the free end is not desirable since, in most applications, the free end is usually constrained from rotating when the load is axial, and from translating when the load is torsional. As such, it is desired that the two motions of the free end be decoupled so that the restoring behavior is independent for different types of loads. One way to achieve this goal is to join two KOSs end-to-end in series to form a Kresling Origami Spring Pair (KOSP). Unlike a single KOS, where the two coordinates of the free end are always coupled, the KOSP can either have coupled or decoupled motion, depending on the angle of the creases in its constituent KOSs. When the constituent KOSs are joined in a way that their creases have opposite sign slopes with respect to the horizontal connecting surface, the motion at the free end is decoupled. On the other hand, if the constituent KOSs are connected in a way that their creases have similar sign slopes, motion at the free end remains coupled, but with an extended range of operation. For brevity, we will refer to KOSPs with decoupled motion at the free end as d-KOSPs, while those with coupled motion will be denoted as c-KOSPs. The decoupling of the motion at the free end can be observed by inspecting the d-KOSP in Fig. <ref> (a) (Green). When the bottom polygon is fixed while the upper end is subjected to a prescribed translational motion, u_T, the top polygon does not rotate as the height of the stack is increased (note that the circular blue marker placed on the top polygon does not rotate under the axial loading). The underlying kinematics of the KOSP allows the translational motion of the free end to occur free of rotation by forcing the center polygon connecting both KOSs to undergo rotation and translation as the translation of the free end is taking place. More specifically, the kinematics is such that the module whose crease orientation matches the direction of the external rotation undergoes compression, while the other module undergoes expansion. 
On the other hand, when a prescribed translational motion, u_T, is applied to the top polygon of the c-KOSP (Orange), the top end undergoes coupled rotational-translational motion with an extended range of operation as compared to a single KOS. Similarly, as shown in Fig. <ref> (b), when a prescribed rotational motion, ϕ_T, is applied at the top end of the d-KOSP, the connecting polygon undergoes coupled translational-rotational motion that keeps the total height of the KOSP constant. Videos demonstrating the different scenarios can be seen in supplementary video S1. Another issue with the torsional behavior of a single KOS is that it is uni-directional. This is because the KOS is much stiffer when the applied torque opposes the folding direction of the panels than when it is applied in the same direction. As such, a single KOS always has an asymmetric restoring torque around its equilibrium state, which is not a desirable attribute. On the other hand, the d-KOSP is bi-directional and can be designed to have a symmetric restoring torque around its equilibrium, which is a key advantage over the single KOS. It is therefore the goal of this paper to design and additively manufacture bi-directional tunable springs that have decoupled translational-rotational degrees of freedom at their free end. The restoring force and torque behavior of those springs will be analyzed both numerically and experimentally using functional 3D-printed springs. The number of equilibria, their stability, and bifurcations will also be analyzed as the precompression height of the d-KOSP is varied. The rest of the paper is organized as follows: Section <ref> introduces a simplified truss model, which can be used to study the qualitative quasi-static behavior of the KOS, and uses it to analyze the equilibria of a single KOS. Section <ref> uses the truss model to investigate the quasi-static response behavior of d-KOSPs and analyzes the KOSPs' possible equilibria and their bifurcations as the stack is pre-compressed to different heights. Section <ref> presents an experimental study of the quasi-static behavior of the proposed d-KOSPs and illustrates that the responses are in qualitative agreement with the numerical findings. Finally, Section <ref> presents the key conclusions. § RESTORING BEHAVIOR OF A SINGLE KOS The restoring behavior (force and torque) of a KOS can vary greatly depending on the values of its geometric design parameters. In some cases, this can result in a single equilibrium configuration, while in others, there may be two equilibria. A qualitative understanding of the general behavior of KOSs can be obtained by using an axial truss model, in which each triangle in the KOS is represented by axially-deformable truss elements located at its edges <cit.>, as shown in Fig. <ref>(a).[It is important to note that the truss model is only used in this paper as a qualitative guide for the choice of design parameters that result in different behaviors. More accurate, yet computationally more expensive, models can be found in <cit.>.] In the truss model, the relative position and orientation of the two end planes during deployment can be described by the lengths of the three edges of the triangle, a, b, and c, in terms of the other design parameters as a=2Rsin(π/n), b=√(4R^2sin^2[(ϕ-π/n)/2]+u^2), c=√(4R^2sin^2[(ϕ+π/n)/2]+u^2), where ϕ and u are, respectively, the relative angle and vertical distance between the end planes under loading. 
Assuming that the base of each triangle remains undeformed during deployment, and that the panels do not buckle under compression or encounter self-avoidance at small values of u, the total strain energy stored due to panel deformation can be approximated by Π=nEA/2[(b-b_0)^2/b_0+(c-c_0)^2/c_0], where EA is the axial rigidity of the truss elements. Here, E is the elastic modulus, and A is the cross-sectional area of the truss elements. Moving forward, all the results using the truss model are normalized with respect to the axial rigidity; that is, EA is set to unity. The equilibrium states (u_e, ϕ_e) of the KOS are determined by minimizing the strain energy with respect to u and ϕ. Specifically, Π_u|(u_e,ϕ_e)= Π_ϕ|(u_e,ϕ_e)=0 at any equilibrium state, where Π_u and Π_ϕ represent ∂Π/∂u and ∂Π/∂ϕ, respectively. An equilibrium configuration is considered physically stable only if it corresponds to a minimum in the strain energy, which is satisfied when Π_uuΠ_ϕϕ|(u_e,ϕ_e)-Π_uϕ^2|(u_e,ϕ_e)>0 and Π_uu>0. Figure <ref>(b) illustrates the normalized potential energy function, Π, for a KOS with the design parameters ϕ_0=15^∘, u_0/R=1.65, and n=6. In the figure, the solid lines represent the condition Π_ϕ=0 and Π_ϕϕ>0, i.e., the local minima lines, while the dotted lines represent the condition Π_ϕ=0 and Π_ϕϕ<0, i.e., the local maxima lines. Similarly, the solid line with circular markers represents the condition Π_u=0 and Π_uu>0, and the line with square markers represents the condition Π_u=0 and Π_uu<0. These curves are important because they determine the route taken by the KOS during deployment. When the KOS is subjected to uni-axial loading without any external torque, the KOS follows the curve Π_ϕ=0, whereas it follows the Π_u=0 curve when the KOS is subjected to a torque under no axial loading. Figure <ref>(c) shows the typical potential energy functions of the KOS plotted against an independent variable, u or ϕ. The circular markers at the bottom of the potential energy curves represent the stable equilibria. The quasi-static behavior of the KOSs is highly dependent on the design parameters. Even a small variation in these parameters can lead to significant changes in the behavior of the KOS. The KOS is capable of exhibiting various qualitative restoring force characteristics. These include linear, nonlinear, and quasi-zero stiffness, among others. Depending on the number of stable equilibria that exist, KOSs are typically classified as mono-stable (one stable equilibrium) or bi-stable (two stable equilibria); see Fig. <ref>(c). Figure <ref>(d) shows the design map that demarcates the design space (u_0/R, ϕ_0) of the KOS into mono-stable and bi-stable regions for a KOS with n=6. § KRESLING ORIGAMI PAIRS With the goal of expanding their utilizable space of application, we focus on understanding the quasi-static behavior of a pair of KOSs connected in series as shown previously in Fig. <ref>. The springs can be connected in two ways: either with the slope of the creases having the same sign (c-KOSP) or with the slope of the creases having opposite signs (d-KOSP). When an external torque is applied to the top end of a c-KOSP while the other end is fixed, the top end twists and compresses, resulting in coupled translational-rotational motion. On the other hand, when the same load is applied to a d-KOSP, the top end only undergoes rotational motion without any translation, effectively decoupling the translational from the rotational motions. 
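Before moving to stacked pairs, the single-module energy landscape described above can be explored numerically with a few lines of code. The sketch below evaluates the normalized strain energy Π(u, ϕ) on a grid for the example parameters quoted in the text (n = 6, ϕ_0 = 15°, u_0/R = 1.65, EA = 1) and locates its minimum; the grid search is an illustrative choice rather than the procedure used to produce the figures, and for bi-stable designs a second local minimum appears at a folded configuration.

```python
import numpy as np

def edge_lengths(u, phi, R=1.0, n=6):
    """Deployed edge lengths (a, b, c) of the truss model."""
    a = 2 * R * np.sin(np.pi / n)  # base edge, assumed undeformed
    b = np.sqrt(4 * R**2 * np.sin((phi - np.pi / n) / 2) ** 2 + u**2)
    c = np.sqrt(4 * R**2 * np.sin((phi + np.pi / n) / 2) ** 2 + u**2)
    return a, b, c

def strain_energy(u, phi, u0, phi0, R=1.0, n=6, EA=1.0):
    """Normalized truss strain energy Pi(u, phi) about the rest state (u0, phi0)."""
    _, b0, c0 = edge_lengths(u0, phi0, R, n)
    _, b, c = edge_lengths(u, phi, R, n)
    return n * EA / 2 * ((b - b0) ** 2 / b0 + (c - c0) ** 2 / c0)

# Example parameters from the text: n = 6, phi_0 = 15 deg, u_0/R = 1.65
u0, phi0 = 1.65, np.deg2rad(15.0)
u = np.linspace(0.05, 2.2, 400)
phi = np.deg2rad(np.linspace(-60.0, 120.0, 400))
U, PHI = np.meshgrid(u, phi, indexing="ij")
PI = strain_energy(U, PHI, u0, phi0)

# Local minima of Pi on the grid correspond to the stable equilibria (u_e, phi_e);
# the global minimum sits at the undeformed state, where Pi vanishes.
i, j = np.unravel_index(np.argmin(PI), PI.shape)
print(f"global minimum near u/R = {u[i]:.2f}, phi = {np.degrees(phi[j]):.1f} deg")
```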
To better understand the quasi-static behavior of the KOSP, we consider the truss model of a general, N-module KOS stack. It should be noted that the equations governing the truss model of a single KOS, along with its assumptions, remain applicable to the constituent KOSs. Having said that, the total strain energy stored in an N-module stack of KOSs can be written as Π_T=∑_i^N Π_i = ∑_i^N n_iE_iA_i/2[(b_i-b_i0)^2/b_i0+(c_i-c_i0)^2/c_i0], where b_i=√(4R_i^2sin^2[(ϕ_i-π/n_i)/2]+u_i^2), c_i=√(4R_i^2sin^2[(ϕ_i+π/n_i)/2]+u_i^2). Here, the subscript 'i' refers to the different constituent KOSs in the stack. The variables ϕ_i and u_i are, respectively, the relative angle and the vertical distance between the end planes of the i^th KOS module; u_T=Σ u_i and ϕ_T=Σϕ_i are, respectively, the total height of the stack and the net relative rotation between the stack's two ends; and Π_T is the potential energy of the stack. In this model, clockwise rotations are considered positive and counterclockwise rotations negative. In response to any external loading along or about the longitudinal axis of the stack, the N constituent KOS modules deform, adjusting the ϕ_i's and u_i's so as to counterbalance the external loading with a net restoring force/torque. The new arrangement of the KOS modules in response to the imposed external loads is such that it minimizes the total potential energy. The optimization problem can therefore be posed as: minimize, over the u_i's and ϕ_i's, the total energy Π_T=∑_i=1^N Π_i(u_i,ϕ_i), subject to ∑_i=1^N u_i = u_T, ∑_i=1^N ϕ_i = ϕ_T, u_i^min ≤ u_i ≤ u_i^max, and ϕ_i^min ≤ ϕ_i ≤ ϕ_i^max, where u_i^min, u_i^max, ϕ_i^min, and ϕ_i^max are the minimum and maximum possible coordinates of the i^th constituent. We use Matlab optimization tools to solve this problem, in which, at each iteration for a new set of (u_T,ϕ_T), the program predicts the set of ϕ_i's and u_i's that satisfies the constraints and evaluates the total potential energy, Π_T, using Equation <ref>. §.§ Restoring behavior of d-KOSPs KOSPs formed by stacking KOSs with opposite crease orientations, i.e., d-KOSPs, are of importance since, as described above, they offer bi-directional functionalities and decouple the motion at the free end. Thus, we dedicate this section to studying their quasi-static restoring torque behavior, equilibria, and the bifurcation of those equilibria as the pre-compressed height of the stack, u_T, is varied. At each step of the analysis, the total height of the stack is changed and the potential energy function, restoring torque, and equilibria of the KOSP are calculated using the algorithm described in Section <ref>. Figure <ref> depicts such results for a twin d-KOSP consisting of two similar KOSs, each having the design parameters u_0/R=1.1, ϕ_0=65^o, and n=6, and the restoring torque behavior shown in Fig. <ref>(a). As can be clearly seen, the restoring torque of the single KOS is asymmetric and bi-stable with a uni-directional tendency. As evident in the bifurcation diagram shown in Fig. <ref>(b), the d-KOSP has a single equilibrium point at ϕ_T=0 for pre-compressed heights 1.98<u_T/R<2.5 (solid green line). Thus, the potential energy function is mono-stable and symmetric, as shown in Fig. <ref>(c) for u_T/R=2.2. The restoring torque of the d-KOSP is nearly linear, which is ideal for applications where linearity and symmetry under loading are key for performance. At u_T/R=2, the potential energy becomes almost flat for any prescribed rotation near the equilibrium point, Fig. <ref>(c). 
Thus, the stiffness becomes nearly zero around the equilibrium point, resulting in quasi-zero-stiffness (QZS) behavior. Such spring characteristics are ideal for the design of broadband vibration absorbers and energy harvesters <cit.>. Near u_T/R=1.95, the only stable equilibrium point of the d-KOSP loses stability through a super-critical pitchfork bifurcation (Sup-crit. P) and gives way to two stable equilibria on either side of the original equilibrium. Thus, the KOSP becomes of the symmetric bi-stable type. This is evident in the shape of the potential energy function shown in Fig. <ref>(e) for u_T/R=1.5. It can be clearly seen that the potential energy function has two minima separated by a local maximum at ϕ_e=0, which is characteristic of a symmetric bi-stable potential. The associated restoring force exhibits a negative stiffness at ϕ_e=0 and positive stiffness for large values of ϕ_T. Such bi-stable springs are key to the design of bi-stable mechanical switches and energy harvesters. As u_T is decreased further, the two stable equilibrium branches diverge and the potential wells get deeper, causing the magnitude of the negative local stiffness to increase. As such, it becomes more difficult to force the spring to move from one of its equilibria to the other. At precisely u_T/R=1.1, one of the KOSs in the stack becomes fully compressed, while the other is at its undeformed state. This is usually referred to in the literature as a self-contact point or as panel self-locking. The result is that the potential energy and the stiffness increase sharply and suddenly at these points, as can be clearly seen in Fig. <ref>(f). In essence, these points represent the limit of the d-KOSP operation. Any prescribed rotation of the d-KOSP beyond this point would only deform the panels, a process which requires high strain energy. Decreasing u_T further below u_T/R=1.1, the non-trivial equilibria continue to follow the orange dotted curve, which marks the self-contact points of the KOSP. At about u_T/R≈ 0.97, the unstable equilibrium point, represented by square markers in Fig. <ref>(b), regains stability through a sub-critical pitchfork bifurcation (Sub-crit. P), and the KOSP becomes tri-stable, as can be seen in Fig. <ref>(g). The KOSP remains tri-stable for a very short range of u_T, before the two non-zero stable solutions collide with the unstable solution and annihilate each other in a fold bifurcation (Fold) at u_T/R≈ 0.74. Beyond this point, the KOSP becomes mono-stable again with the trivial position, ϕ_e=0, being the only equilibrium point, as can be seen in Fig. <ref>(h). In Fig. <ref>(a,b), we generalize the bifurcation diagram shown in Fig. <ref>(b) into bifurcation maps that demarcate the design space of u_T/R and u_0/R into different domains based on the number of stable equilibria that the twin d-KOSP possesses; i.e., which design parameters lead to mono-stable behavior, and which ones lead to bi- or tri-stable behavior. The different colored regions represent the number of stable equilibria (as labeled in the figure) in that configuration. Those maps are generated for modules with two different values of ϕ_0, namely ϕ_0=45^o and ϕ_0=65^o. For the most part, we can see that the mono-stable behavior is the easiest to realize, followed by the bi-stable behavior, then the tri-stable one, and that the region of design parameters leading to tri-stable behavior shrinks when ϕ_0 is increased. 
Moreover, in both cases, tri-stability is very difficult to achieve when the KOS modules are mono-stable as compared to when the KOSP is designed using bi-stable KOS modules. Figures <ref>(c) and (d) show similar maps for u_0/R = 1.1 and 1.65, respectively, with ϕ_0 being the bifurcation parameter. It can be clearly seen that larger values of u_0/R allow for larger regions in the design space to construct bi- and tri-stable KOSPs. One interesting observation resulting from the numerical analysis is that the d-KOSP can be designed to become mono-stable, bi-stable, or even tri-stable irrespective of the stability type of its constituents. The symmetric response of the twin-module d-KOSP is a feature that is most often desired in designing springs, but it is not a constraint if an asymmetric restoring force response is desired instead. Asymmetry can be easily achieved by constructing the d-KOSP using two different KOSs. For instance, in Fig. <ref>(b), we plot the bifurcation diagram for a d-KOSP constructed by combining the two different bi-stable KOSs whose restoring behavior is shown in Figs. <ref>(a,c); namely, KOS1: u_0/R=1.65, ϕ_0=45^o, and n=6, and KOS2: u_0/R=1.875, ϕ_0=60^o, and n=6. A first glance reveals that the bifurcation diagram is more complex and is no longer symmetric around ϕ_e=0. At the uncompressed height, i.e., u_T/R=1.65+1.875=3.525, there is a net offset of 60^o-45^o=15^o in the equilibrium rotation angle of one end relative to the other end of the KOSP. The potential energy function is mono-stable despite both constituents being bi-stable, and the restoring force is of the nonlinear hardening type, as shown in Fig. <ref>(d). When the KOSP is pre-compressed, the force deforms the two springs differently since they have different stiffnesses. The softer spring, here KOS2, undergoes compression and rotation first under the applied load. In the process of compression, KOS2 gains stiffness up to the point u_T/R=3.2, where it becomes stiffer than KOS1. At this point, KOS1 starts to deform and a new equilibrium point is born, causing the potential energy to become bi-stable and asymmetric with two equilibrium angles occurring at ϕ_e1=-28^o and ϕ_e2=62^o; see Fig. <ref>(e) obtained at u_T/R=3.152. The bi-stable asymmetric behavior persists down to u_T/R ≈ 2.5, where the spring behavior becomes nearly of the asymmetric quasi-zero-stiffness type; see Fig. <ref>(f). Subsequently, the behavior of the springs becomes very complex, as shown in Figs. <ref>(g) and (h) for u_T/R=2.17 and u_T/R=1.67, respectively. The bifurcation diagram also reveals that tri-stable behavior cannot be achieved using this combination of spring modules. § EXPERIMENTS §.§ Fabrication Numerical simulations have revealed that the modularity of the KOS can be used to construct functional KOSPs with unique and made-to-order restoring characteristics. The operating range can be increased and the restoring force/torque further tuned by stacking a larger number of unit springs (N≥2). To employ these desirable characteristics in a realistic environment, such springs must be durable, and their manufacturing process must be systematic and repeatable. Thus, relying on paper folding is obviously not the optimal approach. In a recent article <cit.>, we used 3D printing to produce KOS modules, demonstrating exceptional functionality, repeatability, and high durability. 
In the proposed design, the basic triangles of each KOS were modified to allow for easy folding and stretching at the panel junctions while still retaining enough stiffness to conform to the Kresling origami pattern and withstand loading. The fabrication process used the Stratasys J750 3D printer with the polyjet method, which utilized two different materials for each panel. The central core of each panel was made of a rigid plastic polyjet material called Vero, while the outer frame was made of a flexible rubber-like polyjet material called TangoBlackPlus. The flexibility of the outer frame enables folding and stretching at the interfaces. Fig. <ref>(a) shows an example of the fabricated KOS module. The new design further introduced additional geometric parameters, namely the width of the flexible material, w, and the thickness of the panels, t. These are important parameters that provide additional freedom in designing the KOS modules. Each KOS is reinforced using two end plates with circular holes that are concentric with the longitudinal axis of the KOS. These plates are added to increase the stability of the KOS, and to prevent damage under unwarranted non-axial loads. The holes allow air to escape during deployment. Using the manufacturing approach proposed in our previous article <cit.>, we construct KOSPs for experimental testing. Similar and/or different KOSs are used interchangeably to form various stack combinations with different crease orientations. §.§ Experimental Testing The experimental portion of this study involves two phases: axial testing and torsional testing. The axial tests are performed to determine the restoring force behavior of the KOS under compressive and tensile loads. These tests are conducted using an Instron Dual Column 5960 universal testing machine. A controlled fixed-rate displacement of 0.2 mm/s is applied to the top end of the KOS while the bottom end is placed on a specially designed platform that can freely rotate about a common centroidal axis, as shown in Fig. <ref>(a). The restoring force is measured using a load cell, and the rotation of the bottom end is tracked using digital image correlation (DIC) tools. During the torsional tests, the KOS is subjected to cyclic torsional loading at a controlled rate, including clockwise and anticlockwise rotations, using an Instron MicroTorsion MT1 machine, as depicted in Fig. <ref>(b). One end of the KOS is clamped using a chuck that is connected to a motor, while the other end is placed on a sliding bearing that leaves the longitudinal motion free while the rotary motion is restrained. For the torsional tests of the KOSPs, we require the total length of the KOSP, u_T, to be fixed. Thus, we remove the sliding bearing and prevent that end from rotating or translating. A torque cell is used to measure the applied torque at the fixed end, and the relative longitudinal displacement is monitored using DIC tools. It is important to note that the results presented in this work are based on a controlled rotational rate of 20^o/min; other rotational rates ranging from 10^o/min to 100^o/min were also tested, but the effect of the rate on the response was found to be negligible. Furthermore, the entire test setup is placed in the horizontal plane to eliminate the influence of gravity on the quasi-static responses, which is crucial since the KOS is free to slide during axial testing. 
It is important to mention that five samples with the same design parameters are tested in both the axial and torsional testing of the KOS modules, and the average of the responses under compression, tension, clockwise, and anticlockwise rotation is recorded. The total potential energy is calculated by integrating the measured restoring force across the prescribed displacement in the case of uni-axial testing and integrating the measured torque over the prescribed rotation in the case of torsional testing. §.§ Experimental Results We start by testing the quasi-static torsional behavior of a KOS with the geometric parameters u_0/R=1.875, ϕ_0=60^o, R=15 mm, n=6, t=0.75 mm, and w=1.5 mm, as depicted in Fig. <ref>(a). Our goal is to first understand the behavior of the unit cell forming the KOSP. To achieve this objective, we tested the KOS samples using the Instron torsion testing machine, which was configured to maintain zero axial loading on the KOSs throughout the test. During the test, we prescribed the rotation angle, ϕ, at one end of the module and recorded both the torque and the instantaneous height of the structure, u. Positive rotation caused compression in the KOS module, while negative rotation resulted in expansion. To prevent the module from suffering permanent deformation or damage, we determined the limits of rotation, including the stretch limit and compression limit. Here, the stretch limit, ϕ_s, refers to the point at which ∂u/∂ϕ becomes large in uni-axial testing, while the compression limit, ϕ_c, is the point before delamination begins to occur. For the considered KOS, ϕ_s=48.5^o while ϕ_c=147.5^o, clearly demonstrating one of the key disadvantages of the single KOS design, which lies in the fact that it reaches its stretch limit much faster than its compression limit due to its inherent kinematics. Figure <ref>(a) illustrates the restoring torque and the calculated potential energy function for this KOS. It is evident that the restoring torque is asymmetric, with the KOS exhibiting a single equilibrium point at the undeformed state ϕ_e=60^o, where the restoring torque is zero. As the angle is increased, the restoring torque monotonically increases up to ϕ=71^o, after which it starts to decrease, resulting in negative torsional stiffness up to ϕ=89^o. Thereafter, the KOS loses much of its load-carrying ability and the potential energy forms a plateau that extends up to ϕ=110^o. Note that the restoring torque approaches, but never crosses, zero; thus, there is no other equilibrium point and the spring is mono-stable. On rotating further, the KOS begins to get stiffer and the triangular panels begin to interact and avoid each other near ϕ=130^o. On the other hand, upon expansion of the KOS from its undeformed state, the stiffness rapidly increases and the KOS quickly reaches its stretch limit of ϕ_s=48.5^o. Next, we investigate the restoring behavior of a twin d-KOSP constructed using a pair of the tested KOSs, u_T/R=2×1.875=3.75. Following the testing procedure described in Section <ref>, the restoring torque is measured as shown in Fig. <ref>(b). It is clearly evident that, for this value of u_T/R, the restoring torque of the spring is nearly symmetric and linear around ϕ=0 despite each KOS forming the stack being highly asymmetric and nonlinear. When the spring is precompressed to a height of u_T/R=3.36, the spring becomes bi-stable with two equilibrium points (ϕ_e1=-53^o and ϕ_e2=52^o), as shown in Fig. <ref>(c). 
The restoring force is bi-stable and nearly symmetric despite the constituents being highly asymmetric and mono-stable. Further decrease of the height to u_T/R=3.05 decreases the depth of the potential wells, and the restoring force becomes nearly of the QZS type, as shown in Fig. <ref>(d). To generate the full bifurcation diagram for different values of u_T/R without having to repeat this experiment a large number of times, we use a numero-experimental interpolation approach. The approach employs an algorithm similar to the one described in the numerical analysis, Equation <ref>, to minimize the total potential energy of the KOSP structure during its operation. However, here, instead of evaluating the potential energy at the iteration variables (u_T,ϕ_T), we interpolate the potential energy of the modules from the experimental data used in Fig. <ref>(a). As with the simulation of truss KOSPs, we initially set the length of the stack, u_T, to a certain value and then iteratively change the rotation angle, ϕ_T. Then the algorithm makes an initial calculated guess (feasible values) of ϕ_1 and ϕ_2, such that they are bounded within the stretch and compression limits of the modules and also satisfy ϕ_1+ϕ_2=ϕ_T. Using the experimental data, the program evaluates u_1 and u_2 and verifies that u_1+u_2=u_T. Once this is verified, the program interpolates Π_1 and Π_2 for ϕ_1 and ϕ_2 and evaluates the net potential energy Π_T. The optimization tool then uses fmincon in Matlab to iteratively search for the minimum of Π_T by moving towards the feasible region of the objective function. The potential energy functions of the KOSP obtained using the algorithm are compared to their experimental counterparts, as shown in Fig. <ref>(a), for different values of u_T/R. Here, the solid lines represent the actual experimental findings, while the dotted lines represent those attained using the aforedescribed algorithm. It can be clearly seen that there is a general qualitative agreement, which could be used to predict an experimental bifurcation diagram of the KOSP as a function of u_T/R. This diagram is shown in Fig. <ref>(b), demonstrating the ranges of u_T/R for which the KOSP has mono- versus bi- or even tri-stable behavior. The surface plot shows the potential energy (strain energy), and the stable and unstable equilibria are represented by a green line and red square markers, respectively. The white region represents the values of u_T and ϕ_T that lie outside the operation range of the KOSP, i.e., the stretch and compression limits of the two constituent modules. Generally, we can see that a d-KOSP constructed from an identical pair of KOSs possesses a symmetric potential function that allows for a similar response to both clockwise and counter-clockwise rotations, which is not possible with a single KOS. Additionally, d-KOSPs exhibit an increased range of operability in both rotation and longitudinal deployment. The stiffness of d-KOSPs can be tuned or programmed, including quasi-zero stiffness, providing great control over their behavior for specific applications. Figure <ref> shows the quasi-static response behavior for a KOSP consisting of two different KOSs. KOS 1 is the mono-stable spring studied in Fig. <ref>, while KOS 2 is bi-stable with u_0/R=1.65, ϕ_0=45^o, R=15 mm, and n=6. The restoring torque and potential energy of KOS 2 are shown in Fig. <ref>(a). The two equilibria of KOS 2 occur at ϕ_e1=45^o and ϕ_e2=116^o. 
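Before turning to this mixed d-KOSP, it is worth noting that the numero-experimental interpolation procedure described above reduces to a short script. The sketch below is a schematic Python analogue of the Matlab/fmincon search rather than the authors' implementation: the file names are placeholders for the measured single-module curves, ϕ_2 is eliminated through ϕ_1 + ϕ_2 = ϕ_T, and only splits whose interpolated heights satisfy u_1 + u_2 = u_T (within a tolerance) are retained; among these, the lowest-energy configuration is taken.

```python
import numpy as np

# Measured single-module curves at zero axial load, as in Fig. <ref>(a):
# columns are phi [deg], u/R, and Pi, sorted by phi. File names are placeholders.
mod1 = np.loadtxt("kos1_torsion.csv", delimiter=",")   # shape (N, 3)
mod2 = np.loadtxt("kos2_torsion.csv", delimiter=",")

def stack_energy(phi_T, u_T, mod1, mod2, tol=0.02):
    """Lowest feasible pair energy at a prescribed (phi_T, u_T).

    Candidate splits satisfy phi_1 + phi_2 = phi_T; a split is feasible when the
    interpolated heights give u_1(phi_1) + u_2(phi_2) = u_T within tol.
    """
    phi1 = mod1[:, 0]
    phi2 = phi_T - phi1
    u_sum = mod1[:, 1] + np.interp(phi2, mod2[:, 0], mod2[:, 1])
    Pi_T = mod1[:, 2] + np.interp(phi2, mod2[:, 0], mod2[:, 2])
    feasible = np.abs(u_sum - u_T) < tol
    return np.nanmin(np.where(feasible, Pi_T, np.nan)) if feasible.any() else np.nan

# Sweeping phi_T at a fixed stack height traces the interpolated potential energy
# of the pair; its local minima are the equilibria entering the bifurcation diagram.
u_T = 3.36
phi_sweep = np.linspace(-90.0, 90.0, 181)
Pi_curve = np.array([stack_energy(p, u_T, mod1, mod2) for p in phi_sweep])
```

A gradient-based solver (as in the fmincon approach) would refine these grid estimates, but the brute-force scan above already conveys how the experimental module data are combined into a pair-level energy landscape.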
The d-KOSP constructed using those springs is experimentally tested at different values of u_T/R, and the restoring torque responses are recorded. Fig. <ref>(b) shows those responses for the uncompressed height u_T/R=3.525, where the potential energy function is mono-stable but asymmetric, with a single equilibrium point occurring near ϕ_e=15^o. The restoring force is weakly nonlinear with a slight hardening behavior near the stretch and compression limits. When the KOSP is precompressed to a height of u_T/R=3.2, the KOSP becomes bi-stable and asymmetric, with the left potential well being much deeper than the right one. The right potential well becomes shallower as the KOSP is precompressed further, up to the point where the second minimum in the potential energy function disappears near u_T/R=2.7, Fig. <ref>(d). In this case, the restoring force becomes nearly QZS around ϕ_T=25^o. As the KOSP is precompressed further to u_T/R=2.5, the spring becomes of the QZS type but with asymmetric characteristics, Fig. <ref>(e). Figure <ref>(f) depicts the interpolated bifurcation diagram, which reveals that the d-KOSP will always exhibit an asymmetric response, with the characteristics being of the mono-stable type for large and small precompression heights and of the bi-stable type for intermediate ones. § CONCLUSION This paper focuses on the use of serially connected Kresling Origami Springs (KOSs) to design bi-directional programmable springs that offer decoupled translational and rotational degrees of freedom. The behavior of these springs is investigated both numerically, using a truss model, and experimentally, using functional 3D-printed springs. The study reveals that, by varying the precompressed height of the KOSPs, interesting bifurcations of the static equilibria emerge, leading to mono-, bi-, tri-stable, and QZS restoring elements with either symmetric or asymmetric restoring behavior. This is unlike single KOSs, whose translational and rotational degrees of freedom are always coupled and which cannot be designed to have tri-stable behavior or to possess a symmetric restoring force behavior. The availability of such springs opens up new avenues for developing restoring elements with programmable responses to external stimuli, which could lead to innovative applications in various fields, such as robotics and energy storage <cit.>. The results of this study contribute to the expanding body of knowledge on the potential use of origami principles for engineering applications. The findings also provide new insights into the behavior of KOSPs and offer a foundation for future research in developing multifunctional materials using the Kresling origami pattern. § ACKNOWLEDGEMENTS We thank Core Technology Platforms at New York University Abu Dhabi (NYUAD) for providing their resources and support during fabrication and testing of the KOSs. Parts of this work were supported by the NYU-AD Center for Smart Engineering Materials, which is under the full support of Tamkeen under NYUAD RRC Grant No. CG011. § DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request.
http://arxiv.org/abs/2306.04052v2
20230606225330
Nuclear Spin-Depleted, Isotopically Enriched 70Ge/28Si70Ge Quantum Wells
[ "O. Moutanabbir", "S. Assali", "A. Attiaoui", "G. Daligou", "P. Daoust", "P. Del Vecchio", "S. Koelling", "L. Luo", "N. Rotaru" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci", "physics.app-ph", "quant-ph" ]
[email protected] Department of Engineering Physics, École Polytechnique de Montréal, Montréal, C.P. 6079, Succ. Centre-Ville, Montréal, Québec, Canada H3C 3A7 The p-symmetry of the hole wavefunction is associated with a weaker hyperfine interaction as compared to electrons, thus making hole spin qubits attractive candidates to implement long-coherence quantum processors. However, recent studies demonstrated that hole qubits in planar germanium (Ge) heterostructures are still very sensitive to the nuclear spin bath. These observations highlight the need to develop nuclear spin-free Ge qubits to suppress this decoherence channel and evaluate its impact. With this perspective, this work demonstrates the epitaxial growth of ^73Ge-depleted, isotopically enriched ^70Ge/SiGe quantum wells. The growth was achieved by reduced pressure chemical vapor deposition using isotopically purified monogermane ^70GeH_4 and monosilane ^28SiH_4 with an isotopic purity higher than 99.9 % and 99.99 %, respectively. The quantum wells consist of a series of ^70Ge/SiGe heterostructures grown on Si wafers using a Ge virtual substrate and a graded SiGe buffer layer. The isotopic purity is investigated using atom probe tomography following an analytical procedure addressing the discrepancies in the isotopic content caused by the overlap of isotope peaks in mass spectra. The nuclear spin background in the quantum wells was found to be sensitive to the growth conditions. The lowest concentration of the nuclear spin-full isotopes ^73Ge and ^29Si in the heterostructure was established at 0.01 % in the Ge quantum well and SiGe barriers. The measured average distance between nuclear spins reaches 3-4 nm in ^70Ge/^28Si^70Ge, which is an order of magnitude larger than in natural Ge/SiGe heterostructures. Nuclear Spin-Depleted, Isotopically Enriched ^70Ge/^28Si^70Ge Quantum Wells O. Moutanabbir, S. Assali, A. Attiaoui, G. Daligou, P. Daoust, P. Del Vecchio, S. Koelling, L. Luo, and N. Rotaru July 31, 2023 =========================================================================== § INTRODUCTION Although it was quickly relegated behind silicon (Si) because of its relatively low bandgap energy, its lack of a stable oxide, and its large surface state densities, germanium (Ge) is inarguably the material that catalyzed the transition from what W. Pauli and I. Rabi called the "Physics of Dirt" <cit.> to modern-day semiconductor physics and technology <cit.>. 
Indeed, the ease with which Ge could be purified and processed led to the demonstration of point contact diode mixers for radar reception <cit.> and of the point contact and junction transistors <cit.>. These inventions contributed to laying the groundwork for what was later coined as the first quantum revolution. In recent years, there has been a revived interest in Ge-based materials for integrated photonic circuits <cit.>, sensing <cit.>, high-mobility electronics <cit.>, and solid-state quantum computing <cit.>. The latter, for instance, aims at capitalizing on the advantageous quantum environment of holes in Ge, their inherently large and tunable spin-orbit interaction (SOI), and their reduced hyperfine coupling with nuclear spins to implement increasingly robust and reliable spin qubits <cit.>. Indeed, these quantum devices are now considered forefront candidates for scalable quantum processors <cit.>. This recent surge in developing Ge qubits makes one think that Ge may also be a key material in shaping the anticipated second quantum revolution. From a fundamental standpoint, the hyperfine interaction is expected to be weaker for holes than for electrons due to the p-symmetry of the hole wavefunction. However, theoretical investigations suggested a hyperfine coupling that is only one order of magnitude smaller than that of electrons <cit.> or of a strength equal to that in Si <cit.>. Moreover, the p-symmetry and d-orbital hybridization of the hole wavefunction lead to an anisotropic hyperfine coupling that is non-existent for electron spins <cit.>. Interestingly, recent experimental studies hint at the sensitivity of hole spin qubits in planar Ge/SiGe heterostructures to the nuclear spin bath, reporting an amplitude of the fluctuating Overhauser field of 34.4 kHz, which is suggested to limit spin dephasing times <cit.>. Although charge noise is believed to be the dominant decohering process, these observations call for the development of nuclear spin-free Ge qubits to elucidate their sensitivity to hyperfine coupling. Undertaking this research direction requires Ge-based quantum devices that are depleted of ^73Ge, which is the only nuclear spin-full stable Ge isotope. This work addresses this very issue and provides a demonstration of the epitaxial growth of isotopically purified ^70Ge quantum wells (QWs). Note that enriched ^70Ge, ^74Ge, and ^76Ge isotopes were employed in the past to grow superlattices and self-assembled quantum dots by solid-source molecular beam epitaxy <cit.>. Herein, the growth of ^73Ge-depleted QWs is achieved with hydride precursors using the chemical vapor deposition (CVD) method, which is broadly adopted in Ge device research besides being compatible with the processing standards of the semiconductor industry <cit.>. § EXPERIMENTAL The epitaxial growth of isotopically engineered Ge/SiGe QW heterostructures was carried out on hydrogen-passivated 4-inch (001)-oriented Si wafers in a reduced-pressure CVD reactor using isotopically purified monogermane ^70GeH_4 (isotopic purity >99.9 %) and monosilane ^28SiH_4 (isotopic purity >99.99 %). The precursors were enriched in a centrifugal setup using natural monogermane (^natGeH_4) and SiF_4 as starting gases <cit.>. After purification, ^70GeH_4 contains traces (<0.006 at.%) of other Ge isotopes: ^72Ge, ^73Ge, ^74Ge, and ^76Ge. Moreover, chemical contaminants, including other hydrides, are also negligible, with an average content of <0.06 µmol/mol. 
Reference Ge/SiGe QW heterostructures were also prepared following the same growth protocol using conventional precursors with natural isotopic abundance (^natGeH_4 and disilane ^natSi_2H_6). After annealing in hydrogen, a ∼3 µm-thick Ge interlayer, commonly known as a Ge virtual substrate (Ge-VS), was grown on Si using ^natGeH_4 and a two-step growth process in the 450-600 °C temperature range. This was followed by a thermal cyclic annealing step (725-875 °C) to improve the Ge-VS quality. A reverse-graded ∼1 µm-thick Si_1-xGe_x layer was then grown at 600 °C using ^natGeH_4 and ^natSi_2H_6 until a uniform Si content of 18 at.% was reached. Without interrupting the growth, the ^natGeH_4 supply was switched to the purified ^70GeH_4 to grow the first Si_1-xGe_x barrier layer (BR1), while keeping all other growth parameters unchanged. The thickness and composition of BR1 were varied in the ∼0.3-1 µm and x = 0.15-0.18 ranges to investigate the effect of the growth time on the isotopic purity of the epilayers. After the growth of BR1 was completed, the reactor was purged in hydrogen for 90 s before growing the ^70Ge QW layer using the ^70GeH_4 supply for a variable growth time of up to 40 s. Next, the reactor was purged in hydrogen for 90 s prior to the growth of the Si_1-xGe_x BR2 layer under growth conditions identical to those of BR1. Lastly, a Si capping layer a few nm in thickness was grown. Fig. 1a illustrates the grown stacks. Ge/Si_0.18Ge_0.82 (A), ^70Ge/Si_0.18Ge_0.82 (B), and ^70Ge/Si_0.15Ge_0.85 (C) QWs were grown using this protocol. The ^70Ge/^28Si_0.15^70Ge_0.85 (D) QW was grown following a similar protocol, except that BR1 and BR2 were grown by changing from ^natSi_2H_6 to ^28SiH_4 and adjusting the growth conditions to accommodate the change in precursor decomposition. Several characterization techniques were employed to elucidate the basic properties of the as-grown heterostructures and investigate their isotopic content. Lattice strain and average composition in the Ge/SiGe heterostructures were evaluated from X-ray diffraction (XRD) measurements, including reciprocal space map (RSM) analysis. The microstructure of the grown materials was investigated by transmission electron microscopy (TEM) and scanning TEM (STEM). The quality of the interfaces, the atomic-level composition, and the isotopic purity were investigated using atom probe tomography (APT). Additional insights into the chemical and isotopic compositions were also obtained using secondary ion mass spectrometry (SIMS). Raman scattering spectroscopy was employed to evaluate the effects of the isotopic content on phonon scattering in the Ge QWs. Additionally, the uniformity of the growth thickness as well as the optical signature of quantum confinement were investigated using spectroscopic ellipsometry (SE). § RESULTS AND DISCUSSION A cross-sectional STEM image of a representative isotopically engineered Ge QW heterostructure is shown in Fig. 1b, while an enlarged view of the ^70Ge/Si_0.15^70Ge_0.85 QW region is displayed in Fig. 1c. The figure shows an 18 nm-thick ^70Ge QW together with the BR1 and BR2 layers, with thicknesses of 290 nm and 28 nm, respectively. The transition between the SiGe barrier layers and the ^70Ge QW occurs over 1-2 nm. To evaluate the structural quality of the heterostructures, cross-sectional TEM images were acquired (Fig. 1d). The extended defects are confined to the Si/Ge-VS and Ge-VS/Si_1-xGe_x interfaces, with no defects detected in the QW region at the TEM imaging scale.
XRD-RSM (224) analysis of the as-grown heterostructures shows sharp peaks for the SiGe buffer/Ge virtual substrate and the barriers, as well as the signature of the strained 18 nm-thick ^70Ge layer, indicating an excellent degree of crystallinity across the structure (Fig. 1(e)). Here, the variation in composition between the natural and purified SiGe layers (Si_0.18Ge_0.82 vs. Si_0.15Ge_0.85, as determined by APT) is related to the difference in composition between the germane precursor supplies. A first glimpse into the isotopic content of the as-grown QWs was obtained from Raman spectroscopy studies. Fig. 2(a) shows Raman spectra around the Ge-Ge LO mode recorded for a set of QWs grown with variable growth times between 4 and 40 s, corresponding to a 3-30 nm thickness range. The spectra indicate the presence of two distinct modes. The first is centered around 293.7 cm^-1 and corresponds to the Ge-Ge LO mode in the SiGe barrier, whereas the second peak, at 305.3 cm^-1, is attributed to the same mode in the ^70Ge QW. This assessment is consistent with the observed increase in the second peak intensity as the QW thickness increases. Note that the Ge-Ge mode in the ^natGe QW is detected at 300.1 cm^-1, as demonstrated in Fig. 2(b), which compares two otherwise identical ^natGe and ^70Ge QW samples. The observed shift between the two samples is analyzed based on the quasi-harmonic approximation, which is valid for semiconductors at room temperature <cit.>. According to the virtual crystal approximation, a simple harmonic analysis predicts that the energy of a phonon mode is inversely proportional to the square root of the average isotopic mass. The average isotopic mass is given by ⟨m⟩ = Σ_i c_i m_i, with c_i being the fractional composition of an isotope of mass m_i. Knowing that the atomic mass of ^natGe is 72.63 amu, the measured wavenumbers of the Ge-Ge LO mode in the sets of QWs yield an average atomic mass in the ^70Ge QW lattice of 70.17 amu, corresponding to at least 99.6% enrichment in the ^70Ge isotope. As discussed below, the growth protocol has a strong effect on the isotopic purity of the QW. It is important to mention that the limited spectral resolution (∼1 cm^-1) of the Raman setup used does not allow addressing the effect of isotopic purification on lattice disorder <cit.>. Nevertheless, it is reasonable to conclude that the similarity observed in the full width at half maximum of the Ge-Ge peaks in ^natGe and ^70Ge is indicative of a similar crystalline quality, which is consistent with the XRD and TEM studies. To further assess the quality of the grown QWs, SE studies were carried out on ^70Ge QW samples. For these studies, reference samples consisting of the same grown layers but without BR2 were also prepared and investigated. Fig. 2(c) displays the measured spectra for the 18 nm ^70Ge QW and the associated reference material. The figure shows the imaginary dielectric function (left) and the critical point (CP) analysis of the measured dielectric function. The lineshape of the dielectric function of both heterostructures carries insights into the quantum confinement in the ^70Ge QW. Note that the penetration depth of the incident excitation near the E_1 CP is around 20-35 nm for bulk Ge. If one considers a limited spectral range between 1.5 and 3 eV, the effect of the underlying materials (SiGe buffer, Ge-VS, and Si substrate) can be neglected, as the incident light will not reach and excite them.
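The average-mass estimate above follows from a one-line calculation. The sketch below (a minimal illustration, not the authors' analysis code) reproduces the ~70.2 amu value from the quoted Ge-Ge LO wavenumbers and the ^natGe average mass of 72.63 amu, and runs the same relation forward to predict the mode position for a given average mass.

```python
# Quasi-harmonic / virtual-crystal estimate: omega is proportional to 1/sqrt(<m>),
# so <m>_QW = <m>_nat * (omega_nat / omega_QW)**2.
m_nat = 72.63        # average atomic mass of natural Ge (amu)
omega_nat = 300.1    # Ge-Ge LO mode measured in the natGe QW (cm^-1)
omega_qw = 305.3     # Ge-Ge LO mode measured in the 70Ge QW (cm^-1)

m_qw = m_nat * (omega_nat / omega_qw) ** 2
print(f"average isotopic mass in the 70Ge QW ~ {m_qw:.2f} amu")  # ~70.2 amu


def lo_mode(m_avg, m_ref=m_nat, omega_ref=omega_nat):
    """Predicted Ge-Ge LO wavenumber (cm^-1) for an average isotopic mass m_avg (amu)."""
    return omega_ref * (m_ref / m_avg) ** 0.5


# Forward prediction for a fully enriched 70Ge lattice (69.92 amu):
print(f"predicted LO mode for pure 70Ge: {lo_mode(69.92):.1f} cm^-1")
```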
Consequently, only the top three layers (^70Ge QW, BR2, and Si cap) should in principle contribute to the measured dielectric function (Fig. 2(c)). Moreover, the contribution of the ∼3-5 nm-thick Si cap can be excluded from the analysis, as the E_1 CP of Si is located around 3.4 eV <cit.>, which is outside the measured spectral range. The second derivative of the dielectric function of the two samples (with and without the top barrier BR2) is displayed in Fig. 2(d). To unravel the electronic structure of the analyzed heterostructure, the measured data were fitted using a generic critical point parabolic band model <cit.>. The CP energy of the ^70Ge layer without a barrier is evaluated at 2.156 eV, which is close to the bulk Ge CP of 2.134 eV <cit.>, whereas for the ^70Ge QW sample a blueshift is observed, yielding a CP energy of 2.233 eV. More importantly, the qualitative difference between the two dielectric functions at 2.17 eV is clear. Indeed, the CP lineshape changes drastically from 2D Van Hove singularities in the reference structure (green dots) to a discrete excitonic lineshape in the ^70Ge QW (blue dots). This observed change in CP lineshape and energy is indicative of quantum confinement and its associated narrowing of the optical transition in Ge <cit.>. In the following, the isotopic content of the grown QWs is discussed based on APT studies. Fig. 3(a) shows a representative 3D 30 × 30 × 30 nm^3 atom-by-atom APT map of a ^70Ge QW. The map indicates that the QW region contains mainly the ^70Ge isotope, but traces of other isotopes can also be seen. Before quantifying and discussing the level of these contaminants, the recorded mass spectra are described first, as shown in Fig. 3(b,c). The figures exhibit the mass spectra recorded for a set of four QW samples labeled A, B, C, and D, as illustrated in Fig. 1(a). These samples were grown under different conditions. In sample A, the QW was grown using ^70GeH_4, whereas the SiGe barriers were grown using ^natGeH_4. In the other three samples, the growth of both the barriers and the QW was conducted using ^70GeH_4. However, the change from ^natGeH_4 to ^70GeH_4 occurred during the growth of the underlying SiGe layer at a variable thickness from the interface with the QW: 290 nm (B), 1000 nm (C), and 1890 nm (D). This means that the change from ^natGeH_4 to ^70GeH_4 took place at different times during the growth of the SiGe buffer layer prior to QW growth in these samples (B: 8 min, C: 24 min, and D: 29 min). In the case of sample D, the growth of the SiGe barriers was conducted using the isotopically purified precursor ^28SiH_4 instead of ^natSi_2H_6. The growth rate was higher for this sample due to the higher GeH_4 supply required to optimize growth with the ^28SiH_4 precursor. The obtained APT mass spectra are compared in Fig. 3(b,c), showing the spectra of doubly charged Ge ions (Fig. 3(b)) and doubly charged Si ions (Fig. 3(c)). Each spectrum contains 10 million atoms from the selected region, which includes most of the top barrier, the full QW and its interfaces, and a part of the bottom barrier. Note that this includes the QW interfaces and the local fluctuations in the isotopic purity observed near these interfaces, as shown in Fig. 4. The mass spectrum of sample A shows peaks associated with all five Ge isotopes at intensities close to the natural abundance of each isotope, as most of the signal originates from the barriers grown with ^natGeH_4 (Fig. 3(b)).
However, in samples B, C, and D, the APT spectra clearly show enrichment in the ^70Ge isotope, as the peaks related to the other isotopes are significantly diminished. Interestingly, the level of this contamination from other isotopes is intimately related to the growth protocol. Indeed, the earlier the transition from ^natGeH_4 to ^70GeH_4 occurs relative to the moment of QW growth, the lower the level of Ge isotope cross-contamination. This indicates that the detection of ^72Ge^++, ^73Ge^++, ^74Ge^++, and ^76Ge^++ peaks is a manifestation of the reservoir effect, meaning that the ^natGeH_4 used to grow the much thicker Ge-VS and SiGe-VS still resides in the growth reactor for an extended period of time. This leads to the undesired incorporation of the nuclear spin-full ^73Ge isotope into the growing QW structure. Herein, it is shown that an early transition to ^70GeH_4 can eliminate this contamination to a great extent. Ideally, the growth of the entire Ge-VS/SiGe-VS/BR1/Ge/BR2 stack should be done using ^70GeH_4, but the process can be costly. Similarly, Fig. 3(c) shows that the use of ^28SiH_4 to grow the SiGe barriers leads to a significant reduction, more than 30-fold, of the amount of the ^29Si isotope in the heterostructure. Since the hole wavefunction in the Ge QW is expected to leak into the SiGe barriers, it is also important to suppress the hyperfine interactions that may result from the presence of the ^29Si isotope. The local isotopic purity and 3D distribution of isotopes can be obtained from APT. However, since the peaks of heavier isotopes are embedded in the tails of the lighter isotopes (Fig. 3(b)), it is important to carefully analyze and model the mass spectra to separate the tails from the peaks and accurately quantify the isotopic content in the heterostructures. Herein, SIMS analyses were carried out to validate the APT isotope mapping method. Since all non-^70Ge isotopes originate from a natural Ge source, one can use the content measured for each isotope to estimate the ^70Ge purity by projecting the overall contamination based on the natural distribution of isotopes. As shown in Fig. 4(a), the SIMS estimates derived from the ^72Ge, ^74Ge, and ^76Ge signals coincide almost perfectly with each other and with the ^70Ge purity estimate obtained by considering the signal from all Ge isotopes. For APT, however, a difference was observed (data not shown) when estimating based on doubly charged ^72Ge, ^74Ge, or all of the isotopes. This discrepancy is caused by the aforementioned overlap of isotope peaks in the mass spectra (Fig. 3(b)). To address this issue, a Monte Carlo approach is implemented in which the tails are fitted locally around the peak region, and the peak and tail are decomposed hundreds or thousands of times to find the average peak content and the error introduced by the decomposition. The resulting estimate using the tail-corrected data for doubly charged ^72Ge, ^74Ge, and ^76Ge matches the SIMS data, as shown in Fig. 4(a) (solid line). Using the same Monte Carlo approach, we can quantify the ^70Ge purity in all samples. The result is shown in Fig. 4(b), highlighting once more the differences in isotopic purity near the QW between the samples, caused by the difference in the time elapsed between the onset of ^70GeH_4 growth and the QW growth.
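The peak/tail decomposition can be sketched as follows. This is a minimal illustration of the idea on synthetic counts, assuming an exponential thermal tail and a Gaussian peak shape; it is not the fitting code used for the actual APT data, and the mass window and parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic counts in a narrow mass window: an exponentially decaying tail from a
# lighter isotope plus a small Gaussian peak from the heavier isotope of interest.
mass = np.linspace(36.0, 37.0, 200)
tail_true = 5000.0 * np.exp(-4.0 * (mass - 36.0))
peak_true = 300.0 * np.exp(-0.5 * ((mass - 36.5) / 0.02) ** 2)
counts = rng.poisson(tail_true + peak_true)

peak_win = (mass > 36.4) & (mass < 36.6)   # window containing the peak
side_win = ~peak_win                       # tail-only regions on either side


def peak_content(c):
    """Fit a log-linear tail on the side windows, extrapolate under the peak, subtract."""
    slope, intercept = np.polyfit(mass[side_win], np.log(np.clip(c[side_win], 1, None)), 1)
    tail_est = np.exp(slope * mass + intercept)
    return float(np.sum(c[peak_win] - tail_est[peak_win]))


# Repeat the decomposition many times on resampled spectra (Poisson bootstrap)
# to obtain the average peak content and the error introduced by the decomposition.
draws = [peak_content(rng.poisson(counts)) for _ in range(1000)]
print(f"peak content: {np.mean(draws):.0f} +/- {np.std(draws):.0f} counts "
      f"(true value ~ {peak_true[peak_win].sum():.0f})")
```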
Furthermore, both the SIMS and APT data consistently show that the 90 s growth interruption at the QW interfaces, introduced to promote sharper interfaces, leads to an accumulation of ^natGe at the interface. For the growth of sample D, the top barrier was grown without interruption, thus suppressing the isotopic cross-contamination at the interface. Maintaining the ^70Ge purity is important to achieve a nuclear spin-depleted interface and BR1. A more accurate evaluation of the nuclear spin background is obtained from the APT analyses displayed in Fig. 4(c). The figure outlines the total concentration profiles of the nuclear spin-full isotopes ^73Ge and ^29Si across the investigated heterostructures. It is noticeable that in ^natSi^natGe/^70Ge/^natSi^natGe (sample A) the nuclear spin concentration drops from 6 at.% in the SiGe barriers down to 0.1 at.% in the QW. This background is further reduced to 0.02 at.% in the QWs of samples B and C, and to below 0.01 at.% in sample D, which consists of ^28Si^70Ge/^70Ge/^28Si^70Ge. Besides providing the isotopic composition profiles, APT also allows extracting the atomic-level spatial distribution of the individual nuclear spin-full species ^73Ge and ^29Si, as displayed in Fig. 4(d). The figure shows the depth evolution of the average distance between neighboring nuclear spins across the investigated heterostructures. To obtain these profiles, a model of the SiGe lattice was generated from the APT maps <cit.>, onto which the distribution of each isotope was imprinted, thus allowing the distance between nuclear spins to be calculated lattice plane by lattice plane. The uncertainty in these calculations was assessed by sampling 10 different models. The obtained result demonstrates that the average distance between nuclear spins is the largest in the QW for all samples, but it remains sensitive to the growth conditions. For instance, in the ^natSi^natGe barriers (A) the obtained average distance is 0.3-0.4 nm, whereas it increases by one order of magnitude to 3-4 nm in the isotopically pure ^28Si^70Ge/^70Ge/^28Si^70Ge heterostructure (D). § CONCLUSION In summary, this work demonstrates the epitaxial growth of nuclear spin-depleted, isotopically enriched ^70Ge QWs. The growth was achieved on Si wafers using the enriched precursors ^70GeH_4 and ^28SiH_4 in a reduced-pressure CVD system. The crystalline quality of the grown heterostructures was confirmed by XRD and electron microscopy studies. The critical point of the grown QWs exhibits a discrete excitonic lineshape at 2.233 eV, indicative of quantum confinement. The isotopic purity and the distribution of the nuclear spin background were investigated using APT. In this regard, a Monte Carlo approach was introduced to resolve the discrepancies in the APT analyses caused by the overlap of isotope peaks in the recorded mass spectra. These analyses demonstrate that the isotopic content is very sensitive to the growth conditions, including any growth interruption. The latter was found to induce an accumulation of natural Ge isotopes at the growth interface, leading to a lower ^70Ge content. To evaluate the distribution of the residual nuclear spin background, a lattice model was constructed to map the average distance between the two nuclear spin-full isotopes ^73Ge and ^29Si. These studies showed that the distance between nuclear spins reaches 3-4 nm in ^70Ge/^28Si^70Ge, which is an order of magnitude larger than in natural Ge/SiGe heterostructures.
Additionally, the lowest concentration of ^73Ge and ^29Si contaminants in the heterostructure was established at 0.01% in both the QW and the barriers of the ^70Ge/^28Si^70Ge heterostructure. These insights constitute a valuable input to improve the design and theoretical modeling of spin qubits by providing quantitative, atomic-level details on the nuclear spin distribution. METHODS. X-ray diffraction (XRD) measurements were performed using a Bruker Discover D8. A three-bounce, two-crystal Ge(220) analyzer was placed in front of the XRD detector during the (004) and (224) reciprocal space map (RSM) analyses. The microstructure of the grown materials was investigated by transmission electron microscopy (TEM). TEM specimens were prepared in a Thermo Fisher Helios Nanolab 660 dual-beam scanning electron microscope using a gallium focused ion beam (FIB) at 30, 16, and 5 kV. Electron-beam-induced carbon and platinum were locally deposited on the sample to protect the imaged region from being damaged by the ion-beam milling during the thinning of the TEM lamella. TEM and scanning TEM (STEM) analyses were carried out on a Thermo Scientific Talos F200X S/TEM system with an acceleration voltage of 200 kV. Insights into the quality of the interfaces, the atomic-level composition, and the isotopic purity were obtained using atom probe tomography (APT). APT specimens were prepared in a FEI Helios Nanolab 660 dual-beam scanning electron microscope using a gallium focused ion beam (FIB) at 30, 16, and 5 kV. A 120-150 nm-thick chromium capping layer was deposited on the samples before FIB irradiation to minimize the implantation of gallium ions into the imaged region. APT studies were performed in a LEAP 5000XS tool. The LEAP 5000XS utilizes a picosecond laser to generate pulses at a wavelength of 355 nm. For the analysis, all samples were cooled to a temperature of 25 K. The experimental data were collected at laser pulse energies of 3-6 pJ. Additional insights into the chemical and isotopic compositions were also obtained using secondary ion mass spectrometry (SIMS). Raman scattering analyses were performed at room temperature using a 633 nm excitation laser. Additionally, the uniformity of the growth thickness as well as the optical signature of quantum confinement were investigated using spectroscopic ellipsometry (SE). SE measurements were carried out at room temperature using a variable-angle spectroscopic RC2-XI ellipsometer manufactured by J. A. Woollam Co. The variable-angle spectroscopic ellipsometer system covers the 0.5-6 eV range. All heterostructures were measured at angles of incidence between 70° and 80° with a 1° step. A noticeable increase in the sensitivity of the SE parameters (Ψ and Δ) was observed around 76-77°, which is very close to the Brewster angle for Si and Ge. Thus, during the optical modeling, special care was taken in the modeling near this angle. ACKNOWLEDGEMENTS. The authors thank J. Bouchard for the technical support with the CVD system. O.M. acknowledges support from NSERC Canada (Discovery Grants, Alliance International Quantum, and CQS2Q Consortium), Canada Research Chairs, Canada Foundation for Innovation, Mitacs, PRIMA Québec, and Defense Canada (Innovation for Defense Excellence and Security, IDEaS), the European Union's Horizon Europe research and innovation programme under grant agreement No 101070700 (MIRAQLS), and the US Army Research Office Grant No. W911NF-22-1-0277.
http://arxiv.org/abs/2306.11652v2
20230620162121
Sparse Bayesian Estimation of Parameters in Linear-Gaussian State-Space Models
[ "Benjamin Cox", "Victor Elvira" ]
stat.CO
[ "stat.CO", "stat.ME" ]
Sparse Bayesian Estimation of Parameters in Linear-Gaussian State-Space Models Benjamin Cox and Victor Elvira July 31, 2023 ======================================================================================================================== B.C. acknowledges support from the Natural Environment Research Council of the UK through a SENSE CDT studentship (NE/T00939X/1). The work of V. E. is supported by the Agence Nationale de la Recherche of France under PISCES (ANR-17-CE40-0031-01), the Leverhulme Research Fellowship (RF-2021-593), and by ARL/ARO under grants W911NF-20-1-0126 and W911NF-22-1-0235. State-space models (SSMs) are a powerful statistical tool for modelling time-varying systems via a latent state. In these models, the latent state is never directly observed. Instead, a sequence of data points related to the state are obtained. The linear-Gaussian state-space model is widely used, since it allows for exact inference when all model parameters are known; however, this is rarely the case. The estimation of these parameters is a very challenging but essential task for performing inference and prediction. In the linear-Gaussian model, the state dynamics are described via a state transition matrix. This model parameter is known to be hard to estimate, since it encodes the relationships between the state elements, which are never observed. In many applications, this transition matrix is sparse, since not all state components directly affect all other state components. However, most parameter estimation methods do not exploit this feature. In this work we propose SpaRJ, a fully probabilistic Bayesian approach that obtains sparse samples from the posterior distribution of the transition matrix. Our method explores sparsity by traversing a set of models that exhibit differing sparsity patterns in the transition matrix. Moreover, we also design new effective rules to explore transition matrices within the same level of sparsity. This novel methodology has strong theoretical guarantees, and unveils the latent structure of the data generating process, thereby enhancing interpretability. The performance of SpaRJ is showcased in an example with dimension 144 in the parameter space, and in a numerical example with real data. Bayesian methods, graphical inference, Kalman filtering, parameter estimation, sparsity detection, state-space modelling, Markov chain Monte Carlo. § INTRODUCTION State-space models (SSMs) are a flexible statistical framework for the probabilistic description of time-varying systems via coupled series of hidden states and associated observations. These models are used to decode motor neuron kinematics from hand movements <cit.>, perform epidemiological forecasting for policy makers <cit.>, and to determine the trajectory of lunar spacecraft from noisy telemetry data <cit.>, among many other applications. In general, SSMs are used in many important problems within, but not limited to, signal processing, statistics, and econometrics <cit.>. In some cases, the true SSM is perfectly known, and the interest is in inferring the sequence of underlying hidden states. In the Bayesian paradigm, estimating the sequence of hidden states is achieved by obtaining a sequence of posterior distributions of the hidden state, known as filtering distributions <cit.>. The linear-Gaussian SSM (LGSSM) is the case where the state and observation models are linear with Gaussian noises.
In this case, the sequence of exact filtering distributions is obtained via the Kalman filtering equations <cit.>. In the case of non-linear dynamics, the filtering distributions must be approximated, for instance via the extended Kalman filter <cit.> or the unscented Kalman filter <cit.>. In even more generic models, such as those with non-Gaussian noise, particle filters (PFs) are often used, which approximate the state posterior distributions via Monte Carlo samples <cit.>. All of these filtering methods assume that the model parameters are known. However, model parameters are often unknown, and must therefore be estimated. Parameter estimation is a much more difficult task than filtering, and is also more computationally expensive, since most parameter estimation algorithms require a large number of evaluations of the filtering equations to yield acceptable parameter estimates. There exist several generic methods for this task, with techniques based on Markov chain Monte Carlo (MCMC) <cit.> and expectation maximisation (EM) <cit.> arguably being the most commonly used parameter estimation techniques for state-space models. When estimating model parameters, it is crucial that the estimates reflect the structure of the underlying system. For instance, in real-world systems, the underlying dynamics are often composed of simple units, with each unit interacting with only a subset of the overall system, but when observed together these units exhibit complex behaviour <cit.>. This structure can be recovered by promoting sparsity in the parameter estimates. In addition to better representing the underlying system, sparse parameter estimates have several other advantages. By promoting sparsity, uninformative terms are removed from the inference, thereby reducing the dimension of the parameter space and improving model interpretability. Furthermore, parameter sparsity allows us to infer the connectivity of the state space <cit.>, which is useful in several applications, such as biology <cit.>, social networks <cit.>, and neuroscience <cit.>. In state-space models the sparsity structure can be represented as a directed graph, with the nodes signifying the state variables and the edges signifying dependencies between variables. In the LGSSM specifically, this graph can be represented by an adjacency matrix with sparsity identical to that of the transition matrix. This interpretation of a sparse transition matrix as a weighted directed graph was recently proposed in the GraphEM algorithm <cit.>, in which a sparse point-wise maximum-a-posteriori estimator for the transition matrix of the LGSSM is obtained via an EM algorithm. However, this point estimator does not quantify uncertainty, therefore disallowing a probabilistic evaluation of sparsity. The capability to quantify and propagate the uncertainty of an estimate is highly desired in modern applications, as it allows for more informed decision-making processes, as well as providing a better understanding of the underlying model dynamics. In this work, we propose the sparse reversible jump (SpaRJ) algorithm, a fully Bayesian method to estimate the state transition matrix in LGSSMs. This matrix is probabilistically approximated by a stochastic measure constructed from samples obtained from the posterior of this model parameter.
The method (a) promotes sparsity in the transition matrix, (b) quantifies the uncertainty, including the sparsity uncertainty of each element of the transition matrix, and (c) provides a probabilistic interpretation of (order one) Granger causality between the hidden state dimensions, which is interpreted as a probabilistic network describing how information flows between consecutive time steps. SpaRJ exploits desirable structural properties of the SSM, which presents computational advantages with respect to other well-established MCMC methods such as particle MCMC <cit.>, i.e., a decrease in computational cost for a given performance or an improvement in performance for a given computational cost. Our method is built on a novel interpretation of sparsity in the transition matrix as a model constraint. SpaRJ belongs to the family of reversible jump Markov chain Monte Carlo (RJMCMC) <cit.>, a framework for the simultaneous sampling of both model and parameter spaces. We note that RJMCMC methods are not a single algorithm, but a wide family of methods (as is the case for MCMC methods). Thus, specific algorithms are required to make significant design choices so that the RJMCMC approach can be applied in different scenarios <cit.>. In the case of SpaRJ, we design both specific transition kernels and parameter rejuvenation schemes, so that the algorithm can efficiently explore both the parameter space and the sparsity of the parameter in a hierarchical fashion. As RJMCMC is itself a modified Metropolis-Hastings method, the solid theoretical guarantees of both precursors are inherited by our proposed algorithm, such as the asymptotic correctness of the distribution of both model and parameter <cit.>. Our method outperforms the current state-of-the-art methods in two numerical experiments. We test SpaRJ in a synthetic example with dimension up to 144 in the parameter space. In this example, a total of 2^144 models (i.e., distinct sparsity patterns) are to be explored. Then, we run a numerical example with real data consisting of time series of daily temperatures. The novel probabilistic graphical interpretation allows the recovery of a probabilistic (Granger) causal graph, showcasing the large impact that this approach can have in relevant applications in science and engineering. The model transition kernels used by SpaRJ are designed to allow the exploitation of sparse structures that are common in many applications, which reduces the computational complexity of the resulting (sparse) models once the transition matrix has been estimated (see for instance <cit.>). SpaRJ retains strong theoretical guarantees, inherited from the underlying Metropolis-Hastings method, thanks to careful design of the transition kernels, e.g., preserving the convergence properties of the algorithm. Extending our methodology to parameters other than the transition matrix is readily possible. In particular, we make explicit both a model and a parameter proposal for the extension to the state covariance parameter 𝐐. Contributions. The main contributions of this paper[A limited version of this work was presented by the authors in the conference paper <cit.>, which contains a simpler version of the method with no theoretical discussion, methodological insights, or exhaustive numerical validation.] are summarised as follows: * The proposed SpaRJ algorithm is the first method to estimate probabilistically the state transition matrix 𝐀 in LGSSMs (i.e., treating 𝐀 as a random variable rather than a fixed unknown) under sparsity constraints.
This is achieved by taking 𝐀 to be a random variable and sampling the posterior distribution p(𝐀|𝐲_1:T) under a unique interpretation of sparsity as a model. This new capability allows for powerful inference to be performed with enhanced interpretability in this relevant model, e.g., the construction of a probabilistic Granger causal network mapping the state space, which was not possible before. * The proposed method is the first method to quantify the uncertainty associated with the occurrence of sparsity in SSMs, e.g., the probability of sparsity occurring in a given element of the transition matrix. This capability is unique among parameter estimation techniques in this field. * Our method proposes an interpretation of sparsity as a model, allowing the use of RJMCMC for sparsity detection in state-space modelling. This is the first RJMCMC method to have been applied to sparsity recovery in state-space models, probably because RJMCMC methods require careful design of several parts of the algorithm, especially for high-dimensional parameter spaces, as is the case for the matrix-valued parameters of the LGSSM. Structure. In Section <ref> we present the components of the problem, some of the underlying algorithms, and the notation we will use. Section <ref> presents the method, with further elucidation in Section <ref>. We present several challenging numerical experiments in Section <ref>, showcasing the performance of our method and comparing it to a recent method with similar goals. We provide some concluding remarks in Section <ref>. § BACKGROUND §.§ State-space models Let us consider the additive linear-Gaussian state-space model (LGSSM), given by 𝐱_t = 𝐀𝐱_t-1 + 𝐪_t, 𝐲_t = 𝐇𝐱_t + 𝐫_t, for t = 1, …, T, where 𝐱_t ∈ℝ^d_x is the hidden state with associated observation 𝐲_t ∈ℝ^d_y at time t, 𝐀∈ℝ^d_x × d_x is the state transition matrix, 𝐇∈ℝ^d_y × d_x is the observation matrix, 𝐪_t ∼𝒩(0, 𝐐) is the state noise, and 𝐫_t ∼𝒩(0, 𝐑) is the observation noise. The state prior is 𝐱_0 ∼𝒩(𝐱̅_0, 𝐏_0), with 𝐱̅_0 and 𝐏_0 known. We assume that the model parameters remain fixed. A common task in state-space modelling is the estimation of the series of filtering distributions p(𝐱_t|𝐲_1:t) for t ∈{1, …, T}. In the case of the LGSSM, these distributions are obtained exactly via the Kalman filter equations <cit.>. The linear-Gaussian assumption is not overly restrictive, as many systems can be approximated via linearisation, and for continuous problems Gaussian noises are very common. Note that the posterior distribution of any given parameter θ can be factorised as p(θ|𝐲_1:T) ∝ p(𝐲_1:T|θ) p(θ), where θ is the parameter of interest, p(θ) is the prior ascribed to the parameter, and p(𝐲_1:T|θ) is obtained from the Kalman filter via the recursion p(𝐲_1:T|θ) = ∏_t=1^T p(𝐲_t|𝐲_1:t-1, θ), where p(𝐲_1|𝐲_1:0, θ) := p(𝐲_1|θ) <cit.>. This factorisation gives the target distribution for estimating parameters in an LGSSM. In this work we focus on probabilistically estimating 𝐀, and therefore we are interested in the posterior p(𝐀|𝐲_1:T) ∝ p(𝐲_1:T|𝐀) p(𝐀). In LGSSMs, 𝐇 and 𝐑 are frequently assumed to be known as parameters of the observation instrument, but 𝐀 and 𝐐 are often unknown. For the purposes of this work, we assume that all parameters except 𝐀 are known, or are suitably estimated, although our method can be extended to all parameters of the linear-Gaussian state-space model, as discussed in Section <ref>.
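As a concrete reference for the recursion above, the sketch below (a minimal NumPy implementation written for illustration, not the authors' code) evaluates log p(𝐲_1:T|𝐀) for a given transition matrix by running the Kalman filter and accumulating the Gaussian predictive log-densities; the example at the end simulates data from a small LGSSM with a sparse 𝐀.

```python
import numpy as np

def kalman_loglik(A, H, Q, R, y, x0, P0):
    """Log-likelihood log p(y_{1:T} | A) of an LGSSM via the Kalman filter.

    y has shape (T, d_y); the model is x_t = A x_{t-1} + q_t, y_t = H x_t + r_t,
    with q_t ~ N(0, Q), r_t ~ N(0, R), and state prior N(x0, P0).
    """
    x, P, ll = x0.copy(), P0.copy(), 0.0
    d_y = y.shape[1]
    for y_t in y:
        # Predict step.
        x = A @ x
        P = A @ P @ A.T + Q
        # Predictive density of y_t is N(H x, S); accumulate its log value.
        S = H @ P @ H.T + R
        v = y_t - H @ x                      # innovation
        L = np.linalg.cholesky(S)
        alpha = np.linalg.solve(L, v)
        ll += -0.5 * (alpha @ alpha) - np.log(np.diag(L)).sum() - 0.5 * d_y * np.log(2 * np.pi)
        # Update step.
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ v
        P = (np.eye(len(x)) - K @ H) @ P
    return ll

# Simulate a 2-dimensional LGSSM with a sparse transition matrix and score it.
rng = np.random.default_rng(1)
A = np.array([[0.9, 0.0], [0.3, 0.8]])
H, Q, R = np.eye(2), 0.1 * np.eye(2), 0.1 * np.eye(2)
x, ys = np.zeros(2), []
for _ in range(200):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    ys.append(H @ x + rng.multivariate_normal(np.zeros(2), R))
ys = np.array(ys)
print(kalman_loglik(A, H, Q, R, ys, np.zeros(2), np.eye(2)))
```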
§.§ Parameter estimation in SSMs The estimation of the parameters of a state-space model is, in general, a difficult and computationally intensive task <cit.>. This difficulty follows from the state dynamics not being directly observed and from the stochasticity of the observations. In this work we focus on Bayesian techniques, as our method is Bayesian, which allows for easier comparison. Frequentist methods for parameter estimation in SSMs are common, however, with some relevant references being <cit.>. There are two main approaches to estimating and summarising parameters in state-space models, which we can broadly classify as point estimation methods and probabilistic methods. Point estimation methods. The goal of a point estimation method is to find a single parameter value that, in some sense, best summarises the parameter given the data. An archetypal point estimate is the maximum likelihood estimator (MLE) <cit.>. In a state-space model, when estimating a parameter denoted θ, the MLE, denoted θ_MLE, is given by θ_MLE = argmax_θ p(𝐲_1:T|θ). The MLE is fundamentally a frequentist estimator, and hence no prior distribution is used. The Bayesian equivalent to the maximum likelihood estimator is the maximum a posteriori (MAP) estimator, denoted θ_MAP, and given by θ_MAP = argmax_θ p(θ|𝐲_1:T) = argmax_θ ( p(𝐲_1:T|θ) p(θ) ), from which we see that the MLE coincides with the MAP estimator if p(θ) ∝ 1. A common method for point estimation in LGSSMs, and in general, is the expectation-maximisation (EM) algorithm <cit.>, as explicit formulae exist for the conditional MLE of all parameters in the case of the LGSSM, and thus the model parameters can be estimated iteratively. The EM algorithm allows for all model parameters to be estimated simultaneously, but converges much more slowly as the number of parameters to estimate increases <cit.>. Furthermore, this method does not allow for quantification of the uncertainty in the resultant estimates. Probabilistic methods. Distributional methods estimate the target probability density function (pdf) of the parameter given the data, often through the generation of Monte Carlo samples. In the case of state-space models, for a parameter θ the target distribution is p(θ|𝐲_1:T), and the set of Monte Carlo samples is {θ_i}_i=1^n, with θ_i ∼ p(θ|𝐲_1:T). A common class of methods used to obtain these samples is Markov chain Monte Carlo (MCMC), with methods such as particle MCMC <cit.> seeing wide use in SSMs. MCMC is a class of sampling methods that construct, and subsequently sample from, a Markov chain that has the target as its equilibrium distribution <cit.>. The elements of this chain are then taken to be Monte Carlo samples from the target distribution, although typically a number of the initial samples are discarded to ensure that the samples used are obtained after the chain has converged <cit.>. Note that there exist probabilistic methods, such as the Laplace approximation <cit.> and variational inference <cit.>, that provide analytical approximations and do not directly give samples. These methods are seldom used in state-space modelling. Distributional methods give more flexibility than point estimates, as they capture distributional behaviour and inherently quantify uncertainty, while also allowing point estimates to be computed from the samples, such as the aforementioned MAP estimator being the maximising argument of the posterior density.
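For illustration, a MAP point estimate of 𝐀 of the kind discussed above can be obtained by numerically maximising the Kalman log-likelihood plus a log-prior. The sketch below assumes the kalman_loglik helper and the simulated data (H, Q, R, ys) from the previous snippet; a Gaussian (ridge) prior is used here purely to keep the objective smooth for L-BFGS, whereas the Laplace prior favoured later in the paper would call for a non-smooth or proximal solver. This is a generic point-estimation illustration, not the EM or SpaRJ procedure.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(a_flat, H, Q, R, y, x0, P0, lam=1.0):
    """-[log p(y_{1:T}|A) + log p(A)] with an elementwise Gaussian prior (up to a constant)."""
    d_x = x0.shape[0]
    A = a_flat.reshape(d_x, d_x)
    return -(kalman_loglik(A, H, Q, R, y, x0, P0) - lam * np.sum(A ** 2))

# MAP estimate by quasi-Newton optimisation, started from a stable diagonal guess.
d_x = 2
A_init = 0.5 * np.eye(d_x)
res = minimize(neg_log_posterior, A_init.ravel(),
               args=(H, Q, R, ys, np.zeros(d_x), np.eye(d_x)),
               method="L-BFGS-B")
A_map = res.x.reshape(d_x, d_x)
print(np.round(A_map, 2))   # close to the sparse A used to simulate the data
```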
§.§ Sparse modelling When fitting and designing statistical models, the presence of sparsity in the parameters is often desirable, as it reduces the number of relevant variables, thus easing interpretation and simplifying inference. Furthermore, real systems are often made up of several interacting dense blocks that, when taken as a whole, exhibit complex dynamics <cit.>. Sparse estimation methods allow for this structure to be recovered, resulting in estimates that can reflect the structure of the underlying system. Sparsity is ubiquitous within signal processing, where the decomposition of a signal into a sparse combination of components is very common <cit.>; such decompositions can be parameterised via model parameters. Furthermore, within signal processing there exist a number of sparse Bayesian methods, such as <cit.>, although these do not operate within the paradigm of state-space modelling. There are several approaches to estimating model parameters such that sparsity may be present. One approach is to construct many models with unique combinations of sparse and dense elements, fit all of these models, and then select the best model according to some criterion (see <cit.> for examples). This approach is conceptually sound, but computationally expensive for even a small number of parameters p, as 2^p models must be fitted in order to obtain likelihood estimates, or other goodness-of-fit metrics. Another approach is to estimate the model parameters under a sparsity-inducing penalty, the classic example of such a penalty being the LASSO <cit.>. This approach, commonly called regularisation, allows for only one model to be fitted rather than many, but increases the computational complexity of fitting the model. This single regularised estimate is, in most cases, more expensive to compute than a single non-regularised estimate of the previous approach, but this cost is typically far less than the cumulative cost of all the estimates required by that approach. Regularisation is a common way to obtain sparse estimates, and can be extended to Bayesian modelling and estimation in the form of sparsity-inducing priors <cit.>. In LGSSMs, a sparse estimate of the transition matrix 𝐀 can be interpreted as the adjacency matrix of a weighted directed graph G, with the nodes being the state elements and the edges weighted by the corresponding elements of 𝐀 <cit.>. We illustrate this with an example in Fig. <ref>. We note that there exist a number of graph estimation methods that can be applied to time series data, such as <cit.>, although these methods do not utilise the structure of the state-space model. Furthermore, these methods often employ an acyclicity constraint, which prevents the results from exhibiting cycles; cycles are, however, a common feature of real-world dynamical systems, for example those resulting from discretised systems of ODEs. The graph G thus encodes the linear, between-step relationships of the state elements, simplifying model interpretation. Under this graphical interpretation, A_ij being non-zero implies that knowledge of the jth element of the state improves the prediction of the ith element. The presence of an edge from node j to node i therefore implies a Granger-causal relationship between the state elements, as knowledge of the past value of the jth element at time t improves the prediction of the ith element at time t+1, which is precisely the definition of Granger causality <cit.>.
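This graphical reading is straightforward to make concrete. The short sketch below (using an arbitrary 3×3 matrix chosen for illustration) lists the directed, weighted edges j → i implied by the non-zero entries A_ij of a sparse transition matrix.

```python
import numpy as np

# Example sparse transition matrix: state 0 drives state 1, state 2 is autonomous.
A = np.array([[0.9, 0.0, 0.0],
              [0.5, 0.7, 0.0],
              [0.0, 0.0, 0.8]])

# Edge j -> i whenever A[i, j] != 0: the past of state j helps predict state i.
edges = [(j, i, A[i, j])
         for i in range(A.shape[0])
         for j in range(A.shape[1])
         if A[i, j] != 0.0]

for j, i, w in edges:
    print(f"x_{j} -> x_{i}  (weight {w:+.2f})")
```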
§.§ Reversible jump Markov chain Monte Carlo Reversible jump Markov chain Monte Carlo (RJMCMC) was proposed as a method for Bayesian model selection <cit.>, and has since seen use in fields such as ecology <cit.>, Gaussian mixture modelling <cit.>, and hidden Markov modelling <cit.>. RJMCMC has also been applied within the realm of signal processing, with some relevant references being <cit.>. However, RJMCMC has not been applied to the estimation of the sparsity of model parameters within signal processing. RJMCMC is an extension of the Metropolis-Hastings algorithm that allows for the sampling of a discrete model space, and thus the inclusion of many models within a single sampling chain. RJMCMC is a hierarchical sampler, with an upper layer sampling the models and a lower layer sampling the posterior distribution of the parameters within the model. This hierarchy allows the use of standard MCMC methods for the lower layer, with the difficulty lying in the design of the upper layer <cit.>. RJMCMC traverses the model space via transition kernels between pairs of models, with the jumps occurring probabilistically. This lends the model space an interpretation as a directed graph, with nodes representing the models and edges representing the jumps between models. Let Θ^(i) be the parameter space associated with model M^(i), and θ^(i) ∈Θ^(i) an associated realisation of the model parameters. Denote by π_i,j the probability of jumping from model M^(i) to M^(j). Note that π_i,j is zero if and only if π_j,i is zero. Let M^(1) be the current model, and M^(2) be a candidate model. In order to construct a Markov kernel for the transition between models, a symmetry constraint is imposed, i.e., if it is possible to jump from M^(1) to M^(2), it must also be possible to jump from M^(2) to M^(1) <cit.>. In general, however, the dimensions of the parameter spaces are not equal, hence it is not possible to construct an invertible mapping between them, violating the required symmetry. Reversible jump MCMC addresses this by introducing a dimension matching condition <cit.>; the spaces Θ^(1) and Θ^(2) are augmented with simulated draws from selected distributions such that (θ^(2), u_2) = T_1,2(θ^(1), u_1), u_1 ∼ g_1,2(·), u_2 ∼ g_2,1(·), where T_1,2 is a bijection, and g_i,j(·) are known distributions. The parameter mappings and stochastic draws change the equilibrium distribution of the chain, which means that sampling would not be asymptotically correct. This is counteracted by modifying the acceptance ratio; in the case of jumping from model M^(1) to model M^(2), the acceptance ratio is given by α^(1,2) = |∂ T_1,2(θ^(1), u_1)/∂(θ^(1), u_1)| g_2,1(u_2)/g_1,2(u_1) π_2,1/π_1,2 p_2(θ^(2))/p_1(θ^(1)), where p_i(θ^(i)) is the density associated with model M^(i), evaluated at θ^(i). The proposal (M^(2), θ^(2)) is accepted with probability min(α^(1,2), 1), and is otherwise rejected. On rejection, the previous value of the chain is kept, (M^(1), θ^(1)). In this way, given data 𝐲, RJMCMC samples the joint posterior p(θ, M|𝐲) ∝ p(M|𝐲) p(θ|M, 𝐲). However, in the case where only θ is of interest, following standard Monte Carlo rules, we can discard the samples of M to obtain p(θ|𝐲) <cit.>. Reversible jump MCMC incorporates many models into a single chain, so it is simple to compare or average models. However, the parameter mappings and model jump probabilities must be well designed. Poor selection of these parameters will typically lead to poor mixing in the model space <cit.>. Our method explores only a single overall model, which simplifies the mappings.
We impose a pairwise structure on the model space, simplifying the jumps significantly. §.§ Model definitions and notation In order to use RJMCMC to explore sparsity in the transition matrix of a linear-Gaussian state-space model, we must first construct a set of candidate sub-models of the LGSSM that exhibit various sparsity levels. To this end we introduce the notation in Table <ref>. Denote by M_n the model selected by the algorithm at iteration n. This model is uniquely defined by the associated set of indices of the dense elements of 𝐀, which we denote ℳ_n. Note that there are thus 2^d_x^2 models in our model space, precluding the parallel evaluation of all models for even small d_x. It is therefore not possible to use methods such as Bayes factors or marginal likelihoods to compare models, as is standard in Bayesian model selection in state-space models <cit.>, since these require all models to be evaluated in order to be compared, which is computationally infeasible. The number of elements of ℳ_n, denoted by D_n = |ℳ_n|, is the number of dense elements at iteration n, and therefore the number of non-zero elements of 𝐀_n. Denote by S_n = d_x^2 - D_n the number of sparse elements at iteration n. If the true value of a parameter is known, then it is presented without subscript or superscript. We denote the complement of a set by a superscript c, and note that ℳ_n^c denotes the set of indices of the elements that are sparse in 𝐀_n at iteration n. Each model has an associated parameter space, which we denote by Θ_n. As the parameter space of a sparse element is {0}, we therefore have Θ_n = ∏_0<i,j≤ d_x 𝒮_(i,j),n, with 𝒮_(i,j),n = ℝ if (i,j) ∈ℳ_n and 𝒮_(i,j),n = {0} otherwise, where 𝒮_(i,j),n is the support of (𝐀_n)_ij. In the Bayesian paradigm, we can interpret the sparsity of model M_n as a prior constraint induced by p(𝐀|M_n), under which the elements indexed by ℳ_n^c always take the value 0, and the elements indexed by ℳ_n are distributed as p(𝐀). This is equivalent to fixing the value of the elements of 𝐀 indexed by ℳ_n^c to 0, as in Eq. (<ref>). We denote the k × k identity matrix by 𝐈_k, and the k-vector with all elements equal to 1 by 1_k. We denote by · any unspecified parameters of a function or distribution. For example, x ∼ g(·) means that the parameters of the distribution g are unspecified, typically because they are irrelevant to the discussion. We provide a table of our most used acronyms in Table <ref>. § THE SPARJ ALGORITHM We now present the SpaRJ algorithm, a novel RJMCMC method to obtain sparse samples from the posterior distribution p(𝐀|𝐲_1:T) of the transition matrix of an LGSSM. We present our method in Algorithm <ref> for estimating the transition matrix, although the algorithm can be adapted to estimate any unknown parameter of the LGSSM. Note that our method samples the joint posterior p(𝐀, M|𝐲_1:T) = p(M|𝐲_1:T) p(𝐀|M, 𝐲_1:T) hierarchically, by first sampling M^' from p(M|𝐲_1:T) and then sampling 𝐀^' from p(𝐀|M^', 𝐲_1:T) conditional on M^'. However, as we are only interested in the posterior of the transition matrix, p(𝐀|𝐲_1:T), we marginalise by discarding the samples of M when performing inference <cit.>. In order to apply our method, we must provide the values of all known parameters of the LGSSM (used when evaluating the Kalman filter), and initial values for the unknown quantities 𝐀 and M. We initialise the model sampling by setting M_0 to the fully dense model. The initial value 𝐀_0 can be selected in a number of ways, such as randomly or via optimisation <cit.>.
We obtain the initial log-likelihood, l_0, by running a Kalman filter with the chosen initial value 𝐀_0, giving l_0 = log(p(𝐲_1:T|𝐀_0)). We define a prior p(𝐀) on the transition matrix, with some examples given in Section <ref>. The prior distribution incorporates our prior knowledge as to the value of the transition matrix 𝐀, and can be used to promote sparsity. However, it is not required in order to recover sparse samples, and can be chosen to be uninformative or diffuse. The method iterates N times, each iteration yielding a single sample, outputting N samples, {𝐀_n}_n=1^N. While adapting the number of iterations is possible, e.g., by following <cit.>, we present the algorithm with fixed N for simplicity. Note that at each iteration we start in model M_n-1, with transition matrix 𝐀_n-1 and log-likelihood l_n-1. Each iteration is split into three steps: model proposal (Step 1), parameter proposal (Step 2), and accept-reject (Step 3). We note that the model is not fixed, and is sampled at each iteration, allowing for evidence-based recovery of sparsity. Step 1: Propose M^'. At iteration n, we retain the previous model M_n-1 with probability (w.p.) π_0, hence setting M^' = M_n-1. If a model jump occurs, we set the proposed model M^' to be sparser than M_n-1 w.p. π_-1, and denser otherwise. To create a sparser model, we select a number of dense elements to make sparse. To create a denser model, we select a number of sparse elements to make dense. The number of elements to change, k, is drawn from a truncated Poisson distribution (see Appendix <ref>) with rate parameter λ_j ∈[0,1), with this range required in order to bias the jump towards models close to the previous model. This distribution is chosen as it exhibits the required property of an easily scaled support, needed as the maximal jump distance changes with the current model. Note that this distribution does not form a prior over the model space, but is instead used to generate jump kernels, which are then used to explore the model space. The model space prior p(M) is discussed in Section <ref>, and is by default diffuse, i.e., p(M) = 2^-d_x^2 ∀ M. The truncated Poisson distribution is simple to sample, as it is a special case of the categorical distribution. The elements to change are then selected uniformly. The proposed model M^' is always strictly denser than, strictly sparser than, or identical to M_n-1, following the construction of model jumps in Section <ref>. Step 2: Propose 𝐀^'. If the proposed model M^' differs from the previous model M_n-1, then the parameters 𝐀_n-1 are mapped to 𝐀^' via eq. (<ref>), a modified identity mapping. This mapping is augmented with stochastic draws if the dimension of the parameter space increases, and has elements removed if the dimension decreases. The mapping has an identity Jacobian matrix, and is thus absent from the acceptance ratio. If the proposed model M^' is the same as the previous model M_n-1, then 𝐀^' is sampled from the conditional posterior p(𝐀|M^', 𝐲_1:T). To achieve this, we use a random walk Metropolis-Hastings (RWMH) sampler. The RWMH sampler requires a single run of the Kalman filter per iteration, which is the most computationally expensive component of the algorithm. This single run follows from the joint accept-reject decision in Step 3, which allows us to omit the separate accept-reject step of a standard RWMH sampler, since all proposals are assessed jointly in SpaRJ.
Note that any sampler can be used, even a non-MCMC method, with RWMH chosen for simplicity, computational speed, and to provide a baseline statistical performance. Step 3: Metropolis accept-reject. Once the model and parameter values have been proposed, a Metropolis-Hastings acceptance step is performed. We run a Kalman filter with 𝐀^' to calculate the log-likelihood of the proposal, l^' = log(p(𝐲_1:T|𝐀^')). Prior knowledge is included via a function of the prior probability densities, denoted Λ, which encodes both our prior knowledge of the parameter values and of the model (hence the sparsity). A wide range of prior distributions can be used, with our preference being the Laplace distribution, with the associated Λ given in eq. (<ref>), which is known to promote sparsity in Bayesian inference <cit.>. Note that the prior is not required to yield sparse samples, but is useful to combat the potential over-fitting resulting from the large number of parameters to fit. If we denote by p(𝐀) our prior on the transition matrix, then Λ(𝐀_n-1, 𝐀^') = log(p(𝐀^')) - log(p(𝐀_n-1)). In Section <ref>, we provide suggestions as to the choice of prior. When defining the model space in Section <ref>, we note that each model is uniquely determined by the sparsity structure it imposes, with this structure being present in all samples of 𝐀 generated from this model. We can therefore assess the model against our prior knowledge solely based on the sample structure, without a separate prior on the model space. An example of such a function is the L_0-norm, which penalises the number of non-zero elements, a property determined entirely by the model that can be assessed via the samples. The log-acceptance ratio of the proposed values 𝐀^' and M^' is given by log(a_r) = l^' - l_n-1 + Λ(𝐀_n-1, 𝐀^') + c, where c is given in Appendix <ref>. The model and parameter proposals are jointly accepted with probability min(a_r, 1), and are otherwise rejected. If the proposals are accepted, then we set M_n := M^', 𝐀_n := 𝐀^', and l_n := l^'. Otherwise, we set M_n := M_n-1, 𝐀_n := 𝐀_n-1, and l_n := l_n-1. § ALGORITHM DESIGN We now detail the three steps of Algorithm <ref> as presented in Section <ref>. This section is structured to follow the steps of the algorithm for clarity and reproducibility. §.§ Step 1: Model sampling In order to explore the potential sparsity of 𝐀 using RJMCMC, we design a model jumping scheme that exploits the structure inherent to the model space. §.§.§ Model jumping scheme (steps 1.1 and 1.2 in Alg. <ref>) At each iteration, the algorithm proposes to jump models with probability (w.p.) 1-π_0, as in Step 1 of Algorithm <ref>. If the algorithm proposes a model jump, then the proposed model M^' will be sparser than M_n-1 w.p. π_-1, and denser than M_n-1 otherwise. If no model jump is proposed, then M^' = M_n-1. There are thus three distinct outcomes of the model jumping step: retention of the previous model, proposing to jump to a sparser model, or proposing to jump to a denser model. Note that in some cases (e.g., for the fully dense or fully sparse model) it is not possible to jump in both directions, and hence if the jumping scheme proposes a model jump, the jump direction is deterministic. This changes the model jump probability, with the results detailed in Appendix <ref>. §.§.§ Model space adjacency (steps 1.2s and 1.2d in Alg. <ref>) Given the jump direction from Step 1.1, we denote by k the number of elements that are to be made sparse or dense.
We draw k ∼ TPoi(λ_j, 1, m_n) (see Appendix <ref>), where m_n is the maximum jump distance in the chosen direction, equal to S_n if jumping denser and D_n if jumping sparser. The rate λ_j should be chosen with λ_j ∈[0,1) in order to prefer jumps to closely related models. We find experimentally that λ_j = 0.1 gives good results, and note that λ_j = 0 is equivalent to a scheme in which the sparsity can change by one element only. Due to the small size of the search space for λ_j, a grid search is also possible. The resulting model jump probabilities are asymmetric, with the corresponding modification to the acceptance ratio given in Appendix <ref>. In order to provide a set of candidate models for M^', we impose an adjacency condition. We say that model M^(1) is densely k-adjacent to model M^(2) if both |ℳ^(1) ∖ℳ^(2)| = k and |ℳ^(2) ∖ℳ^(1)| = 0, and sparsely k-adjacent if both |ℳ^(2) ∖ℳ^(1)| = k and |ℳ^(1) ∖ℳ^(2)| = 0. In other words, if the model M^(1) differs in k elements from M^(2) in a given direction only, then it is k-adjacent in that direction; if the model M^(1) differs from M^(2) in both the sparse and dense directions, e.g., two elements are sparser and one element is denser, then it is not adjacent to M^(2). Note that if M^(1) is sparsely k-adjacent to M^(2), then M^(2) is densely k-adjacent to M^(1), satisfying the reversibility condition of RJMCMC. With this adjacency condition, for a given k and jump direction, the proposal M^' is uniformly selected from the models k-adjacent to M_n-1 in the given direction. Note that, given λ_j > 0, it is theoretically possible to reach any model in a maximum of two jumps: one to the maximally sparse model, and one jump denser to the desired model. It is possible to extend this proposal technique to jumping in both directions simultaneously, rather than requiring combinations of birth-death moves to achieve the same result. This is omitted for simplicity. The natural solution to this is evaluating each jump with an accept-reject step, which effectively recovers the current scheme. It is also possible to propose ℳ^' as a randomly selected list of indices of length D_n, effectively shuffling the sparse elements around. This could potentially allow for a more robust exploration of the posterior, but such moves would be very unlikely to be accepted. We therefore believe our proposal method to be a good compromise between simplicity and robustness, with good performance as evidenced by Section <ref>. §.§.§ Choice of parameters for model jumps The values of the hyper-parameters π_0 and π_-1 affect the acceptance rate of the proposed models and parameter values. It is known that an acceptance rate close to 0.234 is optimal for a random walk Metropolis-Hastings sampler <cit.>, and this works well as a rule of thumb for RJMCMC algorithms <cit.>. We aim to have our within-model samples accepted at close to this rate, and thus must not propose to change model too often. This is because model changes can significantly alter the conditional posterior p(𝐀|M^', 𝐲_1:T), often leading to a low acceptance probability. We recommend using a model retention probability of π_0 ≈ 0.8. We find that this gives enough iterations per model to average close to the optimal acceptance rate, whilst also proposing to jump models relatively frequently, allowing for the exploration of sparsity. We recommend setting π_-1 = 0.5, making the model proposal process symmetric, although π_-1 can reflect prior knowledge of the sparsity of 𝐀, with a larger π_-1 indicating a preference for sparsity. The algorithm is relatively insensitive to the values of π_0 and π_-1, allowing these parameters to be chosen easily.
However, π_0 and π_-1 can be tuned during the burn-in period, with the objective of reaching a given acceptance rate. We find that using a value of 0.5 works well for both parameters, due to the structure of the model space and the restricted parameter space of stable LGSSMs.
§.§ Step 2: Parameter sampling and mapping
Since our method applies MCMC to sample the posterior distribution p(𝐀|𝐲_1:T), we must define a parameter proposal routine. Once a model has been proposed, the algorithm proposes a parameter value, 𝐀^', which is constrained to the parameter space of M^'. The process by which we generate the parameter proposal depends on whether or not the proposed model M^' is the same as the previous model M_n-1. If M^' = M_n-1, the conditional posterior of the transition matrix, p(𝐀|𝐲_1:T, M^'), is sampled. Otherwise, the parameter value is mapped to the parameter space of M^'.
§.§.§ Sampling under a given model (Step 2.1)
To generate the parameter proposal 𝐀^', we sample from p(𝐀|M^', 𝐲_1:T), the posterior distribution of the transition matrix 𝐀 under the model M^'. This distribution can be written as p(𝐀|M^', 𝐲_1:T) = ∫ p(𝐱_0:T, 𝐀|𝐲_1:T, M^') d𝐱_0:T ∝ p(𝐀|M^') p(𝐲_1:T|𝐀), where p(𝐀|M^') is the prior assigned to the transition matrix under the proposed model, which can be written p(A_ij|M_n) ∼ p(A_ij) for (i,j) ∈ ℳ^', and p(A_ij|M_n) ∼ δ_0 otherwise, with p(A_ij) deriving from p(𝐀), and δ_0 denoting a point mass at 0. By substituting 𝐀 into eq. (<ref>) we obtain p(𝐲_1:T|𝐀) = p(𝐲_1|𝐀) ∏_i=2^T p(𝐲_i|𝐲_1:i-1, 𝐀), and can hence evaluate p(𝐀|M^') p(𝐲_1:T|𝐀), allowing us to sample from p(𝐀|M^', 𝐲_1:T). We propose to sample from this distribution using a random walk Metropolis-Hastings (RWMH) sampler. For the walk distribution, we use a Laplace distribution for each element of 𝐀, with all steps element-wise distributed i.i.d. Laplace(0, σ), with σ discussed below. We can thus view our parameter proposal as drawing from (A^')_ij ∼ Laplace((A_n-1)_ij, σ) for (i,j) ∈ ℳ^', and (A^')_ij ∼ δ_0 otherwise. The Laplace distribution is selected primarily due to its relationship with our proposed prior distribution for 𝐀, itself a Laplace distribution <cit.>. In addition, the mass concentration of the Laplace distribution means that the walk will primarily propose values close to the previous value, increasing the acceptance rate, but can also propose values that are further from the accepted value, improving the mixing of the sample chain. The value of σ is chosen to give a within-model acceptance rate near the optimal rate of 0.234 <cit.>, with σ = 0.1 consistently yielding rates close to this. A grid search suffices to select the value of this parameter as, for stable systems, the space of feasible values is small (σ < 1 is recommended).
§.§.§ Completion distributions (steps 2.2s, 2.2d)
In our algorithm, when a model jump occurs the dimension of the parameter space always changes. For example, jumping to a sparser model is equivalent, in the parameter space, to discarding parameters and decreasing the dimension of the parameter space. However, if jumping to a denser model, hence increasing the dimension of the parameter space, we require a method to assign a value to the new parameter. RJMCMC accomplishes this by augmenting the parameter mapping from model M^(i) to M^(j) with draws from a completion distribution g_i,j(·), defined for each possible model jump <cit.>. Rather than defining a distribution for every pair of models, we exploit the numerical properties of sparsity to define a global completion distribution g(·). In our samples, sparse elements take the value zero.
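As an illustration of how the parameter proposal of Step 2 interacts with this zero-valued encoding of sparsity, the following hedged sketch (ours, not code released with the paper) draws 𝐀^': when the model is unchanged, dense entries follow the element-wise Laplace random walk of Step 2.1; when the model changes, unchanged entries are copied, newly sparse entries are set to zero, and newly dense entries are completed with draws concentrated near zero, using a completion scale called sigma_c below (its choice is discussed in the remainder of this subsection).

```python
import numpy as np

def propose_parameters(A_prev, mask_prev, mask_prop, sigma=0.1, sigma_c=0.1, rng=None):
    """Draw A' under the proposed sparsity mask (Steps 2.1 and 2.2, sketched).

    - model unchanged: element-wise Laplace random walk on the dense entries;
    - model changed: identity mapping on kept entries, Laplace(0, sigma_c)
      completion draws for newly dense entries, exact zeros for sparse entries.
    """
    rng = np.random.default_rng() if rng is None else rng
    if np.array_equal(mask_prop, mask_prev):
        step = rng.laplace(0.0, sigma, size=A_prev.shape)
        return np.where(mask_prop, A_prev + step, 0.0)
    completion = rng.laplace(0.0, sigma_c, size=A_prev.shape)
    A_prop = np.where(mask_prop & mask_prev, A_prev, 0.0)          # kept entries copied
    A_prop = np.where(mask_prop & ~mask_prev, completion, A_prop)  # newly dense entries
    return A_prop
```

Note that zeros are written explicitly, so sparsity is exact in every stored sample rather than being imposed by thresholding afterwards.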
In order to propose parameters close to the previous parameters, we draw the value of newly dense elements such that the value is close to zero. In order to accomplish this, we choose a Laplace(0, σ_c) as the global completion distribution g(·), due to its mass concentration and its relation to the prior we propose in Section <ref>. The σ_c parameter is subject to choice, and for stable systems we find that σ_c ≈ 0.1 performs well, although the parameter could be tuned during the burn-in period. A simple grid search suffices to select the value of this parameter, as the method is robust to parameter specification via the accept-reject step <cit.>. For stable systems, the search space for this parameter is small, approximately (0, 0.5], so a grid search is most efficient in terms of computational cost. If the prior is chosen as per the recommendations in Section <ref>, then we can interpret this renewal process as drawing the values for newly dense elements from the prior.
§.§.§ Mapping between parameter spaces (steps 2.2s, 2.2d)
In order to jump models, we must be able to map between the parameter spaces of the previous model and the proposal. This is, in general, a difficult task <cit.>, but is eased in our case as the models we are sampling are specific cases of the same model, and thus the parameters are the same between models. We therefore use an augmented identity mapping to preserve the interpretation of parameter values between models. Written in terms of 𝐀, this mapping is given by (A^')_ij = (A_n-1)_ij if A_ij is unchanged, (A^')_ij = u_ij if A_ij becomes dense, and (A^')_ij = 0 if A_ij becomes sparse, with u_ij i.i.d. g(·). In order to obtain the Jacobian required to evaluate eq. (<ref>), the transformation must be written as applied to the parameter space, giving (A^')_ij = (A_n-1)_ij for (i,j) ∈ ℳ_n-1 ∩ ℳ^', and (A^')_ij = u_ij for (i,j) ∈ ℳ^' ∖ ℳ_n-1. As sparse elements are, by construction, not in the parameter space, they are not present in the transformation, and are taken to be zero in 𝐀^' by definition. The Jacobian of the parameter mapping given by eq. (<ref>) is a D^' × D^' identity matrix, and hence the Jacobian determinant term in Eq. (<ref>) is constant and equal to one.
§.§ Step 3: MH accept-reject
In this section, we first discuss the MH acceptance ratio, and then the way that prior knowledge of the transition matrix is incorporated, and how this relates to the model space. We then discuss the implications of sparsity in the samples.
§.§.§ Modified acceptance ratio
The modified Metropolis-Hastings acceptance ratio for our method is given by a_r^(n-1,n) = (g(u_n)/g(u_n-1)) (π_n,n-1/π_n-1,n) (p(𝐀^') p(𝐲_1:T|𝐀^')) / (p(𝐀_n-1) p(𝐲_1:T|𝐀_n-1)). The first term is a correction for detailed balance, required due to the stochastic completion of the parameter mappings. The second term is analogous to the modification required when using an asymmetric proposal in RWMH, but relating to the model space. The last term of the expression is the standard symmetric Metropolis acceptance ratio. Note that the Jacobian term from eq. (<ref>) is equal to one in our case, and is therefore omitted.
§.§.§ Incorporating prior knowledge
The prior distribution, p(𝐀), quantifies our pre-existing knowledge of 𝐀, including both its element values and its sparsity structure. We do not enforce sparsity via this distribution. Since it encodes our knowledge of all elements of 𝐀, we call it the overall prior. We can interpret the prior of 𝐀 conditional on a given model M_n, or p(𝐀|M_n), as encoding the sparsity from the model, and write it as in eq. (<ref>).
In this way, we can interpret the sparsity constraint as a series of priors. The relationship between the prior conditional on the model and the overall prior is given by p(𝐀) = ∑_M p(𝐀|M) p(M), where p(𝐀) is the overall prior, p(𝐀|M) is the conditional prior containing sparsity in the elements indexed by ℳ, given in eq. (<ref>), and p(M) is the prior assigned to the model space, which is described in Section <ref>. We apply the prior via the function Λ(𝐀_n-1, 𝐀^', λ), where λ is a vector of prior hyper-parameters. When written in terms of the prior, Λ(𝐀_n-1, 𝐀^', λ) = log(p(𝐀^'; λ)) - log(p(𝐀_n-1; λ)). We recommend an element-wise Laplace prior on the transition matrix 𝐀, given by p(A_ij) := Laplace(0, λ) with λ subject to choice, which results in Λ(𝐀_n-1, 𝐀^', λ) = λ(‖𝐀_n-1‖_1^1 - ‖𝐀^'‖_1^1) after combining all p(A_ij) to yield p(𝐀). This is equivalent to the LASSO penalty <cit.> in regression, which is known to promote sparsity. Experimentally, we find that choosing λ ∈ [exp(-2), exp(2)] consistently results in good performance. For more information on selecting parameters for the Laplace prior, and on the Laplace prior in general, we refer the reader to <cit.>. Note that the Laplace prior is the Bayesian equivalent of LASSO regression <cit.>, with penalties and priors having an equivalence in Bayesian statistics, as both encode the prior knowledge of a parameter. If the parameter λ is not determined based on prior knowledge, it is possible to use a simple grid search to select its value, as explained above. Furthermore, if a Laplace(0, σ_c) is used for the completion distribution g(·), then dense values are re-initialised using the prior, reinforcing the interpretation of the prior as our existing knowledge. Note that using this prior is not required to recover sparsity, as sparsity recovery occurs as a result of the model sampling. We have performed several runs utilising a diffuse prior on the parameter space, with the results being similar to those where our suggested prior is used. Priors other than the Laplace can be used without compromising the recovery of sparsity, such as the ridge prior or a diffuse prior. We use the LASSO penalty for its connection to sparsity, as well as for direct comparability to the existing literature. As the parameter proposal uses a standard MCMC scheme, a prior sensitivity analysis can be used to determine the effect of the chosen prior. In order to validate our recommendations, we have run multiple sensitivity analyses for the diffuse prior and several Laplace priors of varying scale, and have observed that the results are independent of the prior in all but the most extreme cases, in which the prior is nearly a point mass.
§.§.§ Model space prior
As the model space encodes only the sparsity of 𝐀, a prior on 𝐀 that incorporates this structure is also implicitly a prior on the model space. If no such prior is applied, then the implicit prior on the model space is diffuse, with p(M) = 2^-d_x^2 ∀ M. This follows from evaluating p(𝐀) at an arbitrary 𝐀 under each model via Eq. (<ref>), giving p(𝐀) = p(𝐀|M) when 𝐀 has the sparsity structure induced by M, and noting that p(𝐀|M) = 0 if 𝐀 does not have this structure. As this holds for all models, it follows that p(M) ∝ 1, and as the model space is discrete and finite, we can obtain an explicit value for the prior. Note that a diffuse prior is, in general, not allowed on the model space if using a posteriori model comparison methods <cit.>.
However, a diffuse prior is standard for RJMCMC <cit.>, as the model space is sampled and the model is dynamically assessed alongside the parameter. This diffuse prior encodes our lack of prior knowledge as to the specific sparsity structure of 𝐀.
§.§.§ Probabilistic Granger causality
In LGSSMs, an element x_i of the state space Granger-causes element x_j if knowledge of x_i at time t improves the prediction of x_j at time t+1. We can therefore derive probabilistic Granger-causal relationships from our samples of the transition matrix, as the sampler assesses the proposals using their likelihood, which is equivalent to assessing their predictive capabilities. These probabilistic Granger-causal relationships are powerful, as they allow the probability of a relationship between variables to be quantified. Note that A_ji being zero in the transition matrix does not necessarily mean independence of the state elements, but rather directed conditional independence on the scale of one time step. This conditional independence means that x_i does not Granger-cause x_j; however, x_i may indirectly affect x_j through another variable over multiple time steps.
§.§ Extending SpaRJ to other parameters
Our method can be used to obtain sparse estimates of any of the parameters of the LGSSM, although some modifications are required to extend the formulation given above (for the transition matrix). Extending our method to the observation matrix 𝐇 requires only the parameter and model proposals to be changed to reflect the size of 𝐇. Extending the method to covariance matrices requires the proposal value to be constrained such that the resulting matrix is positive semi-definite. If the method does not jump models, remaining in model M_t-1, a possible proposal distribution for the state covariance proposal 𝐐^' is 𝐐^' ∼ Wishart(𝐐_t-1/p, p), where p is larger than d_x, with p > 30 d_x working well experimentally. This proposal has expectation close to 𝐐_t-1, and is positive semi-definite. Note that this distribution is not applied to 𝐐 if estimating only 𝐀, e.g., in Section <ref> or in other subsections of Section <ref>, and applies only to the extension to sampling the covariance parameter. We enforce the sparsity structure of 𝐐_t-1 in 𝐐^' by setting elements sparse under M_t-1 to 0 after sampling but before the accept-reject step, replacing Step 2.1 in Algorithm <ref>, where the model does not change. The model proposal step (Step 2.2 in Algorithm <ref>) would also need to be modified, with the diagonal being dense at all times, and enforcing indices (i,j) and (j,i) to have the same sparsity. The model space is therefore reduced, and model adjacency is assessed via only the upper triangular part. This modification to the model proposal process completes the alterations required to use our method to sparsely sample the state covariance matrix. Note that the covariance parameters cannot be interpreted as encoding state connections, and therefore cannot be interpreted graphically in the same way as the transition matrix.
§.§ Computational cost
The computational cost of our method is very similar to that of regular MCMC methods when applied to state-space models. The most computationally intensive component of the algorithm is the evaluation of the Kalman filtering equations, with this being over 95% of the computational time in our testing.
The additional costs compared to a random walk Metropolis-Hastings (RWMH) method are one or two draws from a uniform distribution, zero or one draw from a truncated Poisson distribution (equivalent to a categorical distribution), and some additional array accesses and comparisons. These extra costs are negligible compared to the cost of evaluating the Kalman filtering equations, with the computational cost and complexity being determined by the matrix operations therein, resulting in a complexity of O(NT(d_x^3 + d_y^3)) for our algorithm, where the dominating d_x^3 and d_y^3 terms result from the matrix operations performed by the Kalman filter. The computational cost of our method is empirically demonstrated in Section <ref>, and is functionally equivalent to that of a standard MCMC method that does not explore sparsity. Thus, in practice, given the cost of the filtering equations, the sparsity is explored for free.
§ NUMERICAL STUDY
We now present the results of three sets of simulation studies to evaluate our method, showcasing the performance of SpaRJ in several scenarios. The section is divided into three synthetic data experiments and one real data problem. First, we evaluate the method with isotropic covariance matrices over variable d_x and T. Next, we investigate the effect of known and unknown anisotropic state covariance over variable d_x and λ. The third experiment explores the effect of the true level of sparsity D in the transition matrix on the quality of inference. We then use real data to recover geographical relationships from global temperature data. Finally, we explore the convergence characteristics of the method and check the associated guarantees. For the synthetic experiments, we generate observations following eq. (<ref>), with d_x = d_y, 𝐇 = I_d_x, and take 𝐱_0 = 1_d_x and T = 100 unless stated otherwise. The state covariance matrix 𝐐 is specified per study. We generate transition matrices and synthetic data for d_x ∈ {3, 6, 12}. Whilst this may seem limited in dimension, it equates to performing inference in 9-, 36-, and 144-dimensional spaces, as each element of the transition matrix is an independent parameter. Furthermore, we sample the model space, which is of size 2^d_x^2, e.g. 2^144 when d_x = 12. In all experiments, we run SpaRJ for N = 15000 iterations, discarding the first 5000 as burn-in. The matrix 𝐀_0 is generated using an EM scheme, initialised at a random element-wise standard normal matrix. We set π_0 = 0.8 and π_-1 = 0.5 in all cases. The LASSO penalty is used, with λ chosen per experiment. We use a truncated Poisson distribution for the jump size, with λ_j = 0.1. We contrast our proposed method with GraphEM <cit.>, an algorithm with similar goals based on proximal optimisation. In addition, we compare with the conditional Granger causality (CGC) method of <cit.> and the DAG-based method (DAGMA) of <cit.>. These methods do not exploit the state-space model structure, and are trained only on the observations 𝐲_1:T. We note that there are no other RJMCMC-based methods applicable to this problem. We therefore compare with a reference MCMC implementation that does not exploit sparsity, and is hence dense in all elements of the estimate. This is equivalent to running our method, but with p(M) = 0 except for the M corresponding to the fully dense matrix, hence effectively removing Step 1 in Algorithm <ref> and Section <ref>.
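Since the Kalman filter log-likelihood evaluations discussed above dominate the cost of both SpaRJ and the reference dense MCMC, it may help to spell that computation out. The following is a minimal, unoptimised sketch of our own (with no numerical safeguards) of log p(𝐲_1:T | 𝐀) for a candidate transition matrix, with the remaining LGSSM parameters held fixed; the variable names are our assumptions rather than the paper's notation.

```python
import numpy as np

def kalman_loglik(y, A, H, Q, R, m0, P0):
    """Log-likelihood log p(y_{1:T} | A) of a linear-Gaussian SSM via the Kalman filter.

    y: (T, d_y) array of observations; A, H, Q, R: transition, observation and
    noise covariance matrices; m0, P0: mean and covariance of the initial state.
    """
    T, d_y = y.shape
    m, P = np.asarray(m0, dtype=float), np.asarray(P0, dtype=float)
    loglik = 0.0
    for t in range(T):
        # Prediction step
        m_pred = A @ m
        P_pred = A @ P @ A.T + Q
        # Innovation and its covariance
        v = y[t] - H @ m_pred
        S = H @ P_pred @ H.T + R
        S_inv = np.linalg.inv(S)
        loglik += -0.5 * (d_y * np.log(2.0 * np.pi)
                          + np.linalg.slogdet(S)[1]
                          + v @ S_inv @ v)
        # Update step
        K = P_pred @ H.T @ S_inv
        m = m_pred + K @ v
        P = P_pred - K @ S @ K.T
    return loglik
```

In SpaRJ this routine is evaluated once per proposed 𝐀^', which is the source of the O(NT(d_x^3 + d_y^3)) complexity quoted above.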
We compare the metrics of RMSE, precision, recall, specificity (true negative rate), and F1 score, with an element being sparse encoded as a positive, and dense as a negative. We use these metrics, which are associated with classification rather than regression, as our method outputs truly sparse samples, and therefore allows for parameters to be classified as sparse or dense without thresholding on their numerical value, or using confidence/credible intervals. We take an element to be sparse under SpaRJ by majority vote of the samples. We average the metrics over 100 independent runs of each algorithm. The average time taken for the runs to complete is given, with the runs being performed in parallel on an 8-core processor. No special effort was put into optimising any single method, and all methods that use the Kalman filter utilise the same implementation thereof. Note that the DAGMA implementation is GPU accelerated, whereas all other methods utilise only the CPU. RMSE for SpaRJ and for the reference MCMC is calculated with respect to the mean of post-burn-in samples for each chain. Note that the RMSE is computed relative to 𝐀, not the sequence of underlying hidden states as is often the case. RMSE is not meaningful for CGC, as it estimates only connectivity. We generate our 𝐀 matrices by drawing the dense elements from a standard normal, and then divide 𝐀 by the magnitude of its maximal singular value to give a stable system.
§.§ Synthetic data validation
§.§.§ Isotropic covariances 𝐐 and 𝐑
We test the performance of the method with isotropic covariance matrices 𝐐 and 𝐑.
Dimension 3 matrix. We generate 𝐀 for dimension d_x = 3 with sparsity in one element per row and one element per column. We set 𝐐 = 𝐑 = I_d_x, 𝐏_0 = 10^-8 I_d_x, and λ = 1 = exp(0).
Dimension 6 block diagonal matrix. We generate 𝐀 for dimension d_x = 6 as a block diagonal matrix with 2×2 blocks. We set 𝐐 = 𝐑 = 10^-2 I_d_x, 𝐏_0 = 10^-8 I_d_x, and λ = exp(-1) ≈ 0.367.
Dimension 12 block diagonal matrix. We generate 𝐀 for dimension d_x = 12 as a block diagonal matrix with 2×2 blocks. We set 𝐐 = 𝐑 = 10^-2 I_d_x, 𝐏_0 = 10^-8 I_d_x, and λ = exp(-1) ≈ 0.367.
Table <ref> evidences a good performance from SpaRJ, exhibiting the capability to extract the sparsity structure in all examples. Furthermore, point estimates resulting from SpaRJ are consistently closer to the true value than those from comparable methods, as evidenced by the lower RMSE. Note that in all cases the DAG-based method recovered overly sparse graphs, as evidenced by the poor specificity scores. We further note that DAGMA is designed to recover acyclic graphs, with all graphs here being cyclical, further degrading its performance. In order to test the relationship between the recovered values and the number of observations T, we now demonstrate our method for different values of T ∈ [10, 150] using the same 3×3 system as previously. In Figure <ref>, we show averaged metrics over 100 independent runs for SpaRJ and GraphEM. We see that the longer the series, the better the overall performance, with SpaRJ giving a better overall performance than GraphEM. The change in the quality of inference with the time series length T illustrated in Figure <ref> is typical for parameter estimation methods in state-space modelling, as a longer series gives more statistical information with which to perform inference.
§.§.§ Known anisotropic state covariance
We now generate synthetic data using a less favourable regime, under an anisotropic state covariance 𝐐.
In order to do this, we note that all n × n covariance matrices Σ can be expressed in the form Σ = 𝐆^T Diag(e_1, e_2, …, e_n) 𝐆, where 𝐆 is an orthogonal matrix and the e_k are the eigenvalues of Σ, with e_1 ≥ e_2 ≥ ⋯ ≥ e_n > 0. To generate a covariance matrix, we first generate an orthogonal matrix 𝐆 following the algorithm of <cit.>. We then draw e_i ∼ U(0.5, 1.5), and sort in descending order. Finally, we obtain Σ via evaluation of Eq. (<ref>). In this way, we generate a random positive definite matrix, with all elements non-zero. To allow direct comparison with the previous results, we use the same set of model parameters as before, except that we randomly generate the state covariance for each system as above. In Table <ref> we see that there is only a small apparent difference in performance between isotropic and anisotropic state covariances, provided that the covariance is known. This is expected, as a known covariance would not affect the estimation of the value of the state transition matrix. However, when estimating sparsity, the anisotropic nature of the state covariance does have an effect. This is due to the values of the state elements affecting each other in more than one way, unlike in the isotropic covariance case. There is thus a small drop in metrics in all cases due to this additional source of error. We note that whilst DAGMA may seem to perform well due to the high F1 scores, it does this by recovering an overly sparse graph, as indicated by the low specificity. For example, only 9 elements are recovered as dense in the 12-dimensional system, out of a true 24 dense elements, which does not well represent the underlying system. We now perform a sensitivity analysis, in which we vary the strength of the prior by varying λ, and observe the effect on the results. We perform this analysis on the d_x = 12 system with a known anisotropic covariance. The results of this analysis are presented in Table <ref>. We see that the results are not dependent on the prior parameter, meaning that the parameter can be chosen without excess computation or prior knowledge.
§.§.§ Estimated unknown anisotropic covariance
In many scenarios, the true value of the state covariance 𝐐 is unknown, and must be estimated. As we wish to assess the performance of our method in this scenario, we use the same true state covariance as in Section <ref>, but input an estimated state covariance. However, as both 𝐀 and 𝐐 are now unknown, we must estimate both parameters in order to obtain an estimate for 𝐐. We therefore iteratively estimate 𝐀 and 𝐐 using their analytic maximisers, and input the resulting estimate for 𝐐 into the tested methods. The estimate of 𝐀 resulting from this initialisation is discarded, and is not used in our method, nor in any other method. We see in Table <ref> that our method performs well under these challenging conditions, consistently outperforming existing methods. Note that the CGC and DAGMA metrics are unchanged from the previous section, as these methods do not require an estimate of 𝐐. The deterioration of the metrics is expected in this experiment, as we are inferring both the value of 𝐐 and the value of 𝐀 from the same data, with the estimation of 𝐀 being conditional on the estimated value of 𝐐. However, the sparsity structures of the estimates are better than those of comparable methods, and the parameter value is still well estimated, as evidenced by the RMSE value.
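For reproducibility, the random quantities used in these synthetic experiments can be generated along the following lines. This is a hedged sketch of our own: the orthogonal matrix is obtained from a QR factorisation with a column sign correction (one standard realisation of the kind of algorithm cited above), the eigenvalues are drawn uniformly on (0.5, 1.5) as described, and the sparse stable transition matrices follow the standard-normal-then-rescale recipe given earlier in this section.

```python
import numpy as np

def random_orthogonal(n, rng):
    # QR of a Gaussian matrix with a column sign fix, giving a Haar-distributed Q.
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

def random_state_covariance(n, rng, low=0.5, high=1.5):
    # Sigma = G^T diag(e_1, ..., e_n) G with sorted eigenvalues e_i ~ U(low, high).
    G = random_orthogonal(n, rng)
    e = np.sort(rng.uniform(low, high, size=n))[::-1]
    return G.T @ np.diag(e) @ G

def random_stable_sparse_A(mask, rng):
    # Dense entries drawn from a standard normal; rescaled by the largest singular
    # value, as described above. Assumes the mask has at least one dense entry.
    A = rng.standard_normal(mask.shape) * mask
    return A / np.linalg.svd(A, compute_uv=False).max()

rng = np.random.default_rng(0)
Q_true = random_state_covariance(12, rng)
```

Covariances produced this way are positive definite and, in general, have no zero entries, matching the description above.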
§.§.§ Variable levels of sparsity
We now explore the performance of our method under variable levels of sparsity. To facilitate direct comparison, all other parameters of the state-space model remain the same between sparsity levels, as does the matrix from which 𝐀 is generated. This experiment is performed on a 4×4 transition matrix, with 𝐐 = 𝐑 = I_4, 𝐏_0 = 10^-8 I_4, T = 50, and λ = 0.5. Note that, in systems with many dense elements, the effect of each element can be emulated by changing the values of a number of other elements, making sparsity recovery difficult in these cases. Our algorithm performs well in general, consistently outperforming other methods in this setting. We observe from Figure <ref> that both methods generally perform better as the number of sparse elements increases. This is due to sharper likelihood changes occurring when the sparsity changes when assessing these models. For particularly dense transition matrices, GraphEM outperforms SpaRJ, due to the model proposal step of SpaRJ requiring more transitions to walk the larger space. This could be remedied with a prior encoding more information for these sampling regimes. For example, a penalty relating to the number of sparse elements could be incorporated into the prior. This ease of encoding prior preference in the model space is a strength of SpaRJ, and is not possible in comparable methods. Furthermore, SpaRJ can be assisted by GraphEM via the provision of a sparse initial value 𝐀_0, which will greatly speed convergence. We have not done this for any of the numerical experiments, but in practice we recommend doing so.
§.§ Application to global temperature data
We now apply our method to real data. We use the average daily temperature of 324 cities from 1995 to 2021, curated by the United States Environmental Protection Agency <cit.>. We subset the data to the cities of London (GB), Paris (FR), Rome (IT), Melbourne (AU), Houston (US), and Rio de Janeiro (BR) in 2010. We subset to a single year to avoid missing data. We set the parameters as follows: π_-1 = 0.5, π_0 = 0.8, λ = 0.5, and λ_j = 0.2. We estimate 𝐐 using the EM scheme detailed in Section <ref>, and set 𝐇 = I_6 and 𝐑 = 0.5 I_6 as per the data specification. The results for GraphEM are given graphically in Figure <ref>, while Figure <ref> displays the results for SpaRJ. In Figure <ref>, edge thickness for GraphEM is proportional to the number of times an edge appears in 100 independent runs, whereas for SpaRJ edge thickness is proportional to the number of post-burn-in samples from 100 chains in which the edge is present. It is well known that weather phenomena are highly non-linear, and are driven by both local and global factors. Not all driving factors are recorded in the data, making it a challenging task to extract statistically causal relationships between locations. For example, neither barometric pressure nor rainfall are utilised, which would assist with localisation <cit.>. We see that the geographical relationships between the cities are well recovered by SpaRJ: in Figure <ref>, the European cities form one cluster, Melbourne is separate, and Rio weakly affects Houston. Second, note that there exists no ground truth to compare against in this problem. GraphEM recovers the graph given in Figure <ref>, which does not reflect the geographical positioning of the cities, as there are connections across large distances, which is not physically reasonable. In addition, this graph cannot be interpreted probabilistically, as is possible with SpaRJ.
On the other hand, GraphEM is generally faster than SpaRJ. The SpaRJ estimate, presented in Figure <ref>, offers the capacity for additional inference. For instance, in Figure <ref>, all edges recovered by GraphEM are of a similar thickness, indicating that they are recovered by many independent runs of the algorithm. This is a desirable characteristic of GraphEM, as it is indicative of good convergence, although it does not admit a probabilistic interpretation of state connectivity. SpaRJ recovers the edges probabilistically, which is made apparent in Figure <ref> by the variable edge thicknesses. In SpaRJ, the number of post-burn-in samples in which a given edge is present gives an estimate of the probability that this edge is present. This is of particular interest when inferring potential causal relationships, as in this example. This property follows from the broader capacity of SpaRJ to provide Monte Carlo uncertainty quantification. For example, a credible interval for the probability of an edge being present can easily be obtained via bootstrapping with the output of SpaRJ, which is not possible with GraphEM, as it is designed to converge to a point. We used a value of λ = 1.2 in GraphEM for this estimation. However, increasing λ would make the self-self edges disappear (i.e., zeros would appear in the diagonal of 𝐀 before edges among cities would be removed). Comparing the results in Figure <ref>, we see that the output from SpaRJ is more feasible when accounting for geophysical properties. It is not reasonable for the weather of cities to affect each other across very large distances and oceans over a daily timescale, and the parameter estimate should reflect this. This spatial isolation is present in the SpaRJ estimate in Figure <ref>, but is absent from the GraphEM result in Figure <ref>.
§.§ Assessing convergence
As our method is an MCMC method, it is not desirable for it to converge in a point-wise sense; however, it is desirable for it to converge in distribution to the target distribution. We cannot use standard metrics such as R̂ <cit.> to assess convergence, as these assume that the same parameters are being estimated at all times, which is not the case in our method. In the literature there are specific methods to assess convergence for RJMCMC algorithms, with the proposed methods <cit.> breaking down for large model spaces with few visited models (as is the case here). However, due to the tight linking between the model space and the parameter space, we can assess convergence via a combination of model metrics and parameter statistics <cit.>. In order to properly assess convergence, we must take into account both the model space and the parameter space, and assess convergence in both. As our model space is closely linked to the values in the parameter space, we are able to assess convergence by jointly observing parameter and model metrics. We track both the spectral norm of the sampled 𝐀_n (parameter metric) and the number of sparse elements (model metric), and plot them in Figure <ref>. We observe that convergence in both the parameter space and the model space occurs quickly, and that convergence seems to occur before the burn-in period ends. This is the case for all examples, with the exemplar system being the slowest to converge. It is possible to decrease the time to convergence in several ways, such as better estimates of the model parameters.
However, parameters such as π_0 and π_-1 will also alter the speed of convergence, although the manner in which they do so is dependent on the true dynamics. We note that convergence is faster for lower-dimensional 𝐀, and conversely is slower for larger 𝐀. This is due to a larger matrix having more variables to estimate, and hence requiring sampling from a higher-dimensional space. Furthermore, the sampler converges faster for longer series lengths. Experimentally, sparser models benefit from a larger π_-1, whereas denser models benefit from a smaller π_-1. Increasing π_0 increases convergence speed in the parameter space, but decreases convergence speed in the model space. Convergence could also be improved by using a gradient-based sampler for the parameter posterior, as the gradient is available in closed form <cit.>. We find that the increase in computational cost and the reduction in modularity is not worth the increased speed of convergence. Finally, note that our method inherits the convergence guarantees of RWMH and RJMCMC, and therefore for a finite-dimensional parameter space we are guaranteed to converge to the target sampling distribution given sufficient iterations.
§ CONCLUSION
In this work we have proposed the SpaRJ algorithm, a novel Bayesian method for recovering sparse estimates of the transition matrix of a linear-Gaussian state-space model. In addition, SpaRJ provides Bayesian uncertainty quantification of Granger causality between state elements, following from the interpretation of the transition matrix as representing information flow within an LGSSM. The method, built on reversible jump Markov chain Monte Carlo, has strong theoretical guarantees, displays performance exceeding state-of-the-art methods in both challenging synthetic experiments and when operating on real-world data, and exhibits great potential for extension.
§.§ Guidance for choice of parameters
§.§ Truncated Poisson distribution
Denote the Poisson distribution with rate λ that is left-truncated at a and right-truncated at b by TPoi(λ, a, b). This distribution has support n ∈ ℕ ∩ [a, b], and probability mass function TPoi(n; λ, a, b) = λ^n e^-λ / (Z · n!), with Z = ∑_n=a^b λ^n e^-λ / n!.
§.§ Correction terms
In order to maintain detailed balance in the sampling chain, we must account for the unequal model transition probabilities, which is done via a correction term. These terms arise from the RJMCMC acceptance probability, (l^'/l) (π_n+1,n/π_n,n+1) (g(u_n)/g(u_n+1)) |∂ T_n,n+1(𝐀_n, u_n)/∂(𝐀_n, u_n)|, in which π_n+1,n/π_n,n+1 is the ratio of the probability of the reverse jump to that of the forward jump. All other terms in the acceptance ratio are calculated in Algorithm <ref>, with the Jacobian term ignored as per Section <ref>. As we are using log-likelihoods and log acceptance ratios, we compute our correction on the log scale. For a given jump distance J, we denote this log correction term c_j,J, with j = s when jumping sparser, and j = d when jumping denser. This term is equal to the log(π_n+1,n/π_n,n+1) term in the acceptance ratio in Step 3 of Algorithm <ref>. The calculations for both sparser jumps and denser jumps proceed similarly, thus we detail only the derivation for sparser jumps. The forward jump is a jump sparser, which occurs with probability π_-1. When jumping sparser we truncate the jump distribution at D_n. Hence, the probability of drawing a given jump length J for the jump distance is TPoi(J; λ, 1, D_n). Given J, the probability of choosing a given set of elements in the forward jump is the inverse of the binomial coefficient (D_n choose J), i.e. J!(D_n-J)!/D_n!.
Multiplying these terms we obtain π_n,n+1 = π_-1 TPoi(J; λ, 1, D_n) J! (D_n-J)!(D_n!)^-1. The reverse jump is a jump denser, which occurs with probability1-π_-1. When jumping sparser, we truncate the jump distribution atS_n+J, the number of sparse elements after the forward jump occurs. Hence the probability of drawing a givenJfor the jump distance isTPoi(J; λ, 1, S_n + J). GivenJ, the probability of choosing a given set of elements in the reverse jump isS_n+JJ^-1. Multiplying these terms we obtain π_n+1,n = (1-π_-1) TPoi(J; λ, 1, S_n+J) J! S_n!(S_n+J)!^-1. From which we obtain the acceptance ratio π_n+1,n/π_n,n+1 = (1-π_-1)TPoi(J; λ, 1, S_n+J)S_n! D_n!/π_-1TPoi(J; λ, 1, D_n)(S_n+J)! (D_n-J)!, Writingr := (1-π_-1)(π_-1)^-1we then have exp(c_s,J) = rTPoi(J; λ, 1, S_n+J)S_n! D_n!/TPoi(J; λ, 1, D_n)(S_n+J)! (D_n-J)!, with the corresponding term for the denser jump being exp(c_d,J) = 1/rTPoi(J; λ, 1, D_n+J)S_n! D_n!/TPoi(J; λ, 1, S_n)(D_n+J)! (S_n-J)!. In the case of jumping to and from maximal density (MD) and maximal sparsity (MS) further adjustment is required. In these cases we replacerfollowing Table <ref>. IEEEtran 10 url@samestyleaghagolzadeh2014latent M. Aghagolzadeh and W. Truccolo, “Latent state-space models for neural decoding,” in 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 1em plus 0.5em minus 0.4em IEEE, 2014, pp. 3033–3036. scienceImperial E. S. Knock, L. K. Whittles, J. A. Lees, P. N. Perez-Guzman, R. Verity, R. G. FitzJohn, K. A. M. Gaythorpe, N. Imai, W. Hinsley, L. C. Okell, A. Rosello, N. Kantas, C. E. Walters, S. Bhatia, O. J. Watson, C. Whittaker, L. Cattarino, A. Boonyasiri, B. A. Djaafara, K. Fraser, H. Fu, H. Wang, X. Xi, C. A. Donnelly, E. Jauneikaite, D. J. Laydon, P. J. White, A. C. Ghani, N. M. Ferguson, A. Cori, and M. Baguelin, “Key epidemiological drivers and impact of interventions in the 2020 SARS-CoV-2 epidemic in England,”Sci Transl Med, vol. 13, no. 602, 07 2021. WoodPLOS S. N. Wood and E. C. Wit, “Was R less than 1 before the English lockdowns? On modelling mechanistic detail, causality and inference about Covid-19,”PLOS ONE, vol. 16, no. 9, pp. 1–19, 09 2021. [Online]. Available: <https://doi.org/10.1371/journal.pone.0257455>grewal2010applications M. S. Grewal and A. P. Andrews, “Applications of Kalman filtering in aerospace 1960 to the present [historical perspectives],”IEEE Control Systems Magazine, vol. 30, no. 3, pp. 69–78, 2010. hamilton1986standard J. D. Hamilton, “A standard error for the estimated state vector of a state-space model,”Journal of Econometrics, vol. 33, no. 3, pp. 387–397, 1986. sarkka2013bayesian S. Särkkä, Bayesian filtering and smoothing. 1em plus 0.5em minus 0.4em Cambridge University Press, 2013, no. 3. kalman1960new R. E. Kalman, “A new approach to linear filtering and prediction problems,”Transactions of the ASME–Journal of Basic Engineering, vol. 82, no. Series D, pp. 35–45, 1960. einicke1999robust G. A. Einicke and L. B. White, “Robust extended Kalman filtering,”IEEE Transactions on Signal Processing, vol. 47, no. 9, pp. 2596–2599, 1999. wan2000unscented E. A. Wan and R. Van Der Merwe, “The unscented Kalman filter for nonlinear estimation,” in Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No. 00EX373). 1em plus 0.5em minus 0.4em IEEE, 2000, pp. 153–158. Gordon93 N. Gordon, D. Salmond, and A. F. M. 
Smith, “Novel approach to nonlinear and non-Gaussian Bayesian state estimation,”IEE Proceedings-F Radar and Signal Processing, vol. 140, pp. 107–113, 1993. djuric2003particle P. M. Djuric, J. H. Kotecha, J. Zhang, Y. Huang, T. Ghirmai, M. F. Bugallo, and J. Miguez, “Particle filtering,”IEEE signal processing magazine, vol. 20, no. 5, pp. 19–38, 2003. doucet2009tutorial A. Doucet, A. M. Johansen et al., “A tutorial on particle filtering and smoothing: Fifteen years later,”Handbook of nonlinear filtering, vol. 12, no. 656-704, p. 3, 2009. elvira2019elucidating V. Elvira, L. Martino, M. F. Bugallo, and P. M. Djuric, “Elucidating the auxiliary particle filter via multiple importance sampling [lecture notes],”IEEE Signal Processing Magazine, vol. 36, no. 6, pp. 145–152, 2019. branchini2021optimized N. Branchini and V. Elvira, “Optimized auxiliary particle filters: adapting mixture proposals via convex optimization,” in Uncertainty in Artificial Intelligence. 1em plus 0.5em minus 0.4em PMLR, 2021, pp. 1289–1299. andrieu2010particle C. Andrieu, A. Doucet, and R. Holenstein, “Particle Markov chain Monte Carlo methods,”Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 72, no. 3, pp. 269–342, 2010. watts1998collective D. J. Watts and S. H. Strogatz, “Collective dynamics of ‘small-world’ networks,”Nature, vol. 393, no. 6684, pp. 440–442, 1998. chouzenoux2020graphem E. Chouzenoux and V. Elvira, “Graphem: EM algorithm for blind Kalman filtering under graphical sparsity constraints,” in ICASSP 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 1em plus 0.5em minus 0.4em IEEE, 2020, pp. 5840–5844. pirayre2018brane A. Pirayre, C. Couprie, L. Duval, and J. Pesquet, “BRANE Clust: Cluster-assisted gene regulatory network inference refinement,”IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 15, no. 3, pp. 850–860, May 2018. luengo2019hierarchical D. Luengo, G. Rios-Munoz, V. Elvira, C. Sanchez, and A. Artes-Rodriguez, “Hierarchical algorithms for causality retrieval in atrial fibrillation intracavitary electrograms,”IEEE Journal of Biomedical and Health Informatics, vol. 12, no. 1, pp. 143–155, Jan. 2019. ravazzi2017learning C. Ravazzi, R. Tempo, and F. Dabbene, “Learning influence structure in sparse social networks,”IEEE Transactions on Control of Network Systems, vol. PP, pp. 1–1, 12 2017. richiardi2013machine J. Richiardi, S. Achard, B. Horst, , and D. V. D. Ville, “Machine learning with brain graphs,”IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 58–70, 2013. elvira2022graphical V. Elvira and É. Chouzenoux, “Graphical inference in linear-gaussian state-space models,”IEEE Transactions on Signal Processing, vol. 70, pp. 4757–4771, 2022. chouzenoux2023graphit E. Chouzenoux and V. Elvira, “Graphit: Iterative reweighted ℓ_1 algorithm for sparse graph inference in state-space models,” in ICASSP 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 1em plus 0.5em minus 0.4em IEEE, 2023, pp. 1–5. green1995reversible P. J. Green, “Reversible jump Markov chain Monte Carlo computation and Bayesian model determination,”Biometrika, vol. 82, no. 4, pp. 711–732, 1995. cappe2003reversible O. Cappé, C. P. Robert, and T. Rydén, “Reversible jump, birth-and-death and more general continuous time Markov chain Monte Carlo samplers,”Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 65, no. 3, pp. 679–700, 2003. robert2013monte C. Robert and G. 
Casella, Monte Carlo statistical methods. 1em plus 0.5em minus 0.4em Springer Science & Business Media, 2013. richardson1997bayesian S. Richardson and P. J. Green, “On Bayesian analysis of mixtures with an unknown number of components (with discussion),”Journal of the Royal Statistical Society: series B (statistical methodology), vol. 59, no. 4, pp. 731–792, 1997. cox2022parameter B. Cox and V. Elvira, “Parameter estimation in sparse linear-gaussian state-space models via reversible jump markov chain monte carlo,” in 2022 30th European Signal Processing Conference (EUSIPCO). 1em plus 0.5em minus 0.4em IEEE, 2022, pp. 797–801. kantas2015particle N. Kantas, A. Doucet, S. S. Singh, J. Maciejowski, and N. Chopin, “On particle methods for parameter estimation in state-space models,” 2015. doucet2003parameter A. Doucet and V. B. Tadić, “Parameter estimation in general state-space models using particle methods,”Annals of the institute of Statistical Mathematics, vol. 55, pp. 409–422, 2003. campillo2009convolution F. Campillo and V. Rossi, “Convolution particle filter for parameter estimation in general state-space models,”IEEE Transactions on Aerospace and Electronic Systems, vol. 45, no. 3, pp. 1063–1072, 2009. eliason1993maximum S. R. Eliason, Maximum likelihood estimation: Logic and practice. 1em plus 0.5em minus 0.4em Sage, 1993, no. 96. dempster1977maximum A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,”Journal of the Royal Statistical Society: Series B (Methodological), vol. 39, no. 1, pp. 1–22, 1977. rue2009approximate H. Rue, S. Martino, and N. Chopin, “Approximate Bayesian inference for latent Gaussian models by using integrated nested laplace approximations,”Journal of the royal statistical society: Series b (statistical methodology), vol. 71, no. 2, pp. 319–392, 2009. blei2017variational D. M. Blei, A. Kucukelbir, and J. D. McAuliffe, “Variational inference: A review for statisticians,”Journal of the American statistical Association, vol. 112, no. 518, pp. 859–877, 2017. scott1998atomic S. C. Scott, L. D. David, and A. S. Michael, “Atomic decomposition by basis pursuit,”SIAM journal on scientific computing, vol. 20, no. 1, pp. 33–61, 1998. mohimani2008fast H. Mohimani, M. Babaie-Zadeh, and C. Jutten, “A fast approach for overcomplete sparse decomposition based on smoothed `l0-norm',”IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 289–301, 2008. zayyani2009iterative H. Zayyani, M. Babaie-Zadeh, and C. Jutten, “An iterative bayesian algorithm for sparse component analysis in presence of noise,”IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4378–4390, 2009. ji2008bayesian S. Ji, Y. Xue, and L. Carin, “Bayesian compressive sensing,”IEEE Transactions on signal processing, vol. 56, no. 6, pp. 2346–2356, 2008. schniter2008fast P. Schniter, L. C. Potter, and J. Ziniel, “Fast bayesian matching pursuit,” in 2008 Information Theory and Applications Workshop. 1em plus 0.5em minus 0.4em IEEE, 2008, pp. 326–333. zayyani2009bayesian H. Zayyani, M. Babaie-Zadeh, and C. Jutten, “Bayesian pursuit algorithm for sparse representation,” in 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. 1em plus 0.5em minus 0.4em IEEE, 2009, pp. 1549–1552. korki2016iterative M. Korki, J. Zhang, C. Zhang, and H. Zayyani, “Iterative bayesian reconstruction of non-iid block-sparse signals,”IEEE Transactions on Signal Processing, vol. 64, no. 13, pp. 3297–3307, 2016. llorente2020marginal F. Llorente, L. 
Martino, D. Delgado, and J. Lopez-Santiago, “Marginal likelihood computation for model selection and hypothesis testing: an extensive review,”arXiv preprint arXiv:2005.08334, 2020. llorente2022safe F. Llorente, L. Martino, E. Curbelo, J. López-Santiago, and D. Delgado, “On the safe use of prior densities for bayesian model selection,”Wiley Interdisciplinary Reviews: Computational Statistics, p. e1595, 2022. martino2017cooperative L. Martino, J. Read, V. Elvira, and F. Louzada, “Cooperative parallel particle filters for online model selection and applications to urban mobility,”Digital Signal Processing, vol. 60, pp. 172–185, 2017. tibshirani1996regression R. Tibshirani, “Regression shrinkage and selection via the lasso,”Journal of the Royal Statistical Society: Series B (Methodological), vol. 58, no. 1, pp. 267–288, 1996. park2008bayesian T. Park and G. Casella, “The Bayesian lasso,”Journal of the American Statistical Association, vol. 103, no. 482, pp. 681–686, 2008. carvalho2010horseshoe C. M. Carvalho, N. G. Polson, and J. G. Scott, “The horseshoe estimator for sparse signals,”Biometrika, vol. 97, no. 2, pp. 465–480, 2010. zheng2018dags X. Zheng, B. Aragam, P. K. Ravikumar, and E. P. Xing, “Dags with no tears: Continuous optimization for structure learning,”Advances in neural information processing systems, vol. 31, 2018. wei2020dags D. Wei, T. Gao, and Y. Yu, “Dags with no fears: A closer look at continuous optimization for learning bayesian networks,”Advances in Neural Information Processing Systems, vol. 33, pp. 3895–3906, 2020. yu2021dags Y. Yu, T. Gao, N. Yin, and Q. Ji, “Dags with no curl: An efficient dag structure learning approach,” in International Conference on Machine Learning. 1em plus 0.5em minus 0.4em PMLR, 2021, pp. 12 156–12 166. bello2022dagma K. Bello, B. Aragam, and P. Ravikumar, “Dagma: Learning dags via m-matrices and a log-determinant acyclicity characterization,”arXiv preprint arXiv:2209.08037, 2022. granger1969investigating C. W. J. Granger, “Investigating causal relations by econometric models and cross-spectral methods,”Econometrica, vol. 37, no. 3, pp. 424–438, 1969. [Online]. Available: <http://www.jstor.org/stable/1912791>pagel2006bayesian M. Pagel and A. Meade, “Bayesian analysis of correlated evolution of discrete characters by reversible-jump Markov chain Monte Carlo,”The American Naturalist, vol. 167, no. 6, pp. 808–825, 2006. andrieu1999joint C. Andrieu and A. Doucet, “Joint bayesian model selection and estimation of noisy sinusoids via reversible jump mcmc,”IEEE Transactions on Signal Processing, vol. 47, no. 10, pp. 2667–2676, 1999. vermaak2004reversible J. Vermaak, C. Andrieu, A. Doucet, and S. Godsill, “Reversible jump markov chain monte carlo strategies for bayesian model selection in autoregressive processes,”Journal of Time Series Analysis, vol. 25, no. 6, pp. 785–809, 2004. bunch2016bayesian P. Bunch, J. Murphy, and S. Godsill, “Bayesian learning of degenerate linear gaussian state space models using markov chain monte carlo,”IEEE Transactions on Signal Processing, vol. 64, no. 16, pp. 4100–4112, 2016. dootika2019multivariate D. Vats, J. M. Flegal, and G. L. Jones, “Multivariate output analysis for Markov chain Monte Carlo,”Biometrika, vol. 106, no. 2, pp. 321–337, 04 2019. [Online]. Available: <https://doi.org/10.1093/biomet/asz002>gelman1997weak A. Gelman, W. R. Gilks, and G. O. Roberts, “Weak convergence and optimal scaling of random walk Metropolis algorithms,”The Annals of Applied Probability, vol. 7, no. 1, pp. 110 – 120, 1997. [Online]. 
Available: <https://doi.org/10.1214/aoap/1034625254>gagnon2019weak P. Gagnon, M. Bédard, and A. Desgagné, “Weak convergence and optimal tuning of the reversible jump algorithm,”Mathematics and Computers in Simulation, vol. 161, pp. 32–51, 2019. mezzadri2007generate F. Mezzadri, “How to generate random matrices from the classical compact groups,”Notices of the American Mathematical Society, vol. 54, no. 5, pp. 592–604, 2007. udaytd“United states environmental protection agency average daily temperature archive,” United States Environmental Protection Agency. [Online]. Available: <https://academic.udayton.edu/kissock/http/Weather/default.htm>bauer2015quiet P. Bauer, A. Thorpe, and G. Brunet, “The quiet revolution of numerical weather prediction,”Nature, vol. 525, no. 7567, pp. 47–55, 2015. gelman2013bayesian A. Gelman, J. Carlin, H. Stern, D. Dunson, A. Vehtari, and D. Rubin, Bayesian Data Analysis, Third Edition, ser. Chapman & Hall/CRC Texts in Statistical Science. 1em plus 0.5em minus 0.4em Taylor & Francis, 2013.
http://arxiv.org/abs/2306.15677v1
20230612100007
Measuring the continuous research impact of a researcher: The Kz index
[ "Kiran Sharma", "Ziya Uddin" ]
cs.DL
[ "cs.DL" ]
Measuring the continuous research impact of a researcher: The K_z index
Kiran Sharma ([email protected]) and Ziya Uddin ([email protected])
School of Engineering & Technology, BML Munjal University, Gurugram, Haryana-122413, India (corresponding authors)
The ongoing discussion regarding the utilization of individual research performance for academic hiring, funding allocation, and resource distribution has prompted the need for improved metrics. While traditional measures such as total publications, citation count, and the h-index provide a general overview of research impact, they fall short of capturing the continuous contribution of researchers over time. To address this limitation, we propose the K_z index, which takes into account both publication impact and age. In this study, we calculated K_z scores for 376 research profiles. K_z reveals that researchers with the same h-index can exhibit different K_z scores, and vice versa. Furthermore, we observed instances where researchers with lower citation counts obtained higher K_z scores, and vice versa. Interestingly, the K_z metric follows a log-normal distribution. This highlights its potential as a valuable tool for ranking researchers and facilitating informed decision-making processes. By measuring continuous research impact, we enable fair evaluations, enhance decision-making processes, and provide focused career advancement support and funding opportunities.
* Traditional metrics such as total publications, citation count, and the h-index provide an overall measure of research impact but fail to capture the continuous contribution of researchers. Therefore, there is a need for a robust tool to measure continuous research impact.
* The proposed K_z index is introduced as a solution, taking into account both the impact and age of publications.
* Even if two or more researchers have identical total publications, citation counts, and h-index, it is unlikely that they share the same K_z scores. This characteristic of the K_z index makes it a valuable ranking tool.
* The K_z index enables the identification of both star contributors and those with lower impact in the realm of research.
* By measuring continuous research impact, a more comprehensive assessment can be achieved, leading to fair evaluations towards career progression support and research funding.
Keywords: h-index, Research impact, Research evaluation, Citation analysis, Science policy, Scientometrics
§ INTRODUCTION
Research impact is a crucial factor when evaluating the contributions of researchers <cit.>. It plays a vital role in assessing the quality, significance, and reach of their work, which is instrumental in academic promotions, grant allocations, award selections, and overall career progression. Existing indices like the h-index and citation count are commonly used to measure research impact <cit.>; however, it is important to recognize that citations may not provide a comprehensive representation of impact, especially in fields where citation practices differ or in emerging research domains with limited citation opportunities. Therefore, a more nuanced approach is necessary to capture the full extent of research impact, considering multiple dimensions beyond traditional metrics.
The h-index has been subject to criticism due to its limitations in providing a comprehensive view of scientific impact <cit.>. Initially introduced in 2005 by Hirsch, the h-index is calculated based on the number of papers that have received at least h citations from other papers <cit.>. Since its introduction, the h-index has gained significant popularity in academia and has been commonly employed to evaluate the academic success of scientists in various areas, including hiring decisions, promotions, and grant acceptances. Despite efforts by researchers to propose alternative variants of the h-index <cit.>, the traditional h-index remains widely used as a performance metric in the assessment of scientists because of its simplicity. To overcome the limitations of the h-index, Egghe in 2006 proposed the g-index, which is determined by the distribution of citations across a researcher's publications. It is computed by sorting the articles in decreasing order of the number of citations they have received. The g-index is defined as the largest number g for which the top g articles collectively accumulate at least g^2 citations <cit.>. This means that a researcher with a g-index of 10 has published at least 10 articles that collectively have received at least 10^2 = 100 citations. It is important to note that, unlike the h-index, the citations contributing to the g-index can be generated by only a small number of articles. For example, a researcher with 10 papers, where 5 papers have no citations and the remaining five have 350, 35, 10, 2, and 2 citations respectively, would have a g-index of 10 but an h-index of 3 (as only three papers have at least three citations each). Further, after recognizing the limitations of the h-index <cit.>, researchers have proposed various complementary measures to provide a more comprehensive assessment of research impact, such as the R-index <cit.>, the e-index <cit.>, and the h'-index <cit.>. In the study by Khurana et al. (2022) <cit.>, an enhancement to the h-index is proposed to capture the impact of a researcher's most highly cited paper. They introduced h_c, which is based on the weight assigned to the highly cited paper. h_c has a greater impact on researchers with lower h-index values, particularly by highlighting the significance of their highly cited paper. However, the effect of h_c on established researchers with higher h-index values was found to be negligible. It is worth noting that h_c focuses on the first highly cited paper and does not consider the impact of subsequent highly cited papers. This limitation again highlights the need for a more comprehensive measure that takes into account all the important factors contributing to research impact <cit.>. Another measure, the L-sequence, introduced by Liu et al. <cit.>, computes the h-index sequence for cumulative publications while taking into account the yearly citation performance. In this approach, the L number is calculated based on the h-index concept for a specific year. Consequently, the impact of the most highly cited paper in that year may be overlooked, and papers with fewer than L citations are also not considered. Although the concept captures the yearly citation performance of all papers, it does not effectively capture the continuous impact of each individual paper. Gathering data for the L-sequence is also challenging, as it requires delving into the citation history of each paper for every year. Quantifying research impact is a multifaceted endeavor <cit.>.
There is no universally accepted metric or methodology for measuring continuous research impact, and different stakeholders may prioritize different indicators, such as publications, citations, patents, or societal impact. Measuring the continuous research impact of a researcher is crucial for granular assessment, differentiation among researchers, funding decisions, identification of emerging talent, etc. Determining an inclusive and comprehensive approach that captures the diverse dimensions of research impact remains a challenge.
§.§ Research Objective
The primary objective of this study is to introduce a reliable metric that can effectively capture the continuous research impact of a researcher. The proposed metric aims to differentiate between two researchers who possess identical research parameters. In order to accomplish the stated objective, a newly introduced measure called the K_z index is proposed.
§ K_Z-INDEX
The proposed K_z index serves as a tool to measure the research impact of a researcher. It aims to capture the continuous and evolving contributions made by the researcher over time, considering factors such as total publications, citation count, and publication age.
§.§ Definition of K_z-index
To measure the continuous research impact of a researcher, K_z takes into account two important factors of research:
* Impact (k): The impact of a paper is determined by considering two factors: the number of citations (C) it has received and the researcher's h-index. The impact of the paper is calculated using the following relation: C ≤ (h+1)^k, where k ∈ ℝ^+.
* Age (Δt): Δt represents the publication age in relation to the current year and can be determined through the following computation: Δt = C_y - P_y, where C_y represents the current year and P_y represents the publication year.
Now, from Eq. <ref> and Eq. <ref>, K_z can be calculated for every researcher as K_z = ∑_i=1^N k_i/Δt_i, where N is the number of publications (N > 0) and i indexes the researcher's publications. Equation <ref> highlights the significance of the K_z metric by incorporating essential research indicators, including total citations, year of publication, number of publications, publication age, and h-index. This comprehensive approach ensures that all significant aspects of a researcher's work are considered, resulting in a more robust and holistic assessment of their research impact.
§.§ Advantages of K_z
Measuring the continuous research impact of a researcher is crucial for several reasons:
* Granular assessment: Traditional metrics such as the citation count, h-index, etc. present an overall impact of a researcher and do not have the capability to capture the ongoing progress and advancement of their work, whereas K_z provides a more nuanced and thorough picture of a researcher's contributions as they evolve over time.
* Differentiation among researchers: Even if two researchers possess the same h-index, their patterns of impact over time may vary significantly. Analyzing their continuous research impact can uncover disparities in productivity and can provide a more comprehensive understanding of their individual profiles. Hence, K_z allows for a more nuanced differentiation among researchers.
* Evaluation of long-term impact: Researchers may experience fluctuations in their productivity and impact over their careers. Measuring continuous research impact enables the evaluation of long-term contributions.
K_z has the capability of highlighting researchers who consistently generate influential work and have a lasting impact on their field. * Career progression and funding decisions: Many academic institutions, funding agencies, and hiring committees rely on research performance metrics to make decisions. K_z can provide more informed evaluations of researchers, enabling fairer assessments and enhancing the recognition of sustained excellence. * Identification of emerging talent: Continuous research impact measurement can help identify early-career researchers with promising trajectories. By recognizing their continuous growth and impact, further opportunities can be provided to nurture their potential. § CASE STUDIES OF K_Z We conducted four case studies to explore the significance of K_z. Each case study involved two researchers, namely R1 and R2. The number of publications was kept constant across all cases, while the focus was on comparing the h-index and total citations (TC) of the two researchers. * Case I - Identical h-index and total citations: Table <ref> presents the first case study, where we assumed that both researchers R1 and R2 have the same h-index and total citation count. However, despite sharing these characteristics, researcher R2 obtained a higher K_z score than R1. This difference in K_z scores can be attributed to the impact of the publication year, which played a dominant role in determining the continuous research impact of each researcher. It highlights the significance of considering the temporal aspect of research contributions when assessing the research impact of individuals. * Case II - Identical h-index and different total citations: In this case (Table <ref>), both researchers R1 and R2 have an equal number of publications and the same h-index, but they differ in their total citation count. Researcher R1 has one highly cited paper, while researcher R2 has multiple highly cited papers. Despite R1 having a higher total number of citations compared to R2, R2 obtains a higher K_z score. This indicates that the impact of having multiple highly cited papers outweighs the effect of a single highly cited paper in determining the continuous research impact. * Case III(a) - Different h-index and total citations: In this case (Table <ref>), both researchers have an equal number of publications but differ in their h-index, number of high-impact papers, and total citations. Researcher R1 has a higher h-index but a lower total citation count compared to R2. However, despite the lower total citation count, R1 obtains the higher K_z score. This highlights the importance of considering the continuous research impact captured by K_z, which takes into account not only the number of citations but also the publication age and impact of publications. * Case III(b) - Different h-index and total citations: In this case (Table <ref>), we again considered two researchers with an equal number of publications but different h-index, high-impact papers, and total citations. Researcher R1 had a higher h-index and total citation count compared to researcher R2. Surprisingly, despite these differences, it was researcher R2 who obtained the higher K_z score. This finding suggests that the K_z score takes into account factors beyond just the h-index and total citations, emphasizing the importance of considering the continuous impact and temporal aspects of research contributions.
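To make the definitions above concrete, the following minimal Python sketch computes the h-index, the g-index, and K_z from a list of (citations, publication year) pairs. It is an illustration rather than the authors' implementation: the text defines the paper impact only through the bound C ≤ (h+1)^k, so the sketch takes k = ln C / ln(h+1) (the smallest positive exponent satisfying the bound), assigns k = 0 to uncited papers, and treats papers published in the current year as having age Δt = 1; these conventions are assumptions.

```python
from math import log

def h_index(citations):
    # h = largest h such that at least h papers have >= h citations
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    # g = largest g such that the top g papers together have at least g^2 citations
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def k_z(papers, current_year):
    """papers: list of (citations, publication_year) tuples for one researcher."""
    h = h_index([c for c, _ in papers])
    if h == 0:
        return 0.0
    score = 0.0
    for c, year in papers:
        dt = max(current_year - year, 1)           # assumption: current-year papers count as age 1
        k = log(c) / log(h + 1) if c > 0 else 0.0  # assumption: smallest k with C <= (h+1)^k
        score += k / dt
    return score

# toy example in the spirit of Case II: same h-index, different citation spread
r1 = [(350, 2015), (35, 2016), (10, 2018), (2, 2020), (2, 2021)] + [(0, 2022)] * 5
r2 = [(120, 2015), (110, 2017), (90, 2019), (3, 2020), (2, 2021)] + [(0, 2022)] * 5
print(h_index([c for c, _ in r1]), h_index([c for c, _ in r2]))   # same h-index
print(round(k_z(r1, 2023), 2), round(k_z(r2, 2023), 2))
```

In this toy comparison the second researcher obtains the higher K_z despite the lower citation total, which mirrors the behaviour described in Case II above.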
§ EMPIRICAL STUDY To calculate the continuous research impact (K_z) of researchers, the research profiles of 376 individuals affiliated with Monash University, Australia, were obtained. Monash University is a public research institution located in Australia, and information about the researchers can be found on their webpage at <https://research.monash.edu/en/persons/>. The webpage provides the researcher's research ID and ORCID ID, which facilitated the extraction of their publication details and citations from the Web of Science database. From a pool of 6316 researchers' profiles, we selected 376 profiles across different disciplines, ensuring a range of h-index values (1 ≤ h ≤ 112). The choice of databases was made based on data availability. For each researcher ID, information regarding the publication year and the corresponding citations received was extracted. For each researcher, the h-index, g-index, and K_z were computed. Additionally, the overall research age or career length of the researcher was determined by subtracting the year of their first publication from the current year. §.§ Comparison of K_z with h-index and career length Using equation <ref>, we calculated the K_z score of the 376 researchers. Figure <ref> shows a scatter plot depicting the relationship between K_z and career length. Each dot on the plot represents an individual researcher. The horizontal dashed line represents the median K_z value, while vertical dashed lines are used to divide the plot into three zones based on the length of the researchers' careers: early career (≤ 10 years), mid career (>10 and ≤ 20 years) and advanced career (>20 years). This visualization clearly differentiates star performers from average performers at different career stages. Table <ref> provides examples of researchers who share h-index values of 25 and 30. It also includes the computation of the g-index, which demonstrates that researchers with the same h-index can have different g values due to variations in their total citation counts. Therefore, it is possible for a researcher with a lower citation count to have a higher g-index, and vice versa. The presence of the same h-index highlights its limitation in differentiating the top-performing researcher from others, whereas K_z clearly separates the most impactful researcher from the rest. This distinction highlights the varying impact among researchers. Similarly, Table <ref> showcases profiles of researchers with the same career age, yet their K_z scores differ. K_z clearly differentiates the impactful researcher from others when researchers have the same career length. The same observation applies to total publication and citation counts. In Table <ref>, we examined 11 comparative cases of researchers with identical h-index and career length. Among these cases, one noteworthy instance is S1, where two researchers share the same career length of 8 years and an h-index of 12. Here, the researcher with higher total publications and citation counts has the higher K_z score. In case S3, by contrast, the two researchers have a career length of 13 years and an h-index of 19, and the one with fewer total publications but higher citation counts has the higher K_z score. On the other hand, in case S7, the two researchers have a career length of 17 years and an h-index of 13, and the one with more total publications but lower citation counts has the higher K_z score.
Hence, this indicates that the K_z metric considers all relevant research indicators such as total publications, citation count, h-index, and publication age to capture the continuous impact of an individual. It is not safe to assume that a higher K_z score is solely determined by either higher total publications or higher citation counts. Additionally, it cannot be concluded that a person with a higher h-index will always have a higher K_z score. The K_z metric takes a comprehensive approach in evaluating research impact, considering multiple factors simultaneously. §.§ Probability distribution of K_z Figure <ref> presents a graphical representation of the plot for log(K_z), which exhibits a mean value of μ and a standard deviation of σ. This plot is compared to the normal distribution with the same mean and standard deviation. The overlapping nature of the two plots suggests that the variable K_z follows a log-normal distribution. To confirm this observation, a “Goodness of Fit" test was conducted using the χ^2 distribution. The objective of the Goodness of Fit Test was to assess the suitability of the null hypothesis that states “the distribution of log(K_z) conforms well to a normal distribution.” The test was executed in the following manner: The logarithm of the values of K_z was computed, and these values were then classified into seven distinct classes, taking into account the mean (μ = 0.78787) and standard deviation (σ = 0.37448). Subsequently, the observed frequencies (O_i) for each class were determined. To obtain the expected frequencies (E_i), the entire dataset consisting of 376 observations was subjected to calculations based on the normal distribution. The specific calculations and their results are provided in Table <ref>. The χ^2 value was computed using the formula χ^2 = ∑(O_i - E_i)^2/E_i and yielded a value of 7.466. As the calculated χ^2 value is smaller than the critical value χ^2_(6,0.05) = 12.592, we cannot reject the null hypothesis at a significance level of 0.05. Therefore, we can conclude that log(K_z) is a suitable fit for the normal distribution. §.§.§ Identification of top contributors and low contributors In the case of a normal distribution, the middle 50% of the data is encompassed within a range of +0.67 and -0.67 standard scores from the mean. Consequently, researchers in the top 25% satisfy the condition K_z ≥ e^(μ+ 0.67 σ), while researchers in the bottom 25% satisfy the condition K_z ≤ e^(μ - 0.67 σ). Similarly, using the properties of normal distribution, the α% of top and bottom performers can be identifies. Unlike previous indices such as the h, g, e, h_c, etc., the K_z-index allows for the identification of both top and bottom contributors. This categorization based on K_z scores can be beneficial for universities, scientific communities, and research funding agencies in identifying significant contributors. § DISCUSSION AND CONCLUSION In this study, we have discussed various research indicators, including total publications, citations count, h-index, g-index, etc., commonly used to measure the impact of research. While total publications, citation count, and h-index are commonly used indicators to assess research impact, they have some limitations when considered individually. * Total publications: Relying solely on the number of publications can be misleading, as it does not consider the quality or impact of those publications. Quantity alone does not reflect the significance or influence of a researcher's work. 
* Citation count: While the citation count is a useful indicator of the influence and visibility of a researcher's work, it can be influenced by factors such as the field of study, publication age, and citation practices within the research community. Additionally, self-citations can artificially inflate citation counts and impact assessments. * h-index: The h-index takes into account both the number of publications and their corresponding citations. However, it does not differentiate between highly cited publications and those with fewer citations. A researcher with a few highly influential papers can have the same h-index as someone with many moderately cited papers. Additionally, the h-index ignores all papers that are cited fewer than h times. * Temporal considerations: Individual metrics may not capture the continuous progress and development of a researcher's work over time. They provide a snapshot of impact at a specific moment and may not reflect the long-term contributions or evolving research trajectory. To overcome these limitations and capture the dynamic nature of research impact, it is essential to consider multiple indicators and employ comprehensive assessment approaches like the K_z metric, which incorporates various factors to provide a more nuanced understanding of research impact. K_z is field independent and takes into account the temporal aspect of the work. Unlike other research indicators, K_z considers not only the total publications and citation count but also the age of the publications. Our results demonstrate how K_z can effectively differentiate between two researchers who may have the same h-index, citation count, or career length. By incorporating K_z into the evaluation process, we can better assess the research dynamics of an individual and gain insights into their continuous impact over time. To conclude, K_z holds the potential to serve as a superior measure for capturing the impact of individuals, institutions, or journals. Its comprehensive consideration of various research indicators allows a more nuanced assessment of research impact. Further, K_z can be utilized as a ranking method to evaluate and rank researchers within an institution based on their research impact. Similarly, institutions and journals can be compared and ranked according to their research impact. This information can be valuable in decision-making processes, as funding agencies, research award committees, and hiring bodies can leverage the power of K_z to rank potential candidates within a specific field. It provides a standardized tool to assess and compare the impact of research entities, facilitating more informed decisions and promoting recognition based on research excellence. There are, however, some challenges associated with computing the K_z metric. Some of the potential challenges include: * Data availability and accuracy: Obtaining accurate and comprehensive data from various sources can be a challenge. Different databases may have variations in the coverage of publications and citations, potentially leading to incomplete or inconsistent data. * Data quality and reliability: The accuracy and reliability of the data sources used for computing K_z are crucial, as inaccurate or incomplete data can result in misleading or flawed assessments of research impact.
* Self-citation manipulation: The issue of self-citation manipulation, where researchers excessively cite their own work to inflate their impact metrics, can pose a challenge, as detecting such manipulation requires careful scrutiny and data filtering techniques. As discussed, the K_z index is a comprehensive mathematical function that considers multiple factors to assess the impact of a researcher. These factors include the researcher's total publications, the citation count of each paper, the researcher's h-index, and the age of each publication. The K_z index recognizes that influential papers often receive citations at a faster rate, indicating a greater impact, and therefore assigns them higher weight in the impact evaluation. By considering these aspects, the K_z index tends to yield higher values in cases where a researcher has made significant contributions that have garnered substantial citations. § ACKNOWLEDGEMENT We acknowledge the suggestions provided by Dr. Satyam Mukherjee.
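For completeness, a sketch of the goodness-of-fit check and of the percentile thresholds used in the empirical study is given below. It assumes NumPy and SciPy are available; the file name is hypothetical, the seven class edges are illustrative (the paper's exact class boundaries appear in a table not reproduced here), and the values μ = 0.78787 and σ = 0.37448 of log(K_z) are taken from the text.

```python
import numpy as np
from scipy import stats

mu, sigma = 0.78787, 0.37448               # reported mean and std of log(K_z)
kz = np.loadtxt("kz_scores.txt")           # hypothetical file with the 376 K_z values
logk = np.log(kz)

# seven classes; these edges are illustrative, not the paper's exact ones
edges = mu + sigma * np.array([-8.0, -1.5, -0.75, -0.25, 0.25, 0.75, 1.5, 8.0])
observed, _ = np.histogram(logk, bins=edges)
expected = np.diff(stats.norm.cdf(edges, loc=mu, scale=sigma)) * logk.size

chi2 = np.sum((observed - expected) ** 2 / expected)
print(f"chi^2 = {chi2:.3f}  (critical value chi^2_(6, 0.05) = 12.592)")

# top and bottom 25% thresholds from the normal-quantile argument
print("top 25%:    K_z >=", round(np.exp(mu + 0.67 * sigma), 3))
print("bottom 25%: K_z <=", round(np.exp(mu - 0.67 * sigma), 3))
```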
http://arxiv.org/abs/2306.12490v1
20230621180331
Nonlinear photon-plasma interaction and the black hole superradiant instability
[ "Enrico Cannizzaro", "Fabrizio Corelli", "Paolo Pani" ]
gr-qc
[ "gr-qc", "astro-ph.HE", "hep-ph", "physics.plasm-ph" ]
[email protected] [email protected] [email protected] Electromagnetic field confinement due to plasma near accreting black holes can trigger superradiant instabilities at the linear level, limiting the spin of black holes and providing novel astrophysical sources of electromagnetic bursts. However, nonlinear effects might jeopardize the efficiency of the confinement, rending superradiance ineffective. Motivated by understanding nonlinear interactions in this scenario, here we study the full 3+1 nonlinear dynamics of Maxwell equations in the presence of plasma by focusing on regimes that are seldom explored in standard plasma-physics applications, namely a generic electromagnetic wave of very large amplitude but small frequency propagating in an inhomogeneous, overdense plasma. We show that the plasma transparency effect predicted in certain specific scenarios is not the only possible outcome in the nonlinear regime: plasma blow-out due to nonlinear momentum transfer is generically present and allows for significant energy leakage of electromagnetic fields above a certain threshold. We argue that such effect is sufficient to dramatically quench the plasma-driven superradiant instability around black holes even in the most optimistic scenarios. Nonlinear photon-plasma interaction and the black hole superradiant instability Paolo Pani 7 June 2023 =============================================================================== § INTRODUCTION A remarkable property of black holes (BHs) is that they can amplify low-frequency radiation in a process known as superradiant scattering <cit.>, which is the wave analog of energy and angular momentum extraction from a BH through the Penrose's process <cit.> (see <cit.> for an overview). In 1972, Press and Teukolsky argued that superradiance can in principle be used to produce a “BH bomb” <cit.> provided the amplified waves are confined in the vicinity of the BH, leading to repeated scattering and coherent energy extraction. A natural confining mechanism is provided by the Yukawa decay of massive particles, which is the reason why spinning BHs are unstable against massive bosonic perturbations <cit.>. Due to this superradiant instability <cit.>, massive bosons can extract a significant amount of energy from astrophysical BHs, forming a macroscopic condensate wherein their occupation number grows exponentially. This phenomenon leads to striking observable signatures, such as gaps in the BH mass-spin distribution and nearly-monochromatic gravitational-wave emission from the condensate <cit.>. In order for the instability to be efficient, the Compton wavelength of these bosons should be comparable to the BH size, which selects masses around 10^-11 (10M_⊙/M) eV, where M is the BH mass <cit.>. The possibility of turning BHs into particle-physics laboratories <cit.> for searches of ultralight dark matter has motivated intense study of the superradiant instability for ultralight spin-0 <cit.>, spin-1 <cit.>, and more recently spin-2 <cit.> fields, which in turn spread into many diverse directions <cit.>. Already at the very birth of BH superradiance, Press and Teukolsky suggested that in the presence of astrophysical plasma even ordinary photons could undergo a superradiant instability, without the need to invoke beyond Standard Model physics <cit.>. 
Indeed, a photon propagating in a plasma acquires an effective mass known as the plasma frequency <cit.> ω_p=√(n_e e^2/m)≈ 10^-11√(n_e/10^-1cm^-3) eV/ħ where n, e, and m are the plasma density, electron charge, and electron mass, respectively. In the case of interstellar plasma (n_e ∼ 10^-1 cm^-3), the effective mass is in the right range to trigger an instability around stellar mass BHs (which was also advocated as a possible explanation for the origin of fast radio bursts <cit.>), whereas primordial plasma could trigger superradiant instabilities in primordial BHs potentially affecting the cosmic microwave background <cit.>. Furthermore, astrophysical BHs can be surrounded by accretion disks due to the outward transfer of angular momentum of accreting matter, effectively introducing a geometrically complex plasma frequency. Given the ubiquity of plasma in astrophysics and the central role that BHs play as high-energy sources and in the galaxy evolution, it is of utmost importance to understand whether the plasma can play an important role in triggering BH superradiant instabilities. While the first quantitative studies about the plasma-driven superradiant instability <cit.> approximated the dynamics as that of a massive photon with effective mass (<ref>), the actual situation is much more complex, since Maxwell's equation must be considered together with the momentum and continuity equation for the plasma fluid. Recently, a linearized version of the plasma-photon system (neglecting plasma backreaction) in curved spacetime was studied in <cit.>, where it was shown that the photon field can be naturally confined by plasma in the vicinity of the BH via the effective mass, forming quasibound states that turn unstable if the BH spins. Nevertheless, a crucial issue was unveiled in <cit.>, where it was argued that, during the superradiant phase, nonlinear modifications to the plasma frequency turn an initially opaque plasma into transparent, hence quenching the confining mechanism and the instability itself. In the nonlinear regime, a transverse, circularly polarized electromagnetic (EM) wave with frequency ω and amplitude E modifies the plasma frequency of a homogeneous plasma as <cit.> ω_p=√(n_e e^2/m √(1+e^2 E^2/m^2 ω^2)) , where the extra term is the Lorentz factor of the electrons. In other words, as the field grows, the electrons turn relativistic and their relativistic mass growth quenches the plasma frequency. As argued in Ref. <cit.>, the threshold of this modification lies in the very early stages of the exponential growth, before the field can extract a significant amount of energy from the BH. While in this specific configuration the quenching of the instability is evident, this argument suffers for a number of limitations. In particular, circularly polarized plane waves in a homogeneous plasma are the only solutions that are purely transverse, as the nonlinear v⃗×B⃗ Lorentz force vanishes (here v⃗ is the velocity of the electron, while B⃗ is the magnetic field). In this case, the plasma density is not modified by the travelling wave and even a low-frequency wave with large amplitude can simply propagate in the plasma, without inducing a nonlinear backreaction. In every other configuration instead (including an inhomogeneous plasma, different polarization, or breaking of the planar symmetry, all expected for setups around BHs), longitudinal and transverse modes are coupled, and therefore the plasma density can be dramatically modified by the propagating field. 
This backreaction effect leads to a richer phenomenology as high-amplitude waves can push away electrons from some regions of the plasma, thus creating both a strong pile-up of the electron density in some regions and a plasma depletion in other regions. For example, in the case of a circularly polarized wave scattered off an inhomogeneous plasma, the backreaction on the density increases the threshold for relativistic transparency, as electrons are piled up in a narrow region, thus increasing the local density and making nonlinear transparency harder <cit.>. However, in the case of a coherent long-timescale phenomenon such as superradiant instability, one might expect that, if the plasma is significantly pushed away by a strong EM field, the instability is quenched a priori, regardless of the transparency. Overall, the idealized configuration of Ref. <cit.> never applies in the superradiant system, and the nonlinear plasma-photon interaction is much more involved. The goal of this work is to introduce a more complete description of the relevant plasma physics needed to understand plasma-photon interactions in superradiant instabilities. To this purpose, we shall perform 3+1 nonlinear numerical simulations of the full Maxwell's equations. Clearly, this is a classical topic in plasma physics <cit.>. Here we are interested in a regime that is relevant for BH superradiance but is seldom studied in standard plasma-physics applications, namely a low-frequency, high-amplitude EM wave propagating in an inhomogeneous overdense plasma. § FIELD EQUATIONS For simplicity, and because the stress-energy tensor of the plasma and EM field is negligible even during the superradiant growth, we shall considered a fixed background and neglect the gravitational field. We consider a system composed by the EM field and a plasma fluid, described by the field equations (in rationalized Heaviside units with c = 1): ∇_μ F^μν = J^μ, u^ν∇_ν u^μ = e/m F^μν u_ν, ∇_μ(n_e u^μ) = 0, where F_μν is the EM tensor, J^μ is the EM 4-current, u^μ is the 4-velocity field for the plasma fluid, and n_e is the rest number density of electrons inside the plasma. Having in mind future extensions, below we perform a 3+1 decomposition[We shall use Greek alphabet to denote spacetime indices μ, ν∈{0, 1, 2, 3}, and Latin alphabet to denote spatial indices i, j ∈{1, 2, 3}.] of the field equations that is valid for any curved background spacetime. However, in this work we will perform our simulations in flat spacetime, ds^2 = η_μν dx^μ dx^ν. §.§ 3+1 decomposition of the field equations §.§.§ Generic spacetime Let us introduce a foliation of the spacetime into spacelike hypersurfaces Σ_t, orthogonal to the 4-velocity of the Eulerian observer n^μ. We then express the line element as ds^2 = -(α^2 - β_i β^i) dt^2 + 2 β_i dx^i dt + _ij dx^i dx^j, where α is the lapse, β^i is the shift vector, and _ij is the spatial 3-metric. We can define the electric and the magnetic fields as <cit.> E^μ = -n_ν F^νμ, B^μ = -n_ν F^∗νμ, where F^∗μν= -1/2ϵ^μνλσ F_λσ is the dual of F^μν. The EM tensor can be decomposed as F^μν = n^μ E^ν - n^ν E^μ + ϵ^μνσ B_σ, where ϵ^μνσ = n_λϵ^λμνσ is the Levi-Civita tensor of the spacelike hypersurface Σ_t. Note that E^μ and B^μ are orthogonal to n^μ and are spacelike vectors on the 3-surfaces Σ_t. We can define the charge density as ρ = n_μ J^μ, and the 3-current as J^μ = ^μ_ν J^ν, where ^μ_ν is the projection operator onto Σ_t. 
Finally, we can write the Maxwell equations as <cit.> D_i E^i = ρ, D_i B^i = 0, E^i = _β E^i + α K E^i + [D⃗× (αB⃗)]^i + αJ^i, B^i = _β B^i + α K B^i - [D⃗× (αE⃗)]^i, where D_i is the covariant derivative with respect to the 3-metric γ_ij, and K_ij is the extrinsic curvature. Here the first equation is the Gauss' law, the second equation is equivalent to the absence of magnetic monopoles, and the last two are the evolution equations for the electric and magnetic fields, respectively. The EM 4-current is given by ions and electrons J^μ = J^μ_(ions) + J^μ_(e). We assume ions to be at rest, due to the fact that m ≪ m_(ions), so that J^μ_(ions) = ρ_(ions) n^μ. For electrons instead we have J^μ_(e) = -e n_e u^μ. Let us decompose u^μ into a component along n^μ, Γ = -n^μ u_μ, and a component on the spatial hypersurfaces, u^μ = ^μ_ν u^ν. The 4-velocity of the fluid can be written as u^μ = Γ n^μ + u^μ = Γ (n^μ + ^μ) , where we defined u^μ = Γ^μ. The above expression allows us to write ρ = n_μ J^μ = ρ_(ions) + ρ_(e) = ρ_(ions) + e, where = Γ n_e is the electron density as seen by the Eulerian observer. The density of ions is constant in time, and will be fixed when constructing the initial data[Note that with the conventions we used, electrons carry positive charge, while ions carry negative charge.]. As J^μ_(ions) is orthogonal to Σ_t, the 3-current J^μ receives only contributions from electrons, and we have J^μ = - e n_e Γ^μ = -e ^μ. Thus, the source terms that appear in Eqs. (<ref>)-(<ref>) are ρ = ρ_(ions) + e , J^μ = - e ^μ. Let us now move to Eq. (<ref>). Projecting it on n^μ and Σ_t we obtain respectively (see Appendix <ref> for the explicit computation): Γ = β^i ∂_i Γ - α^i ∂_i Γ + αΓ K_ij^i ^j - Γ^i ∂_i α + e/mα E^i _i , ^i = β^j ∂_j ^i - ^j ∂_j β^i - α a^i - α^i K_jl^j ^l + α/Γe/m( - ^i E^j _j + E^i + ϵ^i j l B_l _j ) + 2 αK^i_j^j + ^i ^j ∂_j α - α^j D_j ^i Finally, we can write the continuity equation (<ref>) as = β^i ∂_i + α K - α^i ∂_i - α∇_μ^μ. While the above decomposition is valid for a generic background metric, from now on we will focus on a flat spacetime. §.§.§ Flat spacetime We use Cartesian coordinates, so that g_μν = η_μν = {-1, 1, 1, 1}. As a consequence, we have that for any 3-vector V^i = V_i, and α = 1, β^i = 0, K_ij = 0. In these coordinates we can write the equations for the EM field as ∂_i E^i = ρ_(ions) + e , ∂_i B^i = 0, E^i = [∂⃗×B⃗]^i - e ^i, B^i = - [∂⃗×E⃗]^i, the evolution equations for Γ and ^i as Γ = - ^i ∂_i Γ + e/m E^i _i, ^i = - ^j ∂_j ^i + 1/Γe/m[ -^i E^j _j + E^i + (×B⃗)^i ], and the continuity equation as = - ^i ∂_i - ∂_i ^i. Moreover, from the normalization condition that u^μ u_μ = -1 we can obtain a constraint for Γ and ^i: Γ^2(1 - ^i _i) = 1. § NUMERICAL SETUP In this section we discuss our numerical setup, describing the integration scheme and the initialization procedure. §.§ Integration scheme We evolve E⃗, B⃗, Γ, , and with Eqs. (<ref>)-(<ref>), using the constraints (<ref>) and (<ref>) to evaluate the convergence of the code. The profile of ρ_(ions) is kept constant, consistently with the approximation that ions are at rest. For the numerical integration we used the fourth-order accurate Runge-Kutta algorithm, computing the spatial derivatives with the fourth-order accurate centered finite differences scheme. For simplicity we shall simulate the propagation of plane EM wave packets along the z direction, and therefore we will obtain field configurations that are homogeneous along the x and y directions. 
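To make the integration scheme concrete, the sketch below (not the authors' code) advances the flat-spacetime system above with a classical fourth-order Runge-Kutta step and fourth-order centered finite differences, reduced to one dimension by assuming exact homogeneity in x and y so that all fields depend on z only. The packing of variables and the names are illustrative; the static ion charge density only enters the Gauss constraint, which is monitored rather than evolved.

```python
import numpy as np

def d_dz(f, dz):
    # fourth-order centered finite differences, periodic in z
    return (-np.roll(f, -2, -1) + 8 * np.roll(f, -1, -1)
            - 8 * np.roll(f, 1, -1) + np.roll(f, 2, -1)) / (12.0 * dz)

def rhs(y, dz, e=1.0, m=1000.0):
    # y has shape (11, Nz): rows 0-2 = E, 3-5 = B, 6-8 = u (spatial 3-velocity),
    # row 9 = Gamma (Lorentz factor), row 10 = n (electron density)
    E, B, u, Gam, n = y[0:3], y[3:6], y[6:9], y[9], y[10]
    curl = lambda V: np.array([-d_dz(V[1], dz), d_dz(V[0], dz), np.zeros_like(V[0])])
    Edotu = np.sum(E * u, axis=0)
    dE = curl(B) - e * n * u                                 # Ampere's law with electron current
    dB = -curl(E)                                            # Faraday's law
    dGam = -u[2] * d_dz(Gam, dz) + (e / m) * Edotu           # Lorentz-factor equation
    du = (-u[2] * d_dz(u, dz)
          + (e / m) / Gam * (E - u * Edotu + np.cross(u, B, axis=0)))   # momentum equation
    dn = -u[2] * d_dz(n, dz) - n * d_dz(u[2], dz)            # continuity equation
    return np.concatenate([dE, dB, du, dGam[None, :], dn[None, :]])

def rk4_step(y, dt, dz):
    k1 = rhs(y, dz); k2 = rhs(y + 0.5 * dt * k1, dz)
    k3 = rhs(y + 0.5 * dt * k2, dz); k4 = rhs(y + dt * k3, dz)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```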
This feature allows us to impose periodic boundary conditions in the x and y directions, as they preserve the homogeneity of the solution without introducing numerical instabilities. We impose periodic boundary conditions also on the z axis and, in order to avoid the spurious interference of the EM wave packet with itself, we choose grids with extension along z large enough to avoid interaction with spurious reflected waves during the simulations. §.§ Initialization procedure When constructing the initial data for the simulations we first set the profile of the plasma. We start by setting Γ(t = 0, x⃗) = 1 and (t = 0, x⃗) = 0, so that the plasma is initially at rest. Then, we initialize the profile of with barrier-like shape of the following form: (t = 0, x⃗) = 2 n_ bkg - n_ max + (n_ max - n_ bkg) ×[σ(z; W_1, z_1) + σ(z; -W_2, z_2)], Where σ(z; W, z_0) = (1+e^-W(z - z_0))^-1 is a sigmoid function. The qualitative behavior of Eq. (<ref>) is shown in Fig. <ref>, where we can see that n_ bkg is the background value of the plasma density and n_ max is the plasma density at the top of the barrier. The parameters z_1,2 determine the location and width of the barrier, while the parameters W_1, 2 control its steepness. Note that this profile was chosen to reproduce a very crude toy model of a matter-density profile around a BH <cit.>, where the accretion flow peaks near the innermost stable circular orbit and is depleted between the latter and the BH horizon. In our context this configuration is particularly relevant because EM waves can be superradiantly amplified near the BH and plasma confinement can trigger an instability <cit.>. Finally, the constant profile of ρ_(ions) is determined by imposing that the plasma is initially neutral, so that ρ_(ions)(t = 0, x⃗) = -e (t = 0, x⃗). Once the profile of the plasma has been assigned we proceed to initialize the EM field. We consider a circularly polarized wave packet moving forward in the z-direction: E⃗ = A_E [ cos[k_z(z - z_0)]; sin[k_z(z - z_0)]; 0 ] e^-(z - z_0)^2/2 σ^2, B⃗ = A_E k_z/ω[ - sin[k_z(z - z_0)]; cos[k_z(z - z_0)]; 0 ] e^-(z - z_0)^2/2 σ^2, where A_E is the amplitude of the wave packet, σ is its width, z_0 its central position, ω is the frequency, and k_z = √(ω^2 - ω_p^2), where ω_p = √(e^2 n_ bkg/m) is the plasma frequency computed using n_ bkg, as the wave packet is initially located outside the barrier (i.e., σ≪ z_1-z_0). § RESULTS Here we present the results of our numerical simulations of nonlinear plasma-photon interactions in different configurations. We shall consider a low-frequency, circularly polarized wave packet propagating along the z direction and scattering off the plasma barrier with the initial density profile given by Eq. (<ref>). §.§ Linear regime As a consistency check of our code, we tested that for sufficiently low amplitude waves our simulations are in agreement with the predictions of linear theory. We set units such that e = m = 1 and consider an initial wave packet of the electric field centered at z_0 = 0, with a characteristic width σ = 5. We also set ω = 0.5 and A_E = 10^-6, so that the evolution can be described by the linear theory. The plasma barrier was situated between z_1 = 40 and z_2 = 100, and we set W_1 = W_2 = 1. The background density of the plasma was n_ bkg = 0.01 so that ω_p^ (bkg) = 0.1 and all the frequency content of the EM wave is above the plasma frequency of the background. 
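A minimal sketch of this initial-data construction (the density barrier of Eq. (<ref>) plus the circularly polarized Gaussian wave packet), evaluated here with the linear-regime parameters just quoted and one representative barrier height, may help fix conventions; the grid handling and names are illustrative, and the plasma is initialized at rest with a neutral charge distribution, as described above.

```python
import numpy as np

def sigmoid(z, W, z0):
    return 1.0 / (1.0 + np.exp(-W * (z - z0)))

def barrier(z, n_bkg, n_max, z1, z2, W1, W2):
    # barrier-shaped electron density, Eq. (<ref>)
    return (2 * n_bkg - n_max
            + (n_max - n_bkg) * (sigmoid(z, W1, z1) + sigmoid(z, -W2, z2)))

def wave_packet(z, A_E, omega, z0, sig, n_bkg, e=1.0, m=1.0):
    wp = np.sqrt(e**2 * n_bkg / m)                  # background plasma frequency
    kz = np.sqrt(omega**2 - wp**2)
    env = np.exp(-(z - z0)**2 / (2 * sig**2))
    ph = kz * (z - z0)
    E = A_E * np.array([np.cos(ph), np.sin(ph), np.zeros_like(z)]) * env
    B = A_E * (kz / omega) * np.array([-np.sin(ph), np.cos(ph), np.zeros_like(z)]) * env
    return E, B

# linear-regime parameters quoted in the text (one of the six barrier heights)
z = np.arange(-450.0, 450.0, 0.2)
n = barrier(z, n_bkg=0.01, n_max=0.5, z1=40.0, z2=100.0, W1=1.0, W2=1.0)
E, B = wave_packet(z, A_E=1e-6, omega=0.5, z0=0.0, sig=5.0, n_bkg=0.01)
rho_ions = -1.0 * n                                  # neutral plasma at t = 0 (e = 1)
Gamma, u = np.ones_like(z), np.zeros((3, z.size))    # plasma initially at rest
```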
We run 6 simulations with n_ max = {n_ bkg, 0.25, 0.5, 0.75, 1, 1.25}, that correspond to plasma frequencies at the top of the barrier ω_p^ (max) = {0.1, 0.5, 0.707, 0.866, 1, 1.12}, respectively, and fall in different parts of the frequency spectrum of the EM wave packet. In the linear regime, we expect that the frequency components above ω_p^ (max) will propagate through the plasma barrier, while the others will be reflected, and this setup allows us to clearly appreciate how this mechanism takes place. In all these simulations we used a grid that extends in [-1, 1] × [-1, 1] × [-450, 450], with a grid step Δ x = Δ y = Δ z = 0.2 and a time step Δ t = 0.1, so that the Courant-Friedrichs-Lewy factor is CFL = 0.5 The final time of integration was set to T = 400. Figure <ref> shows some snapshots of the numerical results at different times for different values of ω_p^ (max). It is evident that the analytical predictions of linear theory are confirmed: as the plasma frequency of the barrier increases, less and less components are able to propagate through it and reach the other side. In particular, when ω_p^ (max)≳ 0.9 the wave is almost entirely reflected, and the transmitted component becomes negligible. Furthermore, in the linear regime the backreaction on the density is effectively negligible, as the barrier remains constant over the entire simulation (in fact, we observed a maximum variation of of the order of 10^-11, which is clearly not appreciable on the scale of Fig. <ref>). To better quantify the frequency components that are propagated and the agreement between the simulations and the analytic expectation in the linear regime, we computed the (discrete) Fourier transform of the time evolution of E^x in two points along the z axis: z=-50 and z=150, which are located before and after the plasma barrier, respectively. Figure <ref> shows the absolute value of the Fourier transform for the different values of the plasma frequency in the barrier, which are represented as vertical dotted lines. As we can see from the Fourier transform at z=150, the transmitted waves have only components with frequency ω>ω_p^ (max), in agreement with the fact that only modes above this threshold can propagate. Hence, the barrier perfectly acts as a high-pass filter, with a critical threshold given by the plasma frequency. §.§ Nonlinear regime We can now proceed to increase the amplitude of the field until linear theory breaks down and the interaction becomes fully nonlinear. As anticipated, we shall show that the evolution is more involved than in the idealized model described in <cit.>. Indeed, even from a first qualitative analysis, it is evident from the z-component of the momentum equation (<ref>) that in the nonlinear regime electrons will experience an acceleration along the z axis due to the nonlinear Lorentz term (×B⃗)^z. The formation of a current along the z directions implies a modification of the density profile because of the continuity equation, and also the formation of a longitudinal electric field that tries to balance and preserve charge neutrality. In the following, we will support this qualitative analysis with the results of the numerical simulations and show that nonlinear effects can have a dramatic impact on the system dynamics. In this set of simulations, we set units[Note that, in rationalized Heaviside units, changing m (and hence the classical electron radius) simply accounts for rescaling lengths, times, and masses in the simulations. 
Lengths and times are rescaled by [m]^-1, while the electric field amplitude scales as [m]^2. Hence, the results of this section can be obtained in the case m=1 by rescaling the other quantities accordingly.] such that e=1 and m=1000, and we consider an initial wave packet of the electric field centered at z_0 = -150, with a width[While formally the initial profile of the EM field, Eq. (<ref>), represents a circularly polarized wave packet, the chosen value of the parameter σ reduces the y component of the electric field, making the polarization effectively elliptic.] σ = 100 and ω=0.001. We vary the amplitude of the EM field in the range 0.1 ≤ A_E ≤ 1000. As for the plasma profile, we adopt a geometric model similar to the linear case, with the barrier placed between z_1 = 100 and z_2 = 650, with W_1 = W_2 = 0.1. We consider a background density n_ bkg = 5 × 10^-6, and a maximum barrier density n_ max = 0.5, which corresponds to a plasma frequency of ω_p^ (max) = 0.022. We use a numerical grid that extends in [-2, 2] × [-2, 2] × [-750, 850], with a grid spacing Δ x = Δ y = Δ z = 0.2, and a time step Δ t = 0.1, so that CFL = 0.5. The final time of integration was set to T = 500. The parameters are chosen such that the frequency of the wave packet is always much larger than ω_p^ (bkg), but a significant component of the spectrum, namely ≈ 97.5 %, is below the plasma frequency of the barrier, and should therefore be reflected according to linear theory. First of all, we quantify the value of the electric field which gives rise to nonlinearities. A crucial parameter that characterizes the threshold of nonlinearities in laser-plasma interactions is the peak amplitude of the normalized vector potential, defined as a_0= e A/m (see e.g. <cit.>). Specifically, when a_0 ≳ 1, electrons acquire a relativistic transverse velocity, and therefore the interactions become nonlinear. Given our units, and estimating A ≈ E/ω, we obtain a critical electric field E_ crit≳ m ω /e ≈ 1. We performed a set of simulations choosing different values of the initial amplitude of the EM wave packet in the range 0.1 ≤ A_E ≤ 1000. Figure <ref> shows snapshots of the numerical simulations for some selected choices of A_E. It is possible to observe that in the case A_E=1 (top panel) the plasma density profile is not altered throughout the simulation, as in the linear case discussed in the previous section. Moreover, at sufficiently long times, the wave packet is reflected by the barrier, in agreement with the predictions of linear theory. From the second panel onward (i.e. for A_E ≳ 10), instead, the wave packet induces a nonnegligible backreaction on the plasma density. This effect increases significantly for higher amplitudes, and it is due to the nonlinear couplings between transverse and longitudinal polarizations: the nonlinear Lorentz term (×B⃗)^z in the longitudinal component of the momentum equation (<ref>) induces a radiation pressure on the plasma, and hence a longitudinal velocity ^z; as electrons travel along the z direction and ions remain at rest, a large longitudinal field due to charge separation is created, which tries to balance the effect of the Lorentz force and restore charge neutrality. This phenomenology resembles that of plasma-based accelerators, where super-intense laser pulses are used to create large longitudinal fields that can be used to accelerate electrons <cit.>.
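As a quick consistency check of this threshold, one can evaluate a_0 for the amplitudes simulated in this section. The estimate A ≈ A_E/ω follows the text, and the snippet is shown only to fix the numbers:

```python
e, m, omega = 1.0, 1000.0, 0.001            # units and carrier frequency of this section
E_crit = m * omega / e                      # field at which a_0 ~ 1 (here E_crit = 1)
for A_E in (0.1, 1.0, 10.0, 50.0, 100.0, 1000.0):
    a0 = e * (A_E / omega) / m              # normalized vector potential, A ~ E/omega
    print(f"A_E = {A_E:7.1f}  ->  a_0 = {a0:7.1f}")
# with these parameters a_0 coincides numerically with A_E, so the weakly
# nonlinear regime sets in around A_E ~ 1, as found in the simulations
```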
To quantify the collective motion induced by nonlinearities we computed the velocity dispersion of electrons as √(⟨^2 ⟩) =√(∫_V d^3 x _i ^i/∫_V d^3 x ). Since the field are constant along the transverse directions[In Appendix <ref> we show how the homogeneity of the fields along the transverse plane is preserved during the evolution.], then (x, y, z) = (z) and ^i(x, y, z) = ^i(z). This allows us to evaluate the above integral as √(⟨^2 ⟩) =√(∫_z_-∞^z_+∞ dz (z) _i(z) ^i(z)/∫_z_-∞^z_+∞ dz (z)), where z_±∞ are the boundaries of the z domain and we compute the integral using the trapezoidal rule. In the upper panel of Fig. <ref> we plot the behavior of the velocity dispersion with respect to the initial amplitude A_E for different times. As we can see the nonlinearities start becoming relevant in the range 1 ≲ A_E ≲ 10, where electrons start to acquire a collective motion. This is also confirmed by the middle panel, where the solid and dashed lines denote the maximum of || and ^z, respectively. While these quantities do not represent the collective behavior of the system, they have the advantage of not containing the contribution given by the portion of the plasma barrier that has not been reached yet by the EM wave. From this plot we can observe that in the range 1 ≲ A_E ≲ 10, the electrons start acquiring a relativistic velocity with a large component on the transverse plane. As already mentioned, the longitudinal motion of electrons generate a longitudinal field. Nevertheless, plasmas can sustain longitudinal fields only up to a certain threshold, usually called wave-breaking (WB) limit, above which plasma is not able to shield and sustain anymore electric fields, and the fluid description breaks down. This phenomenon was pioneered in <cit.> for the case of nonlinear, nonrelativistic cold plasmas, where the critical longitudinal field for WB was found to be E^z_ WB=mω_p/e, and later generalized for pulses with relativistic phase velocities <cit.>. This threshold field represents the limit after which the plasma response loses coherence as neighbouring electrons start crossing each other within one plasma frequency period. Therefore, above this critical electric field the plasma is not anymore able to coherently act as a system of coupled oscillators, and the fluid model based on collective effects breaks down. This leads to the formation of a spike in , which eventually diverges, and to a steepening of the longitudinal component of the electric field. Full particle-in-cell numerical simulations are required after the breakdown (see, e.g., <cit.>). In our simulations, we observe the WB phenomenon at late time for large values of the electric field, in which cases we can only extract information before the breakdown of the model. In order to better appreciate how the WB takes place, we repeated the simulation with A_E = 1 for a longer integration time and a larger grid. In the upper panel of Fig. <ref> we show the evolution of E^x (solid lines) throughout all the simulation, where we can clearly see that the incoming wave packet is reflected by the plasma barrier. However, for t ≈ 700, the longitudinal component of leads to an evolution of the plasma density. In this stage the plasma loses coherence and develops local spikes that increase in height and becomes sharper with time. When one of these spikes becomes excessively narrow, the fluid description of the system breaks down and the simulation crashes. This can be observed from the bottom panel of Fig. 
<ref>, where we show the longitudinal component of E⃗ together with the plasma density profile. Note that WB occurs as soon as the nonlinearities come into play (we observed it already for A_E = 1), and the fluid description in the nonlinear regime cannot be used for long-term numerical simulations. However the good convergence of the code even slightly before WB takes place (see Appendix <ref>) ensures the reliability of the results up to this point. Overall, Figs. <ref> and <ref> show that for A_E∼ 1 the system becomes weakly nonlinear, in agreement with the previously mentioned analytical estimates. Going back to the snapshots of the evolutions in Fig. <ref>, we now wish to analyze the behavior of the system for larger electric fields, where the backreaction is macroscopic. We can see that in this case, i.e. for A_E ≳ 50, all the electrons in the plasma barrier are “transported” in the z direction and piled up within a plasma wake whose density grows over time. This corresponds to a blowout regime induced by radiation pressure. In order to better describe how the system reaches this phase, we can compute the longitudinal component of the collective electron velocity as ⟨^z ⟩ = ∫_V d^3 x ^z/∫_V d^3 x = ∫_z_-∞^z_+∞ dz (z) ^z(z)/∫_z_-∞^z_+∞ dz (z), where, again, we took advantage of the homogeneity of the system along the transverse direction to reduce the dimensionality of the domain of integration. The results are shown in the lower panel of Fig. <ref>, where we can see that for A_E ≲ 10, the longitudinal momentum remains low and is not influenced by the wavepacket. For A_E ≳ 10 instead, ⟨^z ⟩ starts to increase in time, indicating that the system is in the blowout regime, as electrons are collectively moving forward in the z direction. Overall, the above analysis shows that when the idealized situation studied in <cit.> cannot be applied and the nonlinear Lorentz term does not vanish, the general physical picture is drastically different and that penetration occurs in this setup due to radiation-pressure acceleration rather than transparency. § DISCUSSION: IMPLICATIONS FOR PLASMA-DRIVEN SUPERRADIANT INSTABILITIES Motivated by exploring the plasma-driven superradiant instability of accreting BHs at the full nonlinear level, we have performed 3+1 numerical simulations of a plane wave of very large amplitude but small frequency scattered off an inhomogeneous plasma barrier. Although nonlinear plasma-photon interactions are well studied in plasma-physics applications, to the best of our knowledge this is the first analysis aimed at exploring numerically this interesting setup in generic settings. One of our main findings is the absence of the relativistic transparency effect in our simulations. As already mentioned, the analysis performed in <cit.> showed that, above a critical electric field, plasma turns from opaque to transparent, thus enabling the propagation of EM waves with frequency below the plasma one. From Eq. (<ref>), such critical electric field for transparency is E_ crit^ transp= m/e√(ω_p^2-ω^2). In our simulations, we considered electric fields well above this threshold, yet we were not able to observe this effect. On the contrary, in the nonlinear regime the plasma strongly interacts with the EM field in a complex way. The role of relativistic transparency in more realistic situations than the one described in <cit.> was rarely considered in the literature and is still an open problem <cit.>. 
Nevertheless, some subsequent analysis found a number of interesting features, and revealed that its phenomenology in realistic setups is more complex. In Ref. <cit.> an analytical investigation of a similar setup was performed by considering the scattering between a laser wavepacket and a sharp boundary plasma. The conclusion of the analysis is that, when plasma is inhomogeneous, nonlinearities tend to create a strong peaking of the plasma electron density (and hence of the effective plasma frequency), suppressing the laser penetration and enhancing the critical threshold needed for transparency. Subsequently, Refs. <cit.> confirmed this prediction numerically, and showed that in a more realistic scenario transparency can occur but the phenomenology is drastically different from the one predicted in <cit.>. For nearly-critical plasmas, transparency arises due to the propagation of solitons, while for higher densities the penetration effect holds only for finite length scales. Nevertheless, these simulations were performed by considering a simplified momentum equation due to the assumption of a null-vorticity plasma, which is typically suitable for unidimensional problems, but likely fails to describe complex-geometry problems as the one of superradiant fields. Using particle-in-cell simulations, it was then realized that radiation-pressure can push and accelerate the fluid to relativistic regimes, similarly to our results, and produce interesting effects such as hole-boring, ion acceleration, and light-sail <cit.>. While the complicated interplay between relativistic transparency and radiation-pressure acceleration is still an open problem <cit.>, we argue that the latter, which arises in generic situations with very overdense plasmas and high amplitude electric fields, is sufficient to dramatically quench the plasma-driven superradiant instability. To enforce this conclusion, we provide a rough estimate of the total energy extracted from the BH before nonlinear effects take place <cit.>. In order for the instability to be efficient on astrophysical timescales, ω≲ω_p ≈ O(1/(GM)), where G is Newton's constant and M is the BH mass <cit.>. This gives a critical electric field E_ crit=m ω/e≈ 4 × 10^5 V/ cm(M_⊙/M) The associated total energy can be estimated as U= E_ crit^2 L^3, where L is the size of the condensate formed by the superradiant instability, and corresponds to the location of the plasma barrier. This gives U≈ 10^7 J(M/M_⊙)(L/6M)^3 where we assumed that the peak of the plasma barrier roughly corresponds to the location of the peak density of an accretion disk, L≈ 6 M. On the other hand, the total rotational energy of the BH is given by K= M R^2 Ω^2, where R and Ω are the radius and the angular velocity of the horizon, respectively. To efficiently satisfy the superradiant condition, Ω≳ω_p≈ O(1/(GM)), so that K≈ 10^43 J(M/M_⊙) . Therefore, when the electric field reaches the threshold for nonlinearities, the total energy extracted from the BH is tiny, U/K ≈ 10^-36. Another argument supporting this conclusion is that, for the superradiant instability to be sustainable, the maximum energy leakage of the confining mechanism cannot exceed the superradiant amplification factor of the BH. For EM waves, the maximum amplification factor (for nearly extremal BHs and fine-tuned frequency) does not exceed ≈ 4% and is typically much smaller <cit.>. Therefore, the instability is not quenched only if the plasma is able to confine more than 96% of the EM field energy. 
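The order-of-magnitude budget above can be packaged in a few lines; the sketch below simply evaluates the quoted scalings U ≈ 10^7 J (M/M_⊙)(L/6M)^3 and K ≈ 10^43 J (M/M_⊙), and is meant only to display how the ratio is independent of the BH mass.

```python
def extracted_energy_J(M_over_Msun, L_over_6M=1.0):
    # U ~ 1e7 J * (M/Msun) * (L/6M)^3, as quoted in the text
    return 1e7 * M_over_Msun * L_over_6M**3

def rotational_energy_J(M_over_Msun):
    # K ~ 1e43 J * (M/Msun), for a spin saturating the superradiance condition
    return 1e43 * M_over_Msun

for M in (1.0, 10.0, 1e6):
    ratio = extracted_energy_J(M) / rotational_energy_J(M)
    print(f"M = {M:9.1e} Msun :  U/K ~ {ratio:.0e}")   # ~1e-36, independent of M
```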
Our simulations show that in the nonlinear regime the situation is quite the opposite: almost the entirety of the EM field is not confined by the plasma, thus destroying its capability to ignite the instability. We expect this argument to be valid also when ω_p≫ω, in which case plasma depletion through blowout is negligible, but the EM field can still transfer energy into longitudinal plasma motion. Note that the arguments above are extremely conservative, since they are based on a number of optimistic assumptions that would maximize the instability. First of all, realistic accretion flows around BHs are neither spherical nor stationary, especially around spinning BHs. This would generically introduce mode-mixing and decoherence, rendering the instability less efficient. More importantly, even in the linear regime a disk-shaped accretion geometry can (partially) confine modes that are mostly distributed along the equatorial plane, but would naturally provide energy leakage along off-equatorial directions <cit.>. Finally, a sufficiently high plasma density in the corona could quench photon propagation in the first place <cit.>, at least at the linear level during the early stages of the instability. Although our results strongly suggest that nonlinearities completely quench the ordinary plasma-triggered BH superradiant instability, our framework can be directly used to explore more promising problems in other contexts, especially in beyond-Standard-Model scenarios. It would be interesting to study how nonlinear plasma interactions affect BH superradiant instabilities triggered by ultralight bosons, for example in the context of axion electrodynamics or in the case of superradiant dark photons kinetically mixed with ordinary photons. In the latter case, if the plasma frequency is much greater than the dark photon bare mass, the two vector fields decouple due to in-medium suppressions. In Refs. <cit.>, it was assumed that as the dark photon field grows and accelerates the plasma, the effect of the plasma frequency vanishes, as it is unable to impede the propagation of high-amplitude EM waves. While our results confirm these statements, they also prove that in generic settings the propagation will dramatically alter the plasma profile, and therefore suggest that even in these systems (as well as for axion-photon induced blasts <cit.>) a more careful analysis at the plasma frequency scale must be performed. We thank Andrea Caputo for interesting conversations. We acknowledge financial support provided under the European Union's H2020 ERC, Starting Grant agreement no. DarkGRA–757480 and support under the MIUR PRIN (Grant 2020KR4KN2 "String Theory as a bridge between Gauge Theories and Quantum Gravity") and FARE (GW-NEXT, CUP: B84I20000100001, 2020KR4KN2) programmes. We also acknowledge additional financial support provided by Sapienza, "Progetti per Avvio alla Ricerca", protocol number AR1221816BB60BDE. § DERIVATION OF THE 3+1 FORM OF THE FIELD EQUATIONS Here we perform the explicit computation to obtain the field equations in the 3+1 form. For the EM field we do not repeat the procedure and refer directly to <cit.>. We will thus consider only Eqs. (<ref>), (<ref>). §.§ Decomposition of Eq. (<ref>) Let us rewrite Eq. (<ref>) for clarity: u^ν∇_ν u^μ = e/m F^μν u_ν. We have to project it separately on n^μ and on Σ_t. §.§.§ Projection on n^μ Contracting Eq. (<ref>) with n_μ we obtain n_μ u^ν∇_ν u^μ = e/m F^μν u_ν n_μ.
In the right hand side we have e/m F^μν u_ν n_μ = - e/m E^ν u_ν = - e/m E^νu_ν, where in the last step we used the fact that E^μ lies on Σ_t. The left hand side requires more manipulation. In particular we have that n_μ u^ν∇_ν u^μ = u^ν∇_ν (n_μ u^μ) - u^μ u^ν∇_ν n_μ = -u^ν∇_νΓ - u^μ u^ν∇_ν n_μ. Let us now consider only the second term: u^μ u^ν∇_ν n_μ = u^μ u^νδ^λ_ν∇_λ n_μ = u^μ u^ν (^λ_ν - n^λ n_ν) ∇_λ n_μ = u^ν u^μ^λ_ν∇_λ n_μ - u^ν n_ν u^μ a_μ = u^ν u^μ^λ_νδ^σ_μ∇_λ n_σ + Γ u^μ a_μ = u^ν u^μ^λ_ν^σ_μ∇_λ n_σ - u^ν u^μ^λ_ν n^σ n_μ∇_λ n_σ + Γ u^μ a_μ. Here we used the definition of the projection operator ^μ_ν = δ^μ_ν + n^μ n_ν, the definition of Γ, and defined the 4-acceleration of the Eulerian observer, a_μ = n^ν∇_ν n_μ = D_μlnα. Given that n^μ n_μ = -1 the second term in the last line vanishes. Furthermore, by recognizing that K_μν = -^λ_ν^σ_μ∇_λ n_σ, we can write the first term as - K_μν u^μ u^ν. Substituting all these terms in Eq. (<ref>) we obtain -u^μ∇_μΓ + K_μν u^μ u^ν - Γ u^μ D_μlnα = - e/m E^μu_μ. Using now the decomposed form of u^μ (Eq. (<ref>)) we can write Γ = β^i ∂_i Γ - α^i ∂_i Γ + αΓ K_ij^i ^j - Γ^i ∂_i α + e/mα E^i _i. §.§.§ Projection on Σ_t Let us now project Eq. (<ref>) with ^μ_ν: ^μ_σu^ν∇_ν u^σ = e/m^μ_σ F^σν u_ν. In the right hand side we have e/m^μ_σ F^σν u_ν = e/m^μ_σ(n^σ E^ν - n^ν E^σ + ϵ^σνλB_λ) u_ν = - e/m n^ν u_ν E^μ + e/mϵ^μνλB_λ u_ν = e/mΓ E^μ + e/mΓϵ^μνλB_λ_ν. In the left hand side, instead, we start by substituting the decomposition (<ref>): ^μ_σu^ν∇_ν u^σ = ^μ_σu^ν∇_ν (Γ n^σ + u^σ) = ^μ_σ u^ν n^σ∇_νΓ + Γ^μ_σ u^ν∇_ν n^σ + ^μ_σ u^ν∇_νu^σ = Γ^μ_σ (Γ n^ν + u^ν) ∇_ν n^σ + ^μ_σ(Γ n^ν + u^ν) ∇_νu^σ = Γ^2 ^μ_σ a^σ - ΓK^μ_νu^ν + Γ^μ_σ n^ν∇_νu^σ + ^μ_σu^ν D_νu^σ, where in the third step we used the orthogonality between n^μ and ^μ_ν, while on the fourth step we used the definition of the 4-acceleration a_μ and the extrinsic curvature K_μν. The covariant derivative D_μ has been introduced according to the definition D_νu^μ = ^σ_ν^μ_λ∇_σu^λ. Let us now rewrite this equation in terms of ^μ: ^μ_σu^ν∇_ν u^σ = Γ^2 a^μ - Γ^2 K^μ_ν^ν + Γ^μ n^ν∇_νΓ + Γ^2 ^μ_σ n^ν∇_ν^σ + ^ν^μΓ D_νΓ + Γ^2 ^μ_σ^ν D_ν^σ. Now we wish to rewrite the spatial components of this equation in the form of an evolution equation, and for this purpose we use a procedure similar to the one in Eqs. (A14) - (A20) of <cit.>. First we note that for any 3-vector V^μ, _n V^ν = n^μ∇_μV^ν - V^μ∇_μ n^ν, so that ^ν_σ n^μ∇_μV^σ = ^ν_σ_n V^σ + ^ν_σV^μ∇_μ n^σ = ^ν_σ_n V^σ - V^μK^ν_μ. Now, the Lie derivative can also be written in terms of partial derivatives, and setting ν = i we obtain ^i_σ n^μ∇_μV^σ = ^i_σ_n V^σ - V^j K^i_j = 1/αV^i - β^j/α∂_j V^i + V^j/α∂_j β^i - V^j K^i_j , where we made use of the explicit expressions of ^μ_ν and n^μ. If we now substitute Eq. (<ref>) in the i-th component of Eq. (<ref>), we get ^i_σu^ν∇_ν u^σ = Γ^2 a^i + Γ^i n^ν∇_νΓ + ^i ^j Γ D_j Γ + Γ^2/α( ^i - β^j ∂_j ^i + ^j ∂_j β^i ) + Γ^2 ^j D_j ^i - 2Γ^2 K^i_j^j . Next, n^ν∇_νΓ = 1/α [Γ - β^i ∂_i Γ], which is given by Eq. (<ref>). Substituting in Eq. (<ref>) we obtain ^i_σu^ν∇_ν u^σ = Γ^2 a^i + Γ^2 ^i K_jl^j ^l - Γ^2 ^i ^j ∂_ j α/α + Γ^2/α( ^i - β^j ∂_j ^i + ^j ∂_j β^i ) + e/mΓ^i E^j _j + Γ^2 ^j D_j ^i - 2Γ^2 K^i_j^j . We are now ready to replace Eq. (<ref>) and the spatial components of Eq. (<ref>) in the original equation (<ref>) and isolate the evolution operator. The result is: ^i = β^j ∂_j ^i - ^j ∂_j β^i - α a^i - α^i K_jl^j ^l + α/Γe/m( - ^i E^j _j + E^i + ϵ^i j l B_l _j ) + 2 αK^i_j^j + ^i ^j ∂_j α - α^j D_j ^i . 
§.§ Continuity equation in 3+1 variables Let us now use the variables that we have introduced to rewrite the continuity equation Eq. (<ref>). Using the decomposition u^μ = Γ (n^μ + ^μ) and the definition of the electron density seen by the Eulerian observer, = Γ n_e, we can rewrite Eq. (<ref>) as 0 = ∇_μ [n_e Γ (n^μ + ^μ)] = ∇_μ [ (n^μ + ^μ)] = n^μ∇_μ + ^μ∇_μ + ∇_μ n^μ + ∇_μ^μ . Expressing n^μ∇_μ in terms of Lie derivatives, Eq. (<ref>) can be written as an evolution equation for : = β^i ∂_i + α K - α^i ∂_i - α∇_μ^μ. § CONVERGENCE TESTS We have evaluated the accuracy and the convergence properties of our code by checking how the constraint violations scale with the resolution in two test setups taken from the simulations presented in the main text. Specifically, we considered the following quantities CV_ Gauss = ∂_i E^i - e + ρ_(ions), CV_ Plasma = √(Γ^2(1 - _i ^i)) - 1, which, whenever nonzero, represent the violations of the Gauss law (<ref>) and of the normalization condition in Eq. (<ref>), respectively. In order to assess the reliability of our code, we show here the convergence in the two most challenging nonlinear regimes: WB and blowout (although not shown, the convergence of the linear regime is excellent). Starting from the former, we repeated the simulation with A_E = 1 whose characteristics are described in Sec. <ref>, using a lower resolution Δ x = Δ y = Δ z = 0.4, and increasing the grid size to [-4, 4] × [-4, 4] ×[-1450, 1150] in order to maintain 21 grid points along the x and y directions. We also doubled the time step to Δ t = 0.2, in order to keep the CFL factor constant. Figure <ref> shows the constraint violations CV_ Gauss (left panel) and CV_ Plasma (right panel) along the z axis at t = 830, slightly before WB happens (cf. lower panel of Fig. <ref>). In general, while both constraint violations have a region where they are dominated by noise, in the central region they show an excellent fourth-order scaling, and convergence is lost only for 65 ≲ z ≲ 75, where the WB phenomenon is taking place. We now consider the convergence in the blowout regime. We repeated the simulation with A_E = 1000 using grid steps Δ x = Δ y = Δ z = 0.4 while maintaining the CFL factor constant. As in the previous case we extended the grid to [-4, 4] × [-4, 4] ×[-750, 850] in order to have the same number of grid points along the transverse directions x and y. We show the scaling of CV_ Gauss and CV_ Plasma on the z axis at t = 190 in the left and right panel of Fig. <ref>, respectively. We can see that the code converges extremely well, except in the region just behind the peak of the plasma density (cf. lowest panel of Fig. <ref>). However, we note that the extension of the region where convergence is lost decreases as the resolution increases, and that fourth-order scaling is restored in the plasma-depleted region. Given the excellent convergence properties in the nonlinear regime, we conclude that the code is reliable and produces accurate results at the resolutions used in this work. § HOMOGENEITY OF THE FIELDS ALONG THE TRANSVERSE DIRECTION Throughout this work we used numerical grids whose extension along the transverse directions x and y is significantly smaller than in the z direction. This has the advantage of reducing considerably the computational cost, and can be done by exploiting the planar geometry of the system under consideration.
In this appendix, we wish to show that the homogeneity of the variables along the transverse directions is preserved even at late times during the evolution, so that this grid structure is compatible with the physical properties of the system for the entire duration of the simulations. For this purpose we consider the simulation in the nonlinear regime with A_E = 1000, and we extract the profiles of E^x, E^y, E^z, and 𝒟 along the x and y axes at z = 240. This operation is performed at t = 180, when the system is already in a blowout state, and the value of the z coordinate is chosen to be where the plasma is concentrated at this time. We show the results in Fig. <ref>, where the left and right panels represent the profiles along the x and y axes, respectively. We see that all the profiles are constant along the axes, and that the values are consistent between the two plots, confirming that the system maintains homogeneity along the transverse directions.
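As a complement to the convergence tests above, the following minimal sketch (hypothetical array names and illustrative resolutions, not the actual analysis script) shows how an observed convergence order can be extracted from constraint violations measured at two resolutions, assuming ||CV|| ∝ Δx^p over the well-resolved region:

```python
import numpy as np

def observed_order(cv_coarse, cv_fine, dx_coarse, dx_fine):
    """Observed convergence order p, assuming ||CV|| ~ C * dx**p.
    Uses the L2 norm of the constraint violation over the region of interest."""
    norm_c = np.sqrt(np.mean(np.asarray(cv_coarse) ** 2))
    norm_f = np.sqrt(np.mean(np.asarray(cv_fine) ** 2))
    return np.log(norm_c / norm_f) / np.log(dx_coarse / dx_fine)

# self-test with synthetic data that scales at exactly fourth order
dx_c, dx_f = 0.4, 0.2
z = np.linspace(-10.0, 10.0, 201)
cv_c = 1e-6 * np.sin(z) * dx_c ** 4
cv_f = 1e-6 * np.sin(z) * dx_f ** 4
print(observed_order(cv_c, cv_f, dx_c, dx_f))   # -> 4.0
```

Applied to the L2 norm of CV_Gauss or CV_Plasma restricted to the central, well-resolved part of the z axis, this estimator should return values close to 4 for a fourth-order scheme.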
http://arxiv.org/abs/2306.08972v1
20230615090747
Realistic Model for Random Lasers from Spin-Glass Theory
[ "Jacopo Niedda" ]
cond-mat.dis-nn
[ "cond-mat.dis-nn" ]
Realistic Model for Random Lasers from Spin-Glass Theory Jacopo Niedda July 31, 2023 ===================================================================================== To Claudia, my companion This work finds its place in the statistical mechanical approach to light amplification in disordered media, namely Random Lasers (RLs). The problem of going beyond the standard mean-field Replica Symmetry Breaking (RSB) theory employed to find the solution of spin-glass models for RLs is addressed, improving the theory towards a more realistic description of these optical systems. The leading model of the glassy lasing transition is considered, justifying the emergence of the 4-body interaction term in the context of RL semiclassical theory. In the slow amplitude basis, the mode couplings are selected by a Frequency Matching Condition (FMC) and the Langevin equation for the complex amplitude dynamics has a white noise, leading to an effective equilibrium theory for the stationary regime of RLs. The spin-glass 4-phasor Hamiltonian is obtained by taking disordered couplings, as induced by the randomness of the mode spatial extension and of the nonlinear optical response. A global constraint on the overall intensity is implemented to ensure the stability of the system. Standard mean-field theory requires the model to be defined on the fully-connected interaction graph, where the FMC is always satisfied. This approximation allows one to use standard RSB techniques developed for mean-field spin glasses, but it only applies to a very special regime, the narrow-bandwidth limit, where the emission spectrum has a width comparable to the typical linewidth of the modes. This prevents the theory from being applied to generic experimental situations, e.g., hindering the reproduction of the central narrowing in RL empirical spectra. It is of great interest, then, to investigate the model on the Mode-Locked (ML) diluted interaction graph. To address the problem, both a numerical and an analytical approach are followed. A major result is the evidence of a mixed-order ergodicity-breaking transition in the ML 4-phasor model, as revealed by exchange Monte Carlo numerical simulations. The joint study of the specific-heat divergence at the critical point and of the low-temperature behavior of the Parisi overlap distribution reveals both the second- and the first-order nature of the transition. This feature, already analytically predicted for the fully-connected model, appears to be solidly preserved in the diluted model. However, in numerical simulations preceding this work, the transition was found not to be compatible with mean-field theory, according to the estimated value of the scaling exponent of the critical region, which appeared to lie outside the boundaries corresponding to a mean-field universality class. We derive these bounds through a general argument for mean-field second-order transitions. New results from numerical simulations show that the previous ones were plagued by strong finite-size effects, as expected in simulations of a dense model such as the ML RL: the number of connections in the graph requires a number of operations which scales as the cube of the system size, thus forbidding the simulation of large enough sizes. To reduce these effects, we develop a simulation strategy based on periodic boundary conditions on the frequencies, for which the simulated model at a given size can be regarded as the bulk of the model with free boundaries pertaining to a larger size.
By means of this strategy, we assess that the scaling of the critical region is actually compatible with mean-field theory. However, the universality class of the model seems not to be the same as that of its fully-connected counterpart, suggesting that the ML RL needs a different mean-field solution. The possibility of a localization transition in the ML RL is also investigated. In this context, localization - else termed power condensation - is the phenomenon whereby a finite number of modes carries an extensive amount of light intensity. The presence of localization, as the global constraint on the overall intensity is tuned above a given threshold, is only theoretically possible in the presence of dilution with respect to the fully-connected case, where the high connectivity of the model guarantees equipartition of the constraint among all degrees of freedom. From the finite-size study of the localization order parameter, we assess that, despite some evidence of incipient localization, the glassy phase of light is not strictly speaking localized. Moreover, the study of the spectral entropy reveals that the low-temperature phase of the model is characterized by intensity equipartition breaking. We have termed “pseudo-localization” the transition to this hybrid phase, where light intensity is not completely localized and at the same time is not equipartitioned among the modes. One of the most relevant aspects revealed by the numerical results is that the critical temperature of the glass and of the pseudo-localization transitions is the same. This occurrence makes the ML RL an interesting problem where ergodicity breaking manifests itself in a twofold way: replica-symmetry breaking and condensation. The opportunity given by this model is to study both transitions at the same time, opening the way to more general studies for arbitrary nonlinearities and degrees of dilution. Supported by the numerical evidence that the ML RL is, indeed, a mean-field model, we address its analytical solution. Our approach is based on a technique developed for the Merit Factor problem, which has the same topology as the ML network. This is an ordered model which, due to its antiferromagnetic couplings, exhibits a frustrated glassy phenomenology. The presence of a glass transition is investigated through the replica method applied to the model in the space where the spin variables are mapped by a random unitary matrix. We call this version of the model the Random Unitary Model (RUM). A careful study of the saddle-point self-consistency equations of the RUM, both in the replica symmetric and in the one-step replica-symmetry breaking scheme, reveals the absence of a phase transition for this model and leads us to question whether the mapping between the original deterministic (though frustrated) model and the RUM is under control. The technique is then applied to the ML RL, where after averaging over the disordered couplings we pass to a generalized Fourier space by transforming the local overlaps with a random unitary matrix. The major difficulty of defining a global order parameter for the model and finding closed equations to determine it as a function of temperature is successfully addressed, with the introduction of a new order parameter, a superoverlap, which is a measure of the correlations among local overlaps. However, the solution suffers from the same problem as the RUM for the Merit Factor problem.
To the best of our knowledge, this represents the first tentative solution ever attained of a spin-glass model out of the fully-connected or sparse graph cases. First of all, I would like to thank my advisors, Luca Leuzzi and Giacomo Gradenigo. Thanks to Luca for introducing me to this fascinating research topic, for being patient and always taking me seriously, for his humanity and availability, qualities that are rare to find. Thanks to Giacomo for helping me in some difficult moments, for the many physics conversations, for giving me esteem and trust, and for the many pieces of advice. I also thank the referees of this thesis work, Juan Jesus Ruiz Lorenzo and Markus Muller, for their positive and encouraging reports. In particular, thanks to Markus for carefully correcting the thesis and for raising some interesting questions. I thank Daniele Ancora for supporting me in a difficult part of this work, providing his expertise. Thanks for his listening and friendship. I thank the Chimera group for welcoming me in a stimulating and pleasant environment, and, no less important, for funding all my research. Thanks to Giorgio Parisi for being the moral inspiration of this work and for finding the time to listen to us: it is an honor to feel like adding a small piece to a great story. A special thanks goes to Matteo Negri and Pietro Valigi, colleagues and friends, because it is nice to spend time with you talking about everything. I thank all the people who have crossed our dear π-room: thank you from the bottom of my heart for sharing the hardships of research and for making the toughest days easier. Finally, I would also like to thank Silvio Franz and Ada Altieri for hosting me in Paris for two visiting periods during my PhD. Thanks in particular to Silvio for teaching me so much and for showing me that research can be done by getting lost in long and passionate discussions, without the hurry of commitments. CHAPTER: INTRODUCTION Statistical mechanical models for spin glasses were first introduced in the '70s by Edwards and Anderson <cit.> for the study of certain magnetic alloys displaying an intriguing low temperature behavior, which significantly differed from ferromagnetism. In such systems, lowering the temperature did not lead to the onset of long-range order in terms of global magnetization, but rather to the freezing of the material in apparently random configurations. The problem of dealing with this structural and athermal kind of randomness proved to be hard also in the mean-field approximation <cit.>. It took almost a decade and a remarkable series of papers by Parisi <cit.> to lay the foundations of the mean-field theory of spin glasses and to deepen the knowledge about the spin-glass phase transition. The effort required the development of new mathematical techniques, such as the algebraic replica-symmetry breaking method and the probabilistic cavity approach. The physical scenario coherently revealed by these techniques is that at least at the mean-field level some kind of magnetic order arises at low temperature, where the system exhibits a behavior compatible with ergodicity breaking in multiple pure states non related by a symmetry operation and organized in a highly nontrivial structure <cit.>. Spin-glass theory, then, took the shape of the ideal settlement to rigorously frame the physical meaning of complexity and describe a number of out-of-equilibrium phenomena, including weak ergodicity breaking and aging, i.e. 
the phenomenon by which the relaxation of a system depends on its history <cit.>. As new spin-glass models with nonlinear interactions were considered <cit.>, it was soon understood that spin glasses could represent a powerful tool to describe a much larger class of systems spreading over many different fields of research, such as condensed matter physics, biophysics and computer science. The first and probably most studied applications can be traced in structural glasses <cit.>, the amorphous state reached by many supercooled liquids, when cooled fast enough to avoid crystallization, and neural networks <cit.>, the prototype of learning systems, which mimic the interactions among neurons in the brain. Nowadays, the list of systems and problems where spin-glass models and techniques have been applied is quite long, ranging from colloids <cit.> to granular materials <cit.>, from protein folding <cit.> to optimization and constrained satisfaction problems in computer science <cit.> and theoretical ecology <cit.>. All those systems, ubiquitous in science, where frustration leads to a complex structure of states may be described as spin glasses. If spin-glass theory represents the perfect framework for a large number of systems, it is also true that new insights on the theory have been acquired from many applications, such as the Random First Order Transition <cit.>, developed in the context of structural glasses, to describe the glass transition and the theory of the jamming transition for the packing of hard spheres <cit.>. For this reason, spin glasses can be fairly regarded as one of the most interdisciplinary line of research in statistical mechanics. However, despite the number of applications, the mean-field theory of spin glasses has not yet found a clear correspondence in experiments on physical systems. In particular, its most prominent feature, replica-symmetry breaking, has been a long debated issue, leading to question whether it is just an artifact of long-range interactions, rather than an actual physical mechanism <cit.>. One would be naturally interested in understanding what of the mean-field picture remains true in finite dimension, that is in the case of the vast majority of physical systems described within the framework of spin-glass theory, which are characterized by rapidly decaying interactions. Unfortunately, unlike the case of ferromagnetism, an approach based on a renormalizable field theory is still missing for spin glasses, albeit very hardly investigated <cit.> (see also Ref. <cit.> for more recent approaches). However, among the applications of spin-glass theory there is a fortunate one, to which the present work is devoted, which is very promising as an experimental benchmark of replica-symmetry breaking: the study of optical waves in disordered media with gain, namely random lasers. Indeed, recently the order parameter of the replica symmetry breaking theory has been experimentally measured in these optical systems, for which the mean-field theory is exact <cit.>. §.§.§ Random Lasers A Random Laser (RL) is made of an optically active medium with randomly placed scatterers <cit.>. As in standard lasers, the optical activity[With optical activity of the medium we refer to the inversion of the atomic level population of the material by means of external energy injection, which is necessary for stimulated emission. In optics, optical activity also stands for the ability of a substance to rotate the polarization plane of light passing through it.] 
of the medium provides the gain, whose specific relation with the frequency of the radiation depends on the material. However, random lasers differ from their ordered counterpart both in the inhomogeneity of the medium and in the absence of a proper resonating cavity, which accounts for feedback in standard lasers. In order to have lasing without a cavity some other mechanism at least for light confinement must exist, which manages to overcome the strong leakages of these systems. Since Letokhov's groundbreaking work <cit.>, where light amplification in random media was first theoretically predicted, the trapping action for light has been attributed to the multiple scattering with the constituents of the material. The nature of the feedback, instead, whether it was resonant or non-resonant[Non-resonant or incoherent feedback leads to amplified spontaneous emission (ASE) or superluminescence <cit.>, which is light produced by spontaneous emission optically amplified by stimulated emission. In this case, interference effects are neglected, and the laser output is only determined by the gain curve of the active medium.], remained intensely debated for a long time and with it the nature of the modes of RLs <cit.>. In the original theory by Letokhov only light intensity was considered, with phases and interference not playing any role in mode dynamics. A key finding obtained by means of a diffusion equation with gain is that there is a threshold for amplification, when the volume of the medium is sufficiently large with respect to the gain length. The diffusive limit applies to the case when the mean free path of the photon with respect to scattering is much larger than its wavelength and, at the same time, much smaller than the average dimension of the region occupied by the medium <cit.>. In this approach, above the threshold the emission spectrum is predicted to be continuous and peaked in the frequency corresponding to the maximum gain. These features were observed in early experiments <cit.>, fueling the idea that the notion of modes looses its meaning in RLs. Later experiments, based on more accurate techniques and spectral refinement, revealed the emergence of sharp peaks in the emission spectra of RLs on top of the global narrowing as the external pumping was increased <cit.>. The observation of highly structured and heterogeneous spectra brought evidence in favor of the existence of many coupled modes with random frequencies. Studies on photon statistics <cit.> confirmed this idea, by showing that the intensity of light emitted at the peak frequencies exhibits a Poisson photon count distribution, as in the case of standard multimode lasers. In view of these experiments, it was generally accepted that random lasing is, in fact, characterized by a resonant feedback mechanism, which induces the existence of well-defined cavity modes. This idea is also supported by more recent results drawn from numerical simulations based on the semiclassical theory of RLs <cit.>. The physical picture that one has to bear in mind is the following: the multiple scattering of light with the randomly placed scatterers not only confines part of the spectrum inside the medium, but also allows for the existence of cavity modes with a lifetime long enough to compete for amplification. 
The key role of scattering in random lasing is quite remarkable, especially if one thinks that in laser theory scattering is usually considered to be deleterious to the lasing action, since it is responsible for losses disturbing the intensity and directionality of the output. The modes of RLs are many, they are characterized by a complex spatial profile of the electromagnetic field and in most RLs they are extended[Just a note on the use of some words, which may be misleading: the modes of RLs are confined in the medium, in the sense that their spatial extension is comparable with the characteristic length of the sample material, but not properly localized in the sense of Anderson localization (see the next paragraph); they are extended over the whole volume of the sample, as if the sample itself represents a cavity.] and coupled. In a fascinating way, one can say that random lasers are “mirror-less” systems, but not “mode-less” <cit.>. What is not yet completely understood is the physical source of the oscillating modes and of the corresponding peaks in RLs spectra. Some attempts to explain the occurrence of well defined resonances have been made in terms of light localization, the counterpart for the photons of Anderson localization of electrons <cit.>, which was claimed to have been experimentally revealed in Refs. <cit.>. The presence of localization was inferred from measurements of the deviation from diffusion theory, e.g., through the study of photon time of flight. However, it has been later theoretically proved <cit.> that light localization can not take place in 3D, due to the vectorial nature of light, as revealed by a comparison between the spectra of the random Green matrix describing the propagation of light from one atom to another in the vector case and in the scalar approximation[The difference with respect to electron localization lies in the different role played by polarization with respect to electron spin: in the case of light, elementary excitations from one atom to another can be mediated not only by the transverse electromagnetic waves but also by the direct interaction of atomic dipole moments, which is accounted for by the longitudinal component of the electromagnetic field <cit.>. While the former phenomenon would be reduced by increasing the number density of atoms, the latter becomes more and more efficient as the typical distance between neighboring atoms decreases.]. This is coherent with the observation that in many materials the modes, though confined inside the sample, are extended all over its volume. The deviations from diffusion theory mistaken for Anderson localization were then traced back to experimental effects, such as delays in fluorescence <cit.>. Therefore, though in less than 3D it may truly be observed in particular random lasers <cit.>, Anderson localization can not be taken as a general feedback mechanism for these systems. Whatever the physical mechanism leading to the existence of cavity modes in RLs may be, a multimode theory of RLs based on quantum mechanics principles has to include the openness of the cavity which leads to a nonperturbative effect of the leakages and the inhomogeneity of the medium, which causes the irregular spatial structure of the electromagnetic field. Though a complete quantum theory of light amplification in random media is still missing, when treated in a semiclassical perspective <cit.>, random lasers display two basic features of complex disordered systems: nonlinear interactions and disorder. 
Evidence that random lasing may be a complex phenomenon comes from more recent experiments <cit.>, which have revealed a new feature. The positions of the spectral peaks were already known to change, if different parts of a sample were illuminated, as a clear consequence of medium heterogeneity. These experiments show a very peculiar behavior in the temporal and spectral response of RLs, when taking shots of the spectrum produced by exactly the same piece of sample at different times, each one corresponding to a pump pulse. The positions of the random scatterers as well as the external conditions are kept fixed all along the data acquisition. The intriguing result is that each shot shows a different pattern of the peaks (see Fig. <ref>), meaning that, at variance with standard multimode lasers, there is no specific frequency which is preferred, but depending on the initial state, with the disorder kept fixed, the narrow emission peaks change frequency every time[Incidentally, this phenomenon may have also contributed to early observations of continuous RL spectra, where data were averaged over many shots, smoothing the spectral profile.]. This behavior strongly resembles the freezing of magnetic alloys or supercooled liquids in random configurations, making the idea of a spin-glass theory of random lasing quite tempting. Moreover, statistical mechanics is not new to lasing systems: the so-called Statistical Light-mode Dynamics (SLD) proved to be a successful way to deal with standard multimodal lasers, where the number of modes is high enough and nonlinear effects are present <cit.>. The main merit of SLD is to show that an effective thermodynamic theory of these photonic systems is possible, where noise, mainly due to spontaneous emission, can be treated in a non-perturbative way. It may seem inappropriate to develop an equilibrium theory for lasers, which are out-of-equilibrium systems by definition, being constantly subjected to external energy injection. However, a stationary regime is achieved in such systems thanks to gain saturation, a phenomenon connected to the fact that, as the power is kept constant, the emitting atoms periodically decade into lower states, saturating the gain of the laser. This justifies the introduction of an equilibrium measure, giving weights to steady lasing states. The extension of the SLD approach to RLs has led quite naturally to the development of a research line devoted to the theoretical modeling of optical waves in random media within the framework of spin-glass theory <cit.>. §.§.§ A Glassy Random Laser The two main goals of the spin-glass approach to RLs are the following: (i) to provide a theoretical interpretation of the lasing phase of optically active random media in terms of glassy light, which can be regarded as the amorphous phase of light modes; (ii) to create the opportunity of experimentally testing the theory of spin glasses, and in particular replica symmetry breaking, on systems in which the glassy state is much easier to access than in structural and spin glasses. Regarding (ii), one reason why this is the case is that the dynamics of light modes is incomparably faster with respect to the dynamics of particles in liquids or condensed matter systems, so that an effective equilibrium state is easier to reach for RLs. 
Incidentally, this is also the reason why by glassy light, here, it is only meant that RLs seem to be characterized by a multi-valley landscape with many possible equilibrium states: phenomena like aging, memory and rejuvenation, which are typical of the dynamics of supercooled liquids, may not be observable on the short timescale in which a laser reaches the steady state. The other – and maybe more important – reason is that many RLs are naturally represented by a statistical mechanical system with long-range interactions as a consequence of the fact that the effective mode couplings are determined by the spatial overlap among the wave functions of the modes, which can be extended over the whole medium. Another merit of this approach is to provide a theoretical framework for the analysis of the mode-locking process in multimode lasers, both standard and random. In standard lasers, mode-locking entails the formation of very short, regularly spaced pulses in the laser output <cit.>. To produce ultrafast multimode lasers, special devices are required which sustain the pulse formation through nonlinear couplings selected by a particular rule called the frequency-matching condition (FMC). Given four modes, they form an interacting quadruplet only if their frequencies satisfy the following relation FMC: | ω_1 -ω_2 +ω_3 -ω_4 | ≲γ, where γ represents the typical single mode linewidth. In random lasers, pulse formation is, in principle, hindered by the disordered spatial structure of the electromagnetic field and by the random frequency distribution. Indeed, it has never been observed in such systems. However, nonlinear interactions and FMC are intrinsic to a RL and do not require ad hoc devices. Though the possibility of a pulsed random laser is still only hypothetical, evidence of a self-induced mode-locked phase has been recently found in Ref. <cit.>. Within the statistical mechanics approach, the formation of a mode-locked phase is interpreted as a phase transition: while increasing the pump energy, the system leaves a random fluctuating regime to enter a locked one, where the oscillation modes have different phases and intensities, but they are fixed, “locked” and “frozen”. The statistical mechanics description of RLs has led to the definition of the Mode-Locked (ML) p-phasor model, a mixed p-spin model (p=2 and 4, i.e. both two and four body interactions) with complex variables constrained on a N-dimensional sphere and quenched disordered couplings. In this framework, the oscillation modes of the electromagnetic field are represented by N phasors placed on the nodes of the interaction graph and the total optical intensity of the laser is fixed by the spherical constraint on the amplitudes of the phasors. The model can be adapted to describe multimode lasers in the presence of an arbitrary degree of disorder and non-linearity, resulting in a comprehensive theory of the laser mode-locking transition in both random and standard lasers. The interaction graph is dense because the interactions among the modes are long range as a consequence of the evidence of extended modes. The specific topology of the graph is defined by the FMC, which yields a deterministic dilution of the interaction network. In Refs. <cit.> the model has been analytically solved in a certain regime compatible with a fully connected graph of interaction, where standard mean-field techniques for spin-glass models can be applied. 
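To make the deterministic dilution induced by the FMC concrete, the following minimal sketch (illustrative code, not part of the original analysis; the mode number is arbitrary) enumerates the quadruplets allowed on a linear frequency comb ω_k = ω_0 + k δω, for which the FMC reduces to the index condition k_1 - k_2 + k_3 - k_4 = 0, and counts how many quadruplets each mode participates in:

```python
import numpy as np
from math import comb

def fmc_quadruplets(n_modes):
    """Unordered quadruplets {k1, k2, k3, k4} of distinct comb indices that can be
    ordered so that k1 - k2 + k3 - k4 = 0 (the index form of the FMC on a comb)."""
    quads = set()
    for k1 in range(n_modes):
        for k2 in range(n_modes):
            for k3 in range(n_modes):
                k4 = k1 - k2 + k3
                if 0 <= k4 < n_modes and len({k1, k2, k3, k4}) == 4:
                    quads.add(frozenset((k1, k2, k3, k4)))
    return quads

N = 32
quads = fmc_quadruplets(N)
participation = np.zeros(N, dtype=int)
for q in quads:
    for k in q:
        participation[k] += 1

print(f"{len(quads)} FMC quadruplets vs {comb(N, 4)} in the fully connected case")
print("edge mode vs central mode participation:", participation[0], participation[N // 2])
```

For moderate N one finds of order N^3 surviving quadruplets, to be compared with the order N^4 quadruplets of the fully-connected case, and modes at the center of the comb enter many more quadruplets than those at the edges, which is the combinatorial origin of the central narrowing of mode-locked spectra; in the fully-connected case used for the analytical solution, by contrast, every quadruplet is retained.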
This particular regime is the narrow-bandwidth limit, where the typical linewidth of the modes is comparable with the entire emission bandwidth of the laser. The replica solution of the fully-connected model already presents a very rich phenomenology, with various kinds of replica symmetry breaking, corresponding to nontrivial optical phases. In this context, shot-to-shot fluctuations of the emission spectra are shown to be compatible with an organization of mode configurations in cluster of states similar to the one occurring in spin glasses. Such correspondence relies on the equivalence between the distribution of the Intensity Fluctuation Overlap (IFO), which can be experimentally measured, and the distribution of the overlap between states, the order parameter of the spin-glass transition <cit.>. Experimental evidence of replica symmetry breaking in the IFO probability distribution function has been found in Ref. <cit.>. However, a complete understanding of the physics of RLs requires to go beyond the narrow-bandwidth limit and needs to incorporate in the description the FMC, which is an essential ingredient of the ML p-phasor model for the reproduction of the experimental spectra, see Fig. <ref>. For combinatorial reasons modes at the center of the spectrum are frequently selected by the FMC, so that when the external pumping is increased and the nonlinear interactions become dominant, the spectrum develops a central narrowing on top of the gain profile curve (which, instead, prevails in the fluorescence regime). The inclusion of the FMC is the main goal of this work, where the problem of dealing with the diluted mode-locked graph is addressed both numerically and analytically. It is worth stressing that, besides being of interest for a more realistic description of RLs, the subject of this work is also fascinating from a purely theoretical point of view. In fact, we deal with a nonlinear (4-body) disordered model with complex spherical variables and couplings selected according to a deterministic rule. The presence of a 2-body interaction term, which takes into account the net gain profile of the medium and the radiation losses, allows for the competition between linear and nonlinear interactions, which is known to be responsible for mixed-order replica symmetry breaking. In fact, in the fully-connected case, the model is the generalization of the (2+p)-spin (with p=4) model <cit.> to complex variables, both magnitudes and phases. One of the most interesting features of the model is the dilution of the interaction graph, which is of the order of the system size N. This leaves the 4-body interaction network still dense, i.e. still 𝒪(N^2) connections per mode, which is an intermediate situation between the fully-connected (𝒪(N^3) interactions per mode) and the sparse case (each mode participating in 𝒪(1) interaction terms). To the best of our knowledge, no spin-glass model has been analytically solved in this particular regime of dilution. Moreover, given the presence of a global quantity conserved through a hard constraint (i.e. the total optical power) the model offers the possibility of studying the occurrence of a power condensation transition in the space of the modes, especially in relation to the breaking of ergodicity, which is signaled by replica-symmetry breaking. The possibility of intensity localization is also suggested by the sharp peaks in the spectra of Fig. 
<ref>, which are evidence of the fact that the total value of the intensity is not homogeneously parted among the modes and might be a precursor to a sharp condensation of the whole intensity on 𝒪(1) modes. §.§.§ Organization of the Thesis The Thesis is divided in two parts, a numerical and an analytical one, which are preceded by an introductory chapter on the mean-field theory of RLs. The organization of the chapters follows the natural development of the research: after acquiring confidence with the status of the art, the results of numerical simulations are presented and discussed in Part I; then, inspired by the physical insights obtained through the simulations, in Part II the analytical approach is developed. Three Appendices contain much of the technicalities of the computations. Each chapter opens with a brief introduction to the topic to which it is devoted. In what follows, we sketch the contents of each chapter. * Chapter <ref> contains an Introduction to the spin-glass theory of RLs, where the main analytical results obtained within the mean-field fully-connected approximation are described in some detail. After introducing the reader to the statistical mechanics approach to standard (ordered) multimode lasers, the spin-glass model for random laser is derived starting from the semiclassical laser theory for open and disordered systems in the system-and-bath approach. The presence of off-diagonal linear terms of interactions among the modes is related to the openness of the system, while the presence of nonlinearity accounts for the light-matter interaction at the third order in perturbation theory in the mode amplitudes. The relevant approximations which are needed in order to obtain the mean-field fully-connected model are described. Then, the replica computation to derive the quenched average of the free energy is considered and the phase diagram of the model is described. The last section is devoted to an introduction to the IFO and how they are related to the Parisi overlap in the mean-field fully-connected theory. * Chapter <ref> deals with the first attempt at including the dilution effect due to the FMC in the theory through numerical simulations. The numerical technique used to simulate the model is described in detail and the results of Ref. <cit.> are carefully reviewed. The density of the model interaction graph represents an additional difficulty with respect to those already present in Monte Carlo simulations of finite-dimensional spin-glass models: not only the relaxation to equilibrium is hindered by the presence of local minima in the free energy landscape, but each attempt of changing configuration has a computational complexity which scales as the square of the system size. Moreover, especially for the study of non-self-averaging quantities, many samples corresponding to different realizations of the disordered couplings have to be simulated. In order to deal with these difficulties, a Parallel Tempering Monte Carlo algorithm has been developed and parallelized for Graphic Processing Units. From the simulations, the typical behavior of a Random First-Order Transition is revealed for the simulated model, though the results are plagued by strong finite-size effects, making the assessing of the universality class of the model a nontrivial task. * Chapter <ref> is devoted to a refinement of the finite-size scaling analysis of the glass transition for the ML 4-phasor model. Many of the results presented here are contained in Ref. <cit.>. 
In order to reduce the finite size effects in numerical simulations of the mode-locked glassy random laser, two strategies have been exploited: first, simulations with larger sizes and a larger number of disordered samples have been performed; secondly, and more remarkably, a version of the model with periodic boundary conditions on the frequencies has been introduced in order to simulate the bulk spectrum of the model. The results obtained by a more precise finite-size scaling technique allow us to conclude that the ML 4-phasor model is indeed compatible with a mean-field theory, though it may be in a different universality class with respect to its fully-connected counterpart. This is the main output of this chapter; then, the study of the glass transition is completed by presenting results which pertain to various overlap probability distribution functions. * Chapter <ref> is devoted to the numerical study of the power condensation phenomenon in the ML 4-phasor model. The results presented here are contained in Ref. <cit.>, where evidence of an emergent pseudo-localized phase characterizing the low-temperature replica symmetry breaking phase of the model is provided. A pseudo-localized phase corresponds to a state in which the intensity of light modes is neither equipartited among all modes nor really localized on few of them. Such a hybrid phase has been recently characterized in other models, such as the Discrete Non-Linear Schrödinger equation <cit.>, just as a finite size effect, while in the low temperature phase of the glassy random laser it seems to be robust in the limit of large size. The differences between such non-interacting models and generic p-body nonlinear interacting models are highlighted: in particular, the role played by the dilution of the interaction network is clarified. * Chapter <ref> is the first analytical chapter of this Thesis and the only one which is not directly dedicated to the ML 4-phasor model for the glassy random laser. The similarity between the topology of the mode-locked graph and the structure of the Hamiltonian of the Bernasconi model for the Merit Factor problem <cit.>, has led us to devote our attention to this model first. Although it is a model with long-ranged ordered interactions, finite-size numerical studies, which have been replicated in this work, point in the direction of a glassy behavior at low temperature. The solution technique proposed in Ref. <cit.>, which is based on quenched averaging over the unitary group of transformations of the spin variables, is carefully analyzed and completed through the study of the saddle-point equations with different ansatzes of solution. No evidence of phase transition at finite temperature has been found with one step of Replica Symmetry Breaking (RSB), up to the precision of our analysis; however, we believe that the solution technique may be the right tool to address the computation of the free energy in the mode-locked random laser. The three Appendices to this chapter deal respectively with the integration over the Haar measure of the unitary group and the Replica Symmetric (RS) and 1RSB details of the computation. * Chapter <ref> is devoted to the proposal of a new mean-field theory for the mode-locked glassy random laser. 
The quenched average over the disordered couplings leads to a long-range ordered matrix field theory in the local overlap, which is characterized by a Hamiltonian formally similar to the one of the Merit Factor problem, but at the level of the local overlaps rather than of the spins. The technique developed for the Bernasconi model is then applied to the model of interest, allowing us, after averaging over the unitary group, to introduce a global order parameter, which we have called superoverlap. As the global overlap usually represents a two-point correlation between spin variables, the superoverlap denotes a correlation between local overlaps. The RS and 1RSB self-consistency equations have been derived, and their study is in progress. * Chapter <ref> contains the conclusions of this Thesis and a discussion on the research lines opened by the present work on the topic. Among them we mention the integration of the saddle point equations of the ML 4-phasor model, the numerical simulation of models with realistic frequency distributions and gain profiles, a detailed analysis of the comparison between the experimentally measured and the numerically computed overlap distributions, considering thermalization, size and time-averaging effects CHAPTER: MEAN-FIELD THEORY OF THE GLASS TRANSITION IN RANDOM LASERS In this chapter the most salient features of the mean-field spin-glass theory of random lasers are described. Before getting to the heart of the discussion, some background knowledge is provided about multimode lasing systems, in order to make the reader confident with the most relevant physical properties of these systems from a statistical mechanics point of view. The case of standard multimode lasers is discussed first, since it represents a constant basis for comparison for the more general statistical theory of random lasers. Given the large number of modes (10^2-10^9 in long lasers) and the stabilizing effect of gain saturation, an effective thermodynamic theory can be developed for the stationary regime <cit.>. The main outcome of the mean-field analysis of these systems is that the onset of the mode-locking regime <cit.>, can be interpreted as a noise driven first-order phase transition <cit.>. Then, we briefly review the system-and-bath approach to random laser theory developed in Refs. <cit.> to deal with the openness of the cavity and the light-matter interaction. In our perspective, the main merit of this approach is to provide reasonable explanations for the origin of all the essential elements of the general spin-glass model of a RL, starting from the semiclassical approximation to the quantum dynamics of the electromagnetic field in an open and disordered medium. In the second part of the chapter, the spherical (2+4)-phasor model <cit.>, which represents the leading mean-field spin-glass model for RLs, is presented in connection with the semiclassical derivation. The particular regime where the theory applies is carefully described, by presenting all the approximations which make the model compatible with mean-field fully-connected theory. After a brief summary of the replica method for the solution of quenched disordered systems, the replica computation for the model of interest is reviewed in its main steps and the results are described. The general phase diagram of the model is presented, with particular attention to the glass transition, which will be studied in the rest of this work. 
The phenomenology of the model is very rich already in the fully-connected case, where the system exhibits four different phases corresponding to different regimes in the output of a laser depending on the amount of energy injected into the system and on the degree of disorder of the medium. Moreover, the breaking of replica symmetry occurs with three different kind of structures depending on the degree of non-linearity. In the last section, the theory is put in correspondence with experiments through the study of the overlap among intensity fluctuations <cit.> (i.e. IFO), an experimentally measurable quantity whose analytical counterpart can be expressed in terms of the Parisi overlap in the fully-connected approximation <cit.>. § STATISTICAL LIGHT-MODE DYNAMICS Though concepts borrowed from phase transition physics were already present in the seminal work of Lamb on multimode lasers <cit.>, it is not until the early '00s that statistical mechanics methods were systematically applied to the study of optical systems. Statistical Light-mode Dynamics (SLD) is an approach developed by Gordon and Fisher <cit.> to deal with open problems regarding the mode-locking phenomenon in multimode lasers. Mode-locking is a consequence of the fact that, unlike a conventional laser, a mode-locked laser oscillates among longitudinal modes whose frequencies are in a coherent relationship. In standard lasers the interaction among axial modes necessary for pulse formation is induced by ad hoc devices: either the system is made time dependent by means of an amplitude modulator, or a suitable nonlinearity, as the one provided by a saturable absorber[Saturable absorption is the property of a material with a certain absorption loss for light, which is reduced at high optical intensities. Since the absorption coefficient depends on the light intensity, the absorption process is nonlinear.], is added to the system dynamics. Between the two methods, which are commonly referred to as, respectively, active and passive mode-locking, only the latter is known to produce ultra-short pulses (of the order of femtoseconds). The mode-locking theory developed in the seventies <cit.> has many merits, such as the prediction of the pulse shape and of its duration. However, the underlying mechanism to pulse formation remained unclear, until the SLD approach was formulated. It was already known that pulse formation may be achieved when the optical power reaches a certain threshold (besides the one needed for the onset of lasing) and that the emergence of pulses upon reaching this threshold is abrupt. Several hypotheses were put forward to explain this phenomenon, by identifying some mechanism which opposed to mode-locking <cit.>, but no one was really satisfactory. In most of these approaches, the antagonist of optical power for the onset of pulse formation was correctly identified with noise, which however, was treated as a small perturbation. In fact, noise plays a central role in the dynamics of a laser: besides the usual sources of noise to which a physical system is subjected, in lasers a fundamental source of noise is represented by spontaneous emission, which can also be amplified due to optical activity. By treating noise in perturbation theory, many interesting features of the system can be missed when the noise is large. The main novelty introduced by SLD is represented by the inclusion of noise in the theory in a non-perturbative way, as an effective temperature. 
This has led to the first many-body thermodynamic theory of multimode lasers, where the onset of mode-locking is interpreted as a phase transition driven by the ratio between external pumping and noise. As the energy pumped into the system makes the interactions strong enough to overcome the noise, global correlations arise among the phases of the modes, sharply dividing the unlocked and locked thermodynamic phases. In this framework, the difference between active and passive mode-locking becomes evident: when considered from the point of view of the interaction networks, the passive case corresponds to a long-range model <cit.>, where a global order can arise below a certain level of noise, whereas the active case corresponds to a one-dimensional short-range model <cit.>, where a phase transition can in principle occur only at zero temperature[Amplitude modulation produces sidebands of the central frequency of the spectrum, say ω_0, at the neighboring frequencies ω=ω_0 ±δω, where δω is the frequency spacing, which lock the corresponding modes to the central one, and so on. This leads to nearest-neighbor interactions on a linear chain.]. Hence, the fragility of active mode-locking can be interpreted as a manifestation of the lack of global ordering at finite temperature in the one-dimensional spherical spin model <cit.>: any weak noise breaks a bond between two modes, thus eliminating global ordering. In the following, we focus on the theory of passive mode-locking, which is the most interesting one for the random laser case. In an ideal cavity, i.e. neglecting the leakages, the electromagnetic field can be expanded in N normal modes E(r,t) = ∑_k=1^N a_k(t) e^-i ω_k t E_k(r) + c.c. where the presence of nonlinearity makes the complex amplitudes a_k(t) time dependent. In the physical situation, N corresponds to the number of distinguishable resonances selected according to the distance between the mirrors. If the frequencies of adjacent modes are too close with respect to the spectral resolution, then the actual number of cavity modes is larger than the number of bins in the revealed spectrum. The frequency distribution of the modes is that imposed by a Fabry-Perot resonator, namely a linear comb: ω_k=ω_0 + k δω,           k∈[-N/2,N/2] where ω_0 is the central frequency of the spectrum and δω is the frequency spacing. In the high finesse limit, if we denote by Δω the bandwidth of the entire spectrum and by γ the typical linewidth of the modes[Even if photons are emitted exactly with the atomic frequency ω_ij = (E_i-E_j)/ħ, broadening effects give the resonator modes a width γ. These effects can be homogeneous, like collision broadening, leading to a Lorentzian line-shape function, or inhomogeneous, like Doppler broadening, leading to a Gaussian line-shape function: the Voigt profile takes into account both kinds of broadening <cit.>. In principle, then, each mode has a different linewidth. By neglecting these effects one would have a frequency distribution made of sharp delta peaks.] then we have γ≪δω≪Δω. Moreover, we consider the slow amplitude mode basis, in which, given a mode with frequency ω_k, the time dependence of the amplitude of such a mode is on a time scale much larger than ω_k^-1.
Lasing modes are by definition slow amplitude modes, since their expression in the frequency domain must be approximately equal to a delta centered in their frequency: a_k(t) e^-iω_k t→∫ dt a_k(t) e^-iω_k t e^iω t≈ a_k(t) δ(ω-ω_k), where the Fourier transformation of the electromagnetic field basically reduces to a time average over the fast phase oscillations. The SLD description of passive mode-locking can be obtained by considering the standard Langevin master equation <cit.> da_l/dt = (G_l + i D_l) a_l(t) + (Γ - i Δ) ∑_i-j+k=l a_i(t) a^*_j(t) a_k(t) + η_l(t), where G_l is the net gain profile, i.e. the gain minus the losses, D_l is the group velocity dispersion coefficient, Γ is the self-amplitude modulation coefficient resulting from saturable absorption and Δ is the self-phase modulation coefficient responsible for the Kerr lens effect. The nonlinear term in the dynamics is characterized by the selection rule i-j+k=l, which comes from averaging away the fast phase oscillations in the slow amplitude basis. This is actually a condition on the frequencies, the so-called frequency matching condition (FMC), which reduces to a relation among indices because of Eq. (<ref>). The noise η_l(t), mainly due to spontaneous emission, is generally assumed to be Gaussian, white and uncorrelated: <η_l(t_1) η^*_k(t_2) > = 2 T δ_lk δ(t_1-t_2),           <η_l(t_1) η_k(t_2) > = 0, where T is the spectral power of the noise, which has a dependency on the actual temperature of the sample (i.e. the laboratory temperature in the experimental case). We distinguish two kinds of dynamics in Eq. (<ref>): a dissipative one, which involves gain and saturable absorption, and a dispersive one. In order to develop an effective thermodynamic approach, a necessary requirement is laser stability. The total optical intensity of the laser ℰ = ∑_k=1^N |a_k|^2 is a constant of motion only in the purely dispersive limit, i.e. when G_l=Γ=0. However, in the general case the stability of the laser is ensured by gain saturation, the effect whereby the gain decreases as the intensity increases <cit.>. In laser theory, gain saturation is usually implemented by assuming a time-dependent gain, e.g. for a flat gain curve G = G_0/(1+ℰ/E_sat), where E_sat is the saturation power of the amplifier and G_0 is the unsaturated gain. In Ref. <cit.> a simpler alternative has been proposed: one can assume that at each time G_0 takes exactly the value necessary to keep the optical power fixed to its original value. The precise value of G_0 can be obtained by imposing that ∂ℰ/∂ t = 0 and exploiting the equation of motion (<ref>). This way, the stabilizing effect of gain saturation can be modeled by considering a time-independent gain profile G_l and imposing a hard constraint on the total intensity, which forces the dynamics on the hypersphere ℰ=E_0. We refer to this choice as the fixed-power ensemble <cit.>. This hard constraint might be relaxed by studying the dynamics of the overall total intensity under saturation, which evolves on a much larger time scale than the dynamics of the single mode phasors. The fluctuations in a variable-power ensemble might then be studied, in a way that closely resembles the relation between ensembles in statistical mechanics <cit.>. We will come back to this topic in Chap. <ref>. In the purely dissipative limit, i.e. D_l = Δ =0, it has been shown in Ref. <cit.> that the stationary distribution of configurations, solution to the Fokker-Planck equation associated to Eq. (<ref>), tends to a Gibbs-Boltzmann measure.
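As an illustration of the dynamics just described, here is a minimal sketch of a simulation of the master equation in the fixed-power ensemble (assumptions: a flat net gain profile, the purely dissipative limit D_l = Δ = 0, illustrative parameter values, and a simple Euler–Maruyama discretization; this is not the simulation code used in this work):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 12                      # number of modes (illustrative)
eps = 1.0                   # intensity per mode, so that E_0 = eps * N
G = 0.05 * np.ones(N)       # flat net gain profile (illustrative)
D = np.zeros(N)             # group-velocity dispersion: purely dissipative limit
Gamma, Delta = 0.1, 0.0     # self-amplitude / self-phase modulation coefficients
T = 0.01                    # spectral power of the noise
dt, n_steps = 1e-3, 2000

# random initial phasors on the sphere sum_k |a_k|^2 = eps * N
a = np.sqrt(eps) * np.exp(2j * np.pi * rng.random(N))

def nonlinear_term(a):
    """Sum over i - j + k = l of a_i * conj(a_j) * a_k (index form of the FMC)."""
    out = np.zeros_like(a)
    n = len(a)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                l = i - j + k
                if 0 <= l < n:
                    out[l] += a[i] * np.conj(a[j]) * a[k]
    return out

for _ in range(n_steps):
    # complex white noise with <eta_l(t1) conj(eta_k(t2))> = 2 T delta_lk delta(t1-t2)
    eta = np.sqrt(T * dt) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    a = a + dt * ((G + 1j * D) * a + (Gamma - 1j * Delta) * nonlinear_term(a)) + eta
    # fixed-power ensemble: rescale back onto the sphere sum_k |a_k|^2 = eps * N
    a *= np.sqrt(eps * N / np.sum(np.abs(a) ** 2))

print("final mode intensities:", np.round(np.abs(a) ** 2, 3))
```

With the constraint enforced at every step, long runs of this type relax towards the stationary regime whose measure is discussed below.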
Therefore, the equilibrium properties of the system can be investigated by studying the Hamiltonian ℋ = - ∑_k=1^N G_k |a_k|^2 - Γ/2 ∑_FMC(k) a_k_1 a^*_k_2 a_k_3 a^*_k_4, with the spherical constraint ∑_k |a_k|^2 = E_0 = ϵ N. The model can then be studied as a statistical mechanical system in the canonical ensemble at equilibrium in a thermal bath at the effective temperature T_ photonic = T/ϵ^2 = 1/𝒫^2, where 𝒫 is the so-called pumping rate. This temperature accounts for the competition between the optical power injected into the system, which favors the ordering action of the interactions, and the noise, which acts in the opposite direction. It is worth stressing that the effective temperature T_ photonic reduces to the spectral power of the noise T, if one considers ϵ=1 (i.e. if one fixes the spherical constraint to a specific value of the total intensity). In Ref. <cit.>, the model has been solved in the narrow-bandwidth approximation, in which the typical linewidth of the modes γ is comparable to the total spectral bandwidth Δω. In this limit the FMC is always satisfied, by any quadruplet of modes, thus yielding a fully-connected graph of interactions. The mean-field analysis of the model reveals a first-order transition with respect to the value of 𝒫 between two phases, thermally disordered and ordered, characterized respectively by unlocked and locked phases of the mode amplitudes a_k. The former is the low-𝒫 (high temperature) phase corresponding to an incoherent output of the multimode laser (continuous wave - CW), the latter is the high-𝒫 (low temperature) phase corresponding to a coherent output, equivalent to pulses in the time domain (mode-locking - ML). The theory has found experimental confirmation in Ref. <cit.>. It is worth noting that in the narrow-bandwidth limit the modes are locked in a trivial way: almost all of them are aligned in the same direction in the complex plane, i.e. they have the same value of the phase. In the language of magnetic systems (the analogy here is with the ordered XY model), the locking of the phasors to the same angle leads to the presence of a global magnetization. Correspondingly, the output of the laser is approximately a plane wave, which is equivalent to sharp delta-like pulses in the time domain. Therefore, nontrivial spectra are not possible in this case, in the sense that they reduce to a single spectral line plus some noise. If one goes beyond the fully-connected case, a different kind of global ordering arises at high pumping 𝒫, which consists in the onset of phase waves, as discussed in the introduction to Chap. <ref>. In the general case, when the dispersive effects are included, the problem becomes hard to study analytically. However, numerical simulations show that the presence of these effects does not change qualitatively the physical scenario of a first-order transition, leading only to a lowering of the critical value of the temperature (<ref>) which drives the transition <cit.>. § MULTIMODE LASER THEORY IN OPEN AND DISORDERED MEDIA The previous description is based on the underlying quantum theory of a multimode laser in an ideally closed cavity, which was first developed in the semiclassical approximation by Lamb in Ref. <cit.> and then generalized to the fully quantum case by Scully and Lamb in Ref. <cit.>. In the case of random lasers, a complete quantum theory is still missing, given the difficulty of the problem.
Besides the disorder of the active medium, which makes the spatial dependence of the electromagnetic field not easy to compute, one more fundamental problem is how to deal with the openness of the system, when the leakages are non-perturbatively relevant. The quantization procedure in this case presents the typical technical issues of quantum systems with dissipation, which are non-Hermitian problems where the spectral theorem for self-adjoint operators does not apply, i.e. the standard decomposition in a unique complete set of orthogonal eigenvectors corresponding to real eigenvalues is not possible. This problem was already present in quantum optics, even before the theory of Lamb for multimode lasers was developed, since Fox and Li first studied the effect of diffraction losses in a cavity <cit.>. More recently, several relevant studies have been put forward to overcome the difficulties <cit.>, but apart from the exceptional case of a two-mode laser <cit.>, to the best of our knowledge, the problem remains open. For a comprehensive review on the topic see Ref. <cit.>. Among the approaches that have been proposed, we focus on one based on the standard system-and-bath decomposition (see e.g. Refs. <cit.>), which develops the clearest physical intuition and seems to be the most convenient one for the case of random lasers. The experimental observations discussed in the Introduction push towards the development of a theory of the electromagnetic field which accounts for both a discrete and a continuous part of the spectrum, the former comprised of modes which are confined inside the medium by multiple scattering, the latter of diffusive modes radiating from the medium. The system-and-bath approach developed in Refs. <cit.> is based on regarding the quantum subsystem composed of electromagnetic cavity modes as embedded in an environment of scattering states into which the states of the system can decay, i.e. the bath. In the following, we briefly sketch the main features of the approach and report the results. We refer to Ref. <cit.> for a more detailed exposition. §.§ System-and-Bath Decomposition The starting point of this approach is the expansion in modes-of-the-universe developed in Refs. <cit.>, where the electromagnetic field quantization is carried out in the presence of a 3-dimensional dielectric medium with spatially dependent permittivity ϵ(r) and without specifying boundary conditions. The electromagnetic field can be expressed in terms of its vector potential A(r,t) and of its scalar potential Φ(r,t). The Coulomb gauge (transverse gauge), generalized to the case of inhomogeneous media, is defined by the following relations Φ(r,t) = 0, ∇· [ϵ(r) A(r,t)] = 0, and allows one to write the electric and magnetic fields in the form E(r,t) = - 1/c Ȧ(r,t), B(r,t) = ∇×A(r,t), where the dot denotes the time derivative. The Hamiltonian of the system is given by ℋ = 1/2∫ dr {c^2/ϵ(r) Π^2(r,t) + [ ∇×A(r,t) ]^2 }, where Π = ϵ(r) Ȧ / c^2 is the conjugate momentum of the vector potential. The modes-of-the-universe are defined as solutions of the Helmholtz equation ∇× [∇×f_m(ω, r)] - ϵ(r) ω^2/c^2 f_m(ω, r) = 0, where the functions f_m(ω, r) are defined in all space and satisfy the transversality condition ∇· [ϵ(r) f_m] = 0. The index ω is a continuous frequency, but the formalism can be easily adapted to the case of a discrete spectrum by using a discrete index and replacing integrals with sums.
The discrete index m specifies the asymptotic boundary conditions far away from the dielectric, including the polarization. We consider asymptotic conditions corresponding to a scattering problem with incoming and outgoing waves. Then f_m(ω,r) represents a solution with an incoming wave in channel m and only outgoing waves in all other scattering channels. The definition of the channels depends on the problem at hand: for a dielectric coupled to free space, one may expand the asymptotic solutions in terms of angular momentum states. Then m corresponds to an angular momentum quantum number. On the other hand, for a dielectric connected to external waveguides, m may represent a transverse mode index <cit.>. Equation (<ref>) is the classical equation of motion for the field dynamics in the generalized Coulomb gauge, which can be obtained through a variational principle from the Lagrangian of the electromagnetic field. By defining ϕ_m (ω, r) = √(ϵ(r))f_m(ω, r), the equation can be cast into a well-defined eigenvalue problem for the Hermitian differential operator ℒ: ℒϕ_m (ω, r) = ω^2/c^2ϕ_m (ω, r), ℒ = 1/√(ϵ(r))∇× [∇×1/√(ϵ(r))], where the eigenmodes ϕ_m (ω, r) form a complete set in the subspace of L^2 functions defined by the transversality condition. The vector potential can be then expressed in terms of the eigenmodes as A(r,t) = c ∑_m ∫ω q_m(ω, t) ϕ_m(ω, r)/√(ϵ(r)), and a similar expression holds for its conjugated momentum Π with coefficients p_m(ω, t). Quantization can be obtained by promoting the coefficients of the expansion to operators and imposing canonical relations on them. This normal mode expansion is a consistent field quantization scheme in presence of inhomogeneous media, but does not provide any particular information about the field inside the medium. As showed in Ref. <cit.>, a separation into cavity (else termed resonator) and radiative (or channel) modes can be obtained by means of a Feshbach projection <cit.>. The eigenmodes of the total system can be projected onto orthogonal subspaces by the operators 𝒬 = ∫_r∈ Vr |r⟩⟨r |          𝒫 = ∫_r∉ Vr |r⟩⟨r |, where V is the region of the whole space where the dielectric is present. The eigenmodes ϕ_m(ω, r) can be then written as |ϕ⟩ = |μ⟩ + |ν⟩, where |μ⟩ = 𝒬 |ϕ⟩ and |ν⟩ = 𝒫 |ϕ⟩ represent respectively the projections on the cavity and radiative subspaces and ϕ(ω,r)=⟨r|ϕ⟩. Similarly the actual modes-of-the-universe f_m(ω, r) can be written as |f⟩ = |u⟩ + |v⟩, where |u⟩ and |v⟩ correspond respectively to |μ⟩ and |ν⟩. The cavity modes vanish outside V and, hence, form a discrete set labeled by a discrete index λ; vice versa the radiative modes vanish inside V and form a continuum, labeled by a continuous index ω and a discrete index m specifying boundary conditions at infinity. Each set of modes is a complete and orthonormal set in the subspace of definition, but as whole they can not be considered eigenmodes of the total system. The eigenvalue problem in Eq. (<ref>) can be rewritten in this formalism and solved with suitable matching conditions at the boundaries. The differential operator ℒ can be decomposed into resonator ℒ_, channel ℒ_𝒫𝒫 and coupling ℒ_𝒫,ℒ_𝒫 contributions, in such a way that [ ℒ_ ℒ_𝒫; ℒ_𝒫 ℒ_𝒫𝒫 ][ μ(r); ν(r) ] = (ω/c)^2 [ μ(r); ν(r) ] where μ(r)=⟨r | μ⟩ and equivalently for ν. 
The solution yields an exact representation of the eigenstates in terms of cavity and radiative modes |ϕ_m(ω)⟩ = ∑_λα_λ (ω) |μ_λ⟩ + ∑_m ∫ dω' β_m(ω,ω') |ν_m (ω')⟩, where μ_λ and ν_m(ω) are the solutions of the uncoupled problems for ℒ_𝒬𝒬 and ℒ_𝒫𝒫, while the coefficients α, β carry the dependence on the coupling operators ℒ_𝒬𝒫, ℒ_𝒫𝒬. The same decomposition holds for the wavefunctions f_m(r,ω) in terms of their projections u_λ and v_m(ω). The vector potential can be expanded in terms of cavity and radiative modes A(r,t) = c∑_λ Q_λ(t) u_λ(r) + c ∑_m ∫ dω Q_m(ω,t) v_m(r,ω), and similarly for the conjugate momentum Π, with coefficients P_λ(t) and P_m(ω,t) respectively for the discrete and continuous part of the spectrum. Quantization can be obtained as usual, by promoting the coefficients of the expansion to operators and imposing canonical commutation relations. Eventually, the field Hamiltonian takes the expected system-and-bath form, which in the rotating-wave approximation[The rotating-wave approximation <cit.> allows one to neglect fast oscillating terms in the Hamiltonian of an optical system. In the present case we only keep the resonant terms (a b^†,a^† b) in the system-and-bath coupling and neglect the nonresonant ones (a b,a^† b^†), which become relevant only when the frequencies of the modes are spread over a range comparable to their typical frequency. Then, if Δω is the width of the entire spectrum, the rotating-wave approximation holds as long as Δω≪ω, that is, the typical situation for random optically active materials.] reads as ℋ = ∑_λħω_λ a_λ^† a_λ + ∑_m ∫ dω ħω b_m^†(ω) b_m(ω) + ħ∑_λ∑_m ∫ dω [ W_λ m(ω) a_λ^† b_m(ω) + h.c.], where a^†_λ,a_λ and b^†_λ,b_λ are pairs of creation and annihilation operators respectively for the cavity and the radiative modes. The first two terms in ℋ account for the energy of the resonating system and of the radiative bath separately, while the third one accounts for the interaction energy of the system-and-bath coupling. The procedure followed allows one to obtain explicit expressions for the coupling matrix elements W_λ m(ω) = c^2/2 ħ√(ω_λω)⟨μ_λ | ℒ_𝒬𝒫 | ν_m(ω) ⟩. Here, however, consistently with the rotating-wave approximation, we consider the matrix elements W_λ m independent of frequency, at least over a sufficiently large band around the typical mode frequency <cit.>. This is also compatible with the Markovian limit, which is equivalent to assuming a time scale separation, so that the typical cavity mode lifetimes are much longer than the “bath correlation time” <cit.>. At this point, it is easy to find coupled dynamical equations for the operators a_λ and b_m(ω) in the Heisenberg representation. From the study of these equations one can find input-output relations based on the scattering matrix formalism, which are useful since the radiative states are the only ones experimentally accessible. However, here we are only interested in the cavity mode dynamics, which turns out to be given by the following Langevin equation d a_λ/d t = - i ω_λ a_λ (t) - ∑_λ'γ_λλ' a_λ'(t) + η_λ(t), where we have defined the coupling matrix γ_λλ' = π[ W W^†]_λλ' and the quantum noise operator η_λ(t) = -i ∑_m W_λ m∫ dω e^i ω (t-t_0) b_m(ω, t_0). Therefore, the system-and-bath separation leads to a quantum stochastic dynamical theory in the cavity modes subspace, where the effect of the external bath of radiative modes is included through a noise term. Moreover, an effective linear damping coupling mediated by the radiative modes, i.e. the matrix γ, acts on the cavity modes.
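As a purely illustrative sketch of the structure of this Langevin dynamics, one can treat the operators as c-numbers and integrate the equation with an Euler–Maruyama scheme; the parameter values, the variable names and, in particular, the noise normalization (taken here with covariance proportional to γ) are assumptions of this sketch and are not prescriptions from the references.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy parameters (illustrative only)
N = 4                                    # number of cavity modes
omega = np.linspace(0.9, 1.1, N)         # mode frequencies
W = 0.05 * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
gamma = np.pi * W @ W.conj().T           # damping matrix gamma = pi * W W^dagger

dt, n_steps = 1e-2, 10_000
a = np.ones(N, dtype=complex)            # initial mode amplitudes

# Cholesky factor used to draw complex noise with covariance ~ 2*gamma*dt
L = np.linalg.cholesky(gamma + 1e-12 * np.eye(N))

for _ in range(n_steps):
    xi = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2.0)
    eta = np.sqrt(2.0 * dt) * (L @ xi)   # correlated noise increment
    a = a + dt * (-1j * omega * a - gamma @ a) + eta
```

The only nontrivial step is the generation of noise that is correlated in mode space: drawing independent complex Gaussians and multiplying them by a Cholesky factor of γ reproduces the mode-space correlations of the damping matrix.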
The two main differences with respect to the closed cavity case are represented by: (i) the presence of non-diagonal elements γ_λλ' in the interactions; (ii) the fact that the noise is correlated in the mode space, as one can see from the relation ⟨η^†_λ(t_1) η_λ'(t_2) ⟩∝ 2 γ_λλ'δ(t_1-t_2) ≠δ_λλ'δ(t_1-t_2). §.§ Semiclassical Theory of Light-Matter Interaction So far, we have managed to deal with the openness of the system. However, in order to complete the theory for active media we have to bring light-matter interactions, which account for the gain, into the picture. The standard way to go beyond the cold-cavity modes[By cold-cavity modes we mean solutions of the Helmholtz equation obtained by neglecting scattering and nonlinear effects which could come from the interactions with the active medium. Eq. (<ref>), for instance, is written in terms of the cold cavity modes of an open resonator.] is by using the semiclassical Lamb theory and including the gain medium described as a collection of two-level atoms continuously pumped into the excited state. Let us denote by ρ(r) the atomic density and by ω_a the atomic transition frequency. If |g⟩ and |e⟩ denote respectively the ground and the excited states and E_e and E_g their energies, then ω_a = (E_e - E_g)/ħ. Only homogeneous broadening is considered, i.e. the Doppler effect is neglected in a first approximation. The evolution of the atom-field operators can be derived from the Jaynes-Cummings Hamiltonian <cit.>, plus the contribution of the damping term accounting for the openness of the system, and can be expressed in the Heisenberg representation by the following set of quantum stochastic nonlinear differential equations <cit.> ȧ_λ = - iω_λ a_λ - ∑_μγ_λμ a_μ + ∫ dr g_λ^† (r) σ_-(r) + η_λ σ̇_-(r) = - (γ_⊥ + iω_a)σ_-(r) + 2 ∑_μ g_μ(r) σ_z(r) a_μ + η_-(r) σ̇_z(r) = γ_∥ (S ρ(r) - σ_z(r)) - ∑_μ(g_μ^†(r) a_μ^†σ_-(r) + h.c.)+ η_z(r), where σ_-^† = |e⟩⟨ g| and σ_- = |g⟩⟨ e| are the atomic raising and lowering operators and σ_z= |e⟩⟨ e| - |g⟩⟨ g| is the inversion density operator. The terms γ_⊥ and γ_∥ are the polarization and population-inversion decay rates, while S is the pump intensity resulting from the interaction between atoms and external baths, which also gives rise to the noise terms η_-(r) and η_z(r). The noise term η_λ and the damping matrix γ_λμ are the terms previously shown to be induced by the external radiation field. In the electric dipole approximation <cit.> the atom-field couplings g_λ(r) are given by g_λ(r) = ω_a /√(2 ħϵ_0 ω_λ)p_eg·μ_λ(r), where p_eg=⟨ e | r | g ⟩ is the atomic dipole and μ_λ is the complete and orthonormal set of cavity modes previously introduced. A full quantum treatment of these equations would require the use of the density matrix formalism to trace over the atomic degrees of freedom, as done in Ref. <cit.> for the case of a two-mode laser. In general, this is not feasible and one resorts to the semiclassical approximation <cit.>, where the operators are downgraded to complex numbers corresponding to their expectation values and all the noise sources are neglected (and only later added back). Then, by considering laser media where the characteristic times of atomic pump and loss are much shorter than the lifetimes of the resonator modes, the atomic variables can be adiabatically removed, obtaining a set of nonlinear equations for the field modes alone. We will not enter into the details of the procedure, which is carefully described in Ref. <cit.>, but just sketch the main steps and the final results.
In order to eliminate the atomic variables, we resort to perturbation theory in the mode amplitudes. One can start by neglecting the quadratic term in Eq. (<ref>), obtaining the zeroth-order approximation, which, substituted into Eq. (<ref>), gives the first-order approximation, which substituted back into Eq. (<ref>) gives the second-order approximation, and so on. Once the expressions of σ_- and σ_z have been found at a given order of perturbation theory, by replacing them into Eq. (<ref>) one finally finds the dynamic equation for the modes alone. The perturbation series can be resummed, obtaining an expression which is valid at all orders in perturbation theory <cit.>, only in the special case of the free-running approximation, for which the lasing modes are considered to oscillate independently of each other. However, this would only be adequate for a theory of random lasing with non-resonant (incoherent) feedback, where the role of interference is neglected, as in the original work by Letokhov <cit.>. As already mentioned in the Introduction, after the observation of structured random laser spectra with sharp peaks (see, e.g., <cit.>), it is generally believed that phases do play an important role in the mode dynamics, determining a coherent lasing action. Therefore, we do not use the free-running approximation and limit ourselves to the third-order theory. In fact, we expect higher orders to become relevant only far from the lasing transition and, from the statistical mechanics point of view, not to change the universality class of the transition, see, e.g., Ref. <cit.>. In the third-order theory, the atom-field couplings driving the mode dynamics in the cold-cavity mode basis { a_λ} (with Greek letter indices) contain terms of the kind G_λ_1,λ_2^(2)∝∫ dr ρ(r) g_λ_1^*(r) g_λ_2(r) G_λ_1,λ_2,λ_3,λ_4^(4)∝∫ dr ρ(r) g_λ_1^*(r) g_λ_2(r) g_λ_3^*(r) g_λ_4(r). However, it is convenient to express the mode dynamics in the slow amplitude mode basis, which we have already defined in the previous section, see Eq. (<ref>). By denoting the slow amplitude modes with { a_k} (with Latin-letter indices), the following change of variables is performed a_λ = ∑_k A_λ k a_k, which affects all the quantities in the dynamic equation for the modes. The matrix A accounts for the fact that each lasing mode can be thought of as a single resonance given by the superposition of many cavity modes, whose fast oscillations can be averaged away. However, the decomposition of a slow amplitude mode in terms of cavity modes is by no means unique: we can use this freedom to choose a basis in which the noise is diagonal, simplifying the stochastic dynamics. Eventually, the resulting equation turns out to be the generalization to random lasers of the SLD Langevin master equation for standard multimode lasers, Eq. (<ref>), and reads d a_k_1/d t = ∑_k | FMC(k) g_k_1 k_2^(2) a_k_2 + ∑_k |FMC(k) g_k_1 k_2 k_3 k_4^(4) a_k_2a_k_3a_k_4 + η_k_1(t), where the couplings are g_k_1 k_2^(2) = S G_k_1,k_2^(2) - γ̃_k_1 k_2         g_k_1 k_2 k_3 k_4^(4) = 2 S G_k_1,k_2,k_3,k_4^(4), with γ̃_k_1 k_2= ∑_λμ A_λ k_1^-1γ_λμ A_μ k_2 G_k_1,k_2^(2)∝∫ dr ρ(r) g_k_1^L*(r) g_k_2^R(r) G_k_1,k_2,k_3,k_4^(4)∝∫ dr ρ(r) g_k_1^L*(r) g_k_2^R(r) g_k_3^L*(r) g_k_4^R(r), where g_k^L = ∑_μ (A^-1)^*_μ k g_μ and g_k^R = ∑_μ A_μ k g_μ, and the proportionality coefficients slightly depend on the frequency <cit.>.
Most importantly, in the slow amplitude basis, where by definition a_k(ω) ≃δ(ω-ω_k), the relevant terms in the dynamics are selected by the frequency matching condition FMC(k): | ω_k_1 - ω_k_2 + ⋯ + ω_k_2n-1 - ω_k_2n | ≲γ , which generalizes the selection rule in the case of a comb-like frequency distribution. This can be seen as an adiabatic conservation law coming from averaging over fast mode oscillations. At this stage, the same techniques developed by the SLD approach can be applied to the Langevin master equation (<ref>). In order to clearly separate the dissipative contributions from the dispersive ones, we can pass to the real and imaginary parts of the couplings. By defining G_k_1 k_2 = 1/2[g_k_1 k_2^(2) + (g_k_1 k_2^(2))^*]           i D_k_1 k_2 = 1/2[g_k_1 k_2^(2) - (g_k_1 k_2^(2))^*] Γ_k_1 k_2 k_3 k_4 = 1/2[g_k_1 k_2 k_3 k_4 ^(4) + (g_k_1 k_2 k_3 k_4 ^(4))^*]           i Δ_k_1 k_2 k_3 k_4 = 1/2[g_k_1 k_2 k_3 k_4 ^(4) - (g_k_1 k_2 k_3 k_4 ^(4))^*], the dynamical equation can be written as d a_k_1/d t = - ∂(ℋ_R + i ℋ_I)/∂a_k_1 + η_k_1(t), where we have defined ℋ_R = ∑_k | FMC(k) G_k_1 k_2a_k_1 a_k_2 + 1/2∑_k |FMC(k)Γ_k_1 k_2 k_3 k_4a_k_1 a_k_2a_k_3a_k_4 ℋ_I = ∑_k | FMC(k) D_k_1 k_2a_k_1 a_k_2 + 1/2∑_k |FMC(k)Δ_k_1 k_2 k_3 k_4a_k_1 a_k_2a_k_3a_k_4. By considering the purely dissipative limit, i.e. ℋ_I =0, and exploiting the fixed-power ensemble defined in <cit.>, i.e. imposing the spherical constraint to model gain saturation, one can prove that the dynamics converges to equilibrium <cit.>. As in the ordered case of standard multimode lasers, the more general situation, in which one retains the dispersive part of the dynamics, is not supposed to change the nature of the results that we are going to discuss in the next section. § THE GLASSY RANDOM LASER In the previous section we have shown that an effective statistical mechanics theory of random lasers can be justified, along the lines of the SLD approach to multimode ordered lasers. The specific features of the mode coupling interaction have been exposed: linear interactions have non-diagonal elements accounting for the damping effect due to the openness of the system, and a 4-body disordered coupling term emerges from the atom-field interaction in the semiclassical approximation. The mode dynamics is described in the slow amplitude basis, where a generalized FMC applies to both the 2-body and the 4-body term of interaction. However, the model defined by the Hamiltonian in Eq. (<ref>) is still very hard to address. The mean-field fully-connected solution obtained in Refs. <cit.> requires the following additional hypotheses: * extended modes: all modes have a spatial wavefunction extended all over the volume V, where the dielectric medium is present; * narrow bandwidth: the bandwidth of the entire spectrum Δω is comparable with the typical linewidth of the modes γ. The extended modes hypothesis guarantees that the only selection rule in mode coupling is the FMC, while the narrow bandwidth limit ensures that all the modes satisfy the FMC. Hence, the combination of these conditions leads to a model defined on a fully-connected graph of interactions, where each phasor interacts with all the others. Moreover, the mode self-interactions (representing the gain profile) are taken independent of k and set to zero, without loss of generality.
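Away from the narrow-bandwidth limit only a subset of quadruplets survives the FMC and the interaction graph becomes diluted. The following brute-force sketch counts the surviving quadruplets for a linear frequency comb; the choice of frequencies, the value of the linewidth, and the decision to check all inequivalent index pairings are illustrative assumptions, not prescriptions taken from the references.

```python
import itertools
import numpy as np

def fmc_quadruplets(omega, gamma):
    """Unordered quadruplets whose frequencies satisfy the four-mode FMC
    |w_a - w_b + w_c - w_d| <= gamma for at least one pairing of indices.
    Brute force, O(N^4): meant only to illustrate the selection rule."""
    quads = []
    for k1, k2, k3, k4 in itertools.combinations(range(len(omega)), 4):
        pairings = (
            omega[k1] - omega[k2] + omega[k3] - omega[k4],
            omega[k1] - omega[k3] + omega[k2] - omega[k4],
            omega[k1] - omega[k2] + omega[k4] - omega[k3],
        )
        if any(abs(p) <= gamma for p in pairings):
            quads.append((k1, k2, k3, k4))
    return quads

# linear comb of N modes with unit spacing and a linewidth of half a spacing
N = 16
omega = np.arange(N, dtype=float)
quads = fmc_quadruplets(omega, gamma=0.5)
print(len(quads), "FMC quadruplets out of", N * (N - 1) * (N - 2) * (N - 3) // 24)
```

For a comb the number of surviving quadruplets grows roughly as N^3 rather than N^4, which is the dilution referred to in the following chapters.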
Another important assumption regards the magnitudes and phases of the couplings, which are related to the spatial overlap among the modes. The computation of their values requires a precise knowledge of the spatial structure of the electromagnetic field, which is difficult to access in the presence of a disordered medium. Though difficult in practice, it is possible to accomplish the task, and, actually, it has been done in some simple cases <cit.>. However, the problem remains of computing the value of the couplings in the slow amplitude basis, which is the basis used to express the dynamics of the lasing modes. For the construction of the statistical mean-field model, it is then assumed that the couplings are independently drawn from a probability distribution. This is not true in general, because of the nature of the couplings: for instance, all couplings involving the same mode are correlated. However, these correlations matter only in finite dimensions, while in mean-field theory each coupling coefficient vanishes as N increases and the role of correlations is quantitatively negligible as long as the system displays enough modes. By considering all these assumptions together, the mean-field spin-glass model for random lasers is defined by the Hamiltonian ℋ[a] = - 1/2∑_i,j^1,N J_ija_ia_j - 1/4!∑_ijkl^1,N J_ijkla_i a_j a_k a_l, where the phasors a_k are subjected to the spherical constraint ∑_k=1^N |a_k|^2 = ϵ N and the coupling values[We recall that the couplings are real numbers, since we are considering the purely dissipative limit of the dynamics.] are independently extracted from the Gaussian probability distributions P(J_i_1,…,i_p) = 1/√(2 πσ_p^2)exp[- (J_i_1,…,i_p - J̃_0^(p))^2 / 2 σ_p^2]. In order to ensure the extensivity of the Hamiltonian, the average J̃_0^(p) and the variance σ_p^2 are taken as follows J̃_0^(p) = J_0^(p)/N^p-1         σ_p^2 = p! J_p^2/2 N^p-1, with J_0^(p) and J_p independent of N. As usual, the variance of the distributions accounts for the strength of the disorder, while their average, by inducing a bias in the extraction of the couplings, acts as an aligning coupling, which tends to induce a long-range ordering in the system at low temperature. To gain a physical intuition of the role played by the free parameters of the model, it is useful to express them in terms of photonic parameters: J_0^(2) = (1-α_0) J_0           J_0^(4) = α_0 J_0, J_2 = (1-α)J           J_4 = α J where J_0 and J respectively fix the cumulative strength of the ordered (the coupling average) and disordered (the coupling variance) contributions to the Hamiltonian, while α_0 and α fix the strength of nonlinearity in the ordered and disordered parts. Then, we introduce the degree of disorder R_J and the pumping rate 𝒫 as R_J = J/J_0         𝒫 = ϵ√(β J_0), where β is the inverse of the noise spectral power T. The definition of 𝒫, like in Eq. (<ref>), accounts for the equivalence of increasing the optical power per mode ϵ or decreasing the temperature of the heat bath. We refer to the model defined by the Hamiltonian (<ref>) as the spherical (2+4)-phasor model. This is the most general family of mean-field models that has been put forward to study the equilibrium properties of lasing systems, i.e. the properties of the steady state of a laser, expressed in terms of a thermodynamic equilibrium under the mapping discussed above.
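As a small numerical illustration of the coupling statistics just defined, the sketch below draws a single p-body coupling with mean J_0^(p)/N^(p-1) and variance p! J_p^2/(2 N^(p-1)); the function and argument names are ours and the numerical values are arbitrary.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def sample_coupling(p, N, J0_p, J_p):
    """Draw one p-body coupling J_{i_1...i_p} from a Gaussian with mean
    J0_p / N**(p-1) and variance p! * J_p**2 / (2 * N**(p-1)), the scaling
    that keeps the Hamiltonian extensive."""
    mean = J0_p / N ** (p - 1)
    var = math.factorial(p) * J_p**2 / (2.0 * N ** (p - 1))
    return rng.normal(loc=mean, scale=np.sqrt(var))

# e.g. a 4-body coupling for N = 64 modes, with J0 = 0 and J = 1
print(sample_coupling(p=4, N=64, J0_p=0.0, J_p=1.0))
```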
Indeed, by changing the values of J and J_0 one can tune the degree of disorder and adapt the model to the case of multimode laser with weak disorder or with no disorder at all, and, at the same time depending on α and α_0 one can tune the degree of nonlinearity and make the damping effect of the leakages more or less strong. In particular, by choosing J=0 and α_0=1 one finds back the Hamiltonian (<ref>) of the mean-field ordered model defined in <cit.>. Therefore, the model results in a comprehensive theory of multimode lasing phenomena. Before the spherical (2+4)-phasor model was considered, a simpler spin-glass model was proposed in Refs. <cit.> which does not take into account the amplitudes of the phasors. The model is a 4-body disordered XY model, defined by the Hamiltonian [ϕ] = - ∑_ijklJ_ijkl cos(ϕ_i -ϕ_j +ϕ_k - ϕ_l), where ϕ_k denotes the phase of the phasor a_k=A_k e^iϕ_k and J_ijkl are unbiased random couplings. Eq. (<ref>) can be recovered from the real part of the (2+4)-phasor Hamiltonian in the strong cavity limit, for which the damping coupling due to the openness can be neglected, and in the quenched amplitude approximation, for which the amplitudes are considered as fixed during the dynamics of the phases and are absorbed in the definition of the couplings. This model was the first mean-field statistical description of random lasers, which goes beyond the free-running approximation, by including the effect of interference. By means of the replica method it was shown for the first time that the competition for amplification in a multimode random optical system can lead to a behavior similar to that of a glass transition. The study was then completed by adding an average to the coupling distribution <cit.>, which extends the phase diagram of the model to a globally magnetized phase, and by computing the complexity of the glassy phase <cit.>. It is worth noting, that the (2+4)-phasor model we are considering can be regarded as a superposition of the XY (only phases) model of Eq. (<ref>), considered in <cit.>, and the real spherical (2+p)-spin (only magnitudes) model considered in <cit.>, for p=4. §.§ Quenched Disordered Systems In this section we briefly review the replica method, which lies at the heart of the solution of the mean-field model defined by Eq. (<ref>) and of the analytical part of this work. Consider a generic mean-field fully-connected spin-glass model with variables σ={σ_1,…,σ_N} and quenched disordered couplings J independently extracted from a probability distribution function P(J). The Hamiltonian _J[σ] may have pairwise interactions as in the case of the SK model <cit.> or nonlinear interactions as in the case of the p-spin model <cit.>. The variables may either take value in a limited, also multi-dimensional domain, in which case the model has N local constraints as in the case of Ising, XY or Heisenberg spins, or be continuous and subject to a global constraint of the kind ||σ||_ρ = N, for some choice of the norm. If ρ=2, the spherical constraint is recovered. The partition function of the model, which one aims to compute in order to study the equilibrium properties of the system, depends on disorder and is given by Z_J = ∫𝒟σ e^-β_J[σ], where we have used a shorthand notation for the sum over all the possible configurations of the variables compatible with the constraints. A fundamental quantity is the overlap q=1/Nσ·τ =1/N∑_i=1^N σ_iτ_i among two configurations σ and τ extracted from the Gibbs measure with the same Hamiltonian _J. 
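To make the notion of overlap concrete, the following toy sketch equilibrates two independent Monte Carlo chains (two configurations σ and τ) on the same small Ising instance with fixed random couplings and then measures q = σ·τ/N. The SK-like couplings, the system size and the number of sweeps are arbitrary illustrative choices and this is not the model studied in this Thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

def metropolis_chain(J, beta, n_sweeps, rng):
    """Single-spin-flip Metropolis sampling of an Ising system with pairwise
    couplings J; returns the final configuration."""
    N = J.shape[0]
    s = rng.choice([-1, 1], size=N)
    for _ in range(n_sweeps):
        for i in range(N):
            dE = 2.0 * s[i] * (J[i] @ s)      # energy cost of flipping spin i
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
    return s

# a small SK-like instance with fixed (quenched) couplings
N, beta = 32, 1.5
J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)

# two "real replicas": independent equilibrations with the same disorder
sigma = metropolis_chain(J, beta, n_sweeps=2000, rng=rng)
tau = metropolis_chain(J, beta, n_sweeps=2000, rng=rng)
print("overlap q =", sigma @ tau / N)
```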
We call P_J(q) the overlap probability distribution function for a given realization of the quenched disordered couplings J. The meaning of quenched disorder is that the coupling values, once extracted from P(J), remain fixed during the dynamics. Generally, this assumption is justified on the basis of a time scale separation between the dynamics of the system variables and the dynamics of the couplings, which evolve on a much larger time scale. This applies particularly well to the case of random lasers where the light mode amplitudes have a very fast dynamics when compared to changes in the displacements of the particles of the medium, which determine the time evolution of the couplings. The opposite case is the annealed one: when variables and couplings evolve on the same time scale, the disorder averages out leaving the system qualitatively equal to its ordered counterpart but for a rescaling of the free parameters. To perform an annealed average of the disorder, one just has to compute Z_J = ∫𝒟 J P(J) ∫𝒟σ e^-β_J[σ], from which one sees that in this case the disorder is just an additional thermodynamic degree of freedom, being at the same level of the σ. Once the partition function is averaged, no dependence on the J's remain. In principle, every macroscopic observable of a quenched disordered system measured at equilibrium depends on the particular realization of the disorder, leading to the idea of dealing with an ensemble of systems. However, observables whose fluctuations with respect to the J's decrease as 1/N^1/2 are expected to take the same value in the large-N limit irrespectively of the specific values of the J's. These quantities are called self-averaging and in most cases the free energy density is one of such kind. For self-averaging quantities it is sufficient to compute the average value over disorder to make comparisons with the experimental typical values measured on macroscopic samples. On the other hand, because of much stronger fluctuations, non-self-averaging quantities, such as the overlap distribution function P_J(q), do not lose their dependence on the disorder in the thermodynamic limit, so that their averaged value P(q)=P_J(q) is generally different from the typical P_J(q). We are interested in computing the quenched average f = - lim_N →∞1/β Nlog Z_J = - lim_N →∞1/β N∫𝒟 J P(J) log∫𝒟σ e^-β_J[σ]. In this case, the average over the couplings has to be taken after the sum over the configurations has been performed at fixed J. To avoid the problem of averaging the logarithm of a complicated function one can resort to the replica method, which is based on the following trick log x = lim_n → 0 x^n - 1/n, where x is a generic variable. Once applied to the partition function, the average reduces to f = - lim_N →∞1/β Nlog Z_J = - lim_N →∞lim_n → 01/β NZ_J^n - 1/n, where Z_J^n = ∫𝒟 J P(J) ∫∏_a=1^n 𝒟σ^a e^-β∑_a=1^n _J[σ^a]. The replica trick allows us to pass from the average of a function of the random partition function Z_J to the computation of the integer moments of the partition function distribution. This is more than just a simple algebraic trick: the n independent and identical copies of the system are of crucial importance for the study of the equilibrium properties. Once the average over disorder is carried out, a coupling among replicas is found, which naturally leads to the introduction of the global overlap matrices Q_ab = 1/Nσ^a ·σ^b = 1/N∑_i=1^N σ_i^aσ_j^b. 
In terms of these quantities (and possibly of other global parameters) the replicated partition function is such that the free energy reads f = - lim_N →∞lim_n → 01/β N[∫𝒟Q e^N S(Q) - 1]/n, which can be computed with the saddle-point method provided that the order of the limits is exchanged. This is the prescription of the so-called replica method, which leads to f = -lim_n → 01/β n S(Q_SP), where Q_SP is the saddle-point value of the matrix Q. In order to solve the saddle-point problem one may restrict the search for Q_SP to a specific matrix space, find self-consistency equations for the parameters and then check the solution a posteriori from the behavior of the thermodynamic potentials. The correct solution to this optimization problem is not always given by the intuitive replica symmetric (RS) ansatz, where the overlap matrix is parameterized by only one parameter q_0. Usually the RS ansatz describes the high temperature paramagnetic solution of the model, where the system is ergodic and there is only one pure state. If, however, an ergodicity-breaking transition takes place at a certain temperature to a phase where the Gibbs-Boltzmann measure breaks down into many pure states, the solution of the optimization problem can be captured by the more sophisticated Parisi replica symmetry breaking (RSB) scheme <cit.>, where the overlap matrix is parameterized by more than one number. The solution has a very deep significance for the physics of complexity, in terms of understanding the structure of the states in the low temperature phase of quenched disordered systems <cit.>. Though the mathematical foundations of the replica method have not been fully laid yet, it has been rigorously proved by Guerra <cit.> and Talagrand <cit.> that the Parisi RSB scheme provides the correct solution for the free energy of the SK model. Quenched disordered systems can exhibit ergodicity breaking transitions corresponding to different kinds of replica symmetry breaking. Some transitions can be described by a finite number k of steps of replica-symmetry breaking (kRSB), where the overlap matrix is parameterized by k+1 numbers q_0,q_1…,q_k, whereas others lead to a full replica symmetry breaking (FRSB) scheme, where the overlap matrix is not parameterized by a discrete set of numbers, but rather by a continuous function q(x) defined on the interval [0,1]. Transitions which require a 1RSB ansatz, where the overlap can only take the values q_0 and q_1, are usually discontinuous, with a jump in the order parameter at the transition point and, at the same time, with a thermodynamic anomaly in the susceptibilities. This kind of phenomenology is known as a Random First Order Transition (RFOT) and is the proxy of the glass transition in structural glasses <cit.>. The prototype for the RFOT is the spherical p-spin model. FRSB transitions are instead continuous and are the paradigm of the spin-glass transition in the context of magnetic systems, where the ergodicity-broken phase space is organized in a hierarchical way. In this case the prototype model is the SK model. Historically, a distinction between 1RSB and FRSB models was made: this distinction has gradually faded over time, as it was realized that richer and more varied situations exist. In the case of the Ising p-spin model, for instance, a “glass to spin-glass” transition has been found in Ref.
<cit.>, the so-called Gardner transition: by lowering the temperature the system undergoes first a transition from the paramagnetic RS phase to a 1RSB phase and, then, a transition to an FRSB phase. A similar scenario can be found in p-spin mixtures with spherical variables <cit.>, where also kRSB phases or hybrid 1-FRSB phases are possible. §.§ Replicated Partition Function In this section we present the solution of the spherical (2+4)-phasor model, by sketching the main steps of the mean-field replica computation and describing the most relevant results. The partition function of the model defined by the Hamiltonian (<ref>) with the spherical constraint (<ref>) is given by 𝒵 = ∫∏_k=1^N d a_k d a^*_k e^-βℋ[a]δ(ϵ N - ∑_k=1^N |a_k|^2 ), i.e. the sum over all the phasor configurations on the complex hypersphere of radius √(ϵ N). In the following, the mode amplitudes will be expressed either in terms of real and imaginary parts or in terms of modulus and phase as a_k = √(ϵ)(σ_k + i τ_k) = A_k e^iϕ_k. Notice that both the moduli A_k and the phases ϕ_k are dynamical variables, i.e. they actually depend on time. However, under the general assumption that the dynamics of lasing systems is so fast that they can be considered, at least partially (cf. Sec. <ref>), at equilibrium, we are interested in the equilibrium properties of the system, which can be studied through the analysis of the partition function (<ref>). A different choice of the origin of time is not supposed to change the equilibrium properties of the system. The average over disorder of the replicated partition function naturally leads to the introduction of the following global overlap matrices[Actually, the computation also requires the introduction of the overlap matrix T_αβ = 1/ϵ N∑_k=1^N Im[a_k^α a_k^β] = 2/N∑_k=1^N σ_k^ατ_k^β, which however can be set to zero without loss of generality, as a consequence of the symmetry of the Hamiltonian (<ref>) under a global phase rotation a → a e^iϕ <cit.>.] 𝒬_αβ = 1/ϵ N∑_k=1^N Re[a_k^α (a_k^β)^*] = 1/N∑_k=1^N (σ_k^ασ_k^β + τ_k^ατ_k^β) ℛ_αβ = 1/ϵ N∑_k=1^N Re[a_k^α a_k^β] = 1/N∑_k=1^N (σ_k^ασ_k^β - τ_k^ατ_k^β) and the coherence vector m^α = m_σ^α + i m_τ^α = 1/N√(2/ϵ)∑_k=1^N a_k^α, with m_σ^α = √(2)/N∑_k=1^N σ_k^α and m_τ^α = √(2)/N∑_k=1^N τ_k^α, which play the role of the order parameters of the model. In the following we will often refer to the parameter m as the magnetization, in analogy with the language of spin-glass models. It is useful to discuss the physical meaning of the quantities defined above in terms of their connection with the optical properties of the system. The diagonal elements of the overlap matrix 𝒬 encode the stationarity of the optical intensity, being fixed by the spherical constraint 𝒬_αα = 1/ϵ N∑_k=1^N A_k^2 = 1. The magnetization m and the diagonal part of the overlap matrix ℛ are directly connected to the coherence properties of the corresponding optical regime m = 1/N√(2/ϵ)∑_k=1^N A_k e^i ϕ_k,           ℛ_αα = 1/ϵ N∑_k=1^N A_k^2 cos(2 ϕ_k). In the photonic language, a globally magnetized phase corresponds to a regime in which all phasors point in the same direction in the complex plane, i.e. their phases are all equal. The off-diagonal terms of the overlap matrices can be written in terms of phases and magnitudes of modes in different replicas of the system, as 𝒬_αβ = 1/ϵ N∑_k=1^N A_k^α A_k^βcos(ϕ_k^α - ϕ_k^β) and ℛ_αβ = 1/ϵ N∑_k=1^N A_k^α A_k^βcos(ϕ_k^α + ϕ_k^β). The presence of more than one value in the off-diagonal part of the overlap matrices, i.e.
the breaking of replica symmetry, is as usual interpreted as the existence of a nontrivial structure of thermodynamic states. Eventually, the averaged replicated partition function reads as ^n = ∫𝒟Φ𝒟Φ̂exp{-N [ℬ(Φ,Φ̂) - log𝒵_eff(Φ̂)] }, where the shorthand notations for the set of the order parameters Φ={,,m} and for their Lagrange multipliers Φ̂={,,m̂} have been introduced. The functional ℬ in the previous expression reads ℬ(Φ,Φ̂) = - ξ_2/2∑_αβ^1,n (_αβ^2 + _αβ^2) - ξ_4/4∑_αβ^1,n (_αβ^4 + _αβ^4 + 4 _αβ^2 _αβ^2) - b_2 ∑_α=1^n [(m_σ^α)^2+(m_τ^α)^2] - b_4 ∑_α=1^n [(m_σ^α)^2+(m_τ^α)^2]^2 + ∑_αβ^1,n (_αβ_αβ + _αβ_αβ ) + ∑_α=1^n (m̂_σ^α m_σ^α + m̂_τ^α m_τ^α) and the local partition function, which contains the integration over the phasors, is given by 𝒵_eff(Φ̂) = ∫∏_α=1^n σ^ατ^αexp{∑_αβ^1,n[σ^α (_αβ + _αβ) σ^β + τ^α (_αβ - _αβ) τ^β] } ×exp{∑_α=1^n [m̂_σ^ασ^α + m̂_τ^ατ^α] }. In the previous expressions the following symbols for the external parameters have been introduced for convenience b_2 = ϵ/4β J_0^(2)          b_4 = ϵ^2/96β J_0^(4) ξ_2 = ϵ^2/4β^2 J_2^2          ξ_4 = ϵ^4/6β^2 J_4^2, where we notice that b_2 and b_4 vanish if one considers zero-mean probability distributions for the couplings. The explicit expression of these parameters in terms of the photonic quantities J,J_0,α,α_0,R_J and 𝒫 is reported in Ref. <cit.>. Moreover, we notice that the ratios b_2/b_4 and ξ_2/ξ_4, which are of crucial importance in determining the nature of the phases of the model, do not depend on temperature, but only on the ratios between the free parameters of the coupling distributions, i. e. of the linear and non-linear contributions. The local partition function 𝒵_eff can be computed by performing the multidimensional Gaussian integration in σ and τ, while the other Lagrange multipliers can be eliminated by exploiting their saddle-point expressions in terms of overlap matrices and magnetization. After the integration over all the auxiliary variables has been carried out, one is left with ^n = ∫∏_α < β^1,n_αβ∏_α≤β^1,n_αβ∏_α=1^n[ m_σ^α m_τ^α]  e^- N G[,,m_σ,m_τ], where the action functional G reads as - G[,,m_σ,m_τ] = 1/2∑_αβ^1,n g(_αβ, _αβ) + n k(m_σ,m_τ) + 1/2log (+) + 1/2log (-) - m_σ^2/2∑_ab^1,n (+)^-1_ab - m_τ^2/2∑_ab^1,n(-)^-1_ab, and the functions g and k are defined as follows g(x,y) = ξ_2 (x^2 + y^2) + ξ_4/2 (x^4+y^4+4x^2y^2) k(x,y) = b_2 (x^2 + y^2) + b_4 (x^2 + y^2)^2. In the following section we describe the results of the replica computation and present the phase diagram of the model. §.§ Phase Diagram of the Glassy Laser Transition The saddle-point method applied to Eq. (<ref>) leads to a set of stationary equations for the functional G, which can be solved with an appropriate ansatz on the structure of the matrices depending on the value of the external parameters. The precise expression of the saddle-point equations in the RS and RSB ansatzes can be found in Ref. <cit.>, where their solution is discussed in detail for all the various cases. Here, we just aim to describe the phenomenology of the model. 
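Since the phase diagram described next is parameterized by ξ_2, ξ_4 (and b_2, b_4), it may help to collect their definitions in a small helper. The sketch below reads the flattened expressions above as b_2 = (ϵ/4)β J_0^(2), b_4 = (ϵ^2/96)β J_0^(4), ξ_2 = (ϵ^2/4)β^2 J_2^2, ξ_4 = (ϵ^4/6)β^2 J_4^2; this reading, together with the function and argument names, is our own interpretation.

```python
import numpy as np

def external_parameters(beta, epsilon, J0_2, J0_4, J2, J4):
    """b_2, b_4, xi_2, xi_4 as functions of the inverse temperature beta,
    the energy per mode epsilon and the coupling parameters (our notation)."""
    b2 = (epsilon / 4.0) * beta * J0_2
    b4 = (epsilon**2 / 96.0) * beta * J0_4
    xi2 = (epsilon**2 / 4.0) * beta**2 * J2**2
    xi4 = (epsilon**4 / 6.0) * beta**2 * J4**2
    return b2, b4, xi2, xi4

# the ratios b2/b4 and xi2/xi4 are independent of beta, as stated in the text
p1 = external_parameters(beta=1.0, epsilon=1.0, J0_2=0.3, J0_4=0.7, J2=0.5, J4=0.5)
p2 = external_parameters(beta=3.0, epsilon=1.0, J0_2=0.3, J0_4=0.7, J2=0.5, J4=0.5)
assert np.isclose(p1[0] / p1[1], p2[0] / p2[1])
assert np.isclose(p1[2] / p1[3], p2[2] / p2[3])
```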
The phase diagram obtained by the solution of the saddle-point equations comprises four different phases, distinguished by the values of the order parameters 𝒬, ℛ and m: * Paramagnetic phase (PM): it is the RS solution with all the order parameters equal to zero (with the exception of 𝒬_αα=1); it corresponds to the Continuous Wave (CW), where all the modes oscillate incoherently; it is the only phase at high (low) enough temperature (pumping); * Spin-Glass phase (SG): it is the RSB phase with vanishing global magnetization m=0; it is characterized by the freezing of the modes in configurations where the coherence of oscillations is frustrated by the presence of a nontrivial structure of states; it corresponds to the Random Laser (RL); it is the only phase at low enough temperature if ξ_2 and ξ_4 are large enough with respect to b_2 and b_4 (the degree of disorder R_J is large enough); * Ferromagnetic phase (FM): it is the set of all the phases with nonzero magnetization, regardless of possible replica symmetry breaking; all the modes oscillate coherently with the same phase; it corresponds to the Standard Mode-Locking Laser (SML); it is the only phase at low enough temperature if b_2 and b_4 are large enough with respect to ξ_2 and ξ_4; * Asymmetric Paramagnetic phase (APM): it is the RS solution with the order parameters all vanishing except for the diagonal elements of the overlap matrix ℛ, so that there is a partial phase locking, without global magnetization, where the phases take different values but are locked; in the photonic language, we refer to this phase as Phase Locking Wave (PLW); it is an intermediate phase between the CW and the RL (or SML) phase, which exists only if ξ_4 ≠ 0. Both the FM and the SG phases are expected to present different kinds of replica symmetry breaking, depending on the values of the control parameters. In particular, one expects an FRSB structure if the 2-body term in the Hamiltonian dominates, while a 1RSB one if the 4-body interaction prevails. When the interactions have comparable magnitudes, an intermediate 1-FRSB phase is expected in analogy to the case of real spherical spins <cit.>. The three cases can be distinguished either by the ratio ξ_2/ξ_4, or, equivalently, by the photonic parameter α, which measures the strength of the nonlinearity in the disordered part of the interactions. On the other hand, the system chooses between the FM and the SG phases depending on the strength of b_2 and b_4, or equivalently on the value of the photonic parameter R_J. Before presenting the phase diagram of the model, let us give a general description of all the system phases. Let us first consider the case in which b_2 and b_4 are low enough. Then, by lowering the temperature from the PM phase, the system may either enter the APM phase or, only in the case when ξ_4=0, remain in the RS phase with a non-vanishing value of the overlap[The fact that for ξ_4=0 the solution is always replica symmetric is expected in analogy to the p=2 spherical model <cit.>. The RS solution with non-vanishing overlap is only marginally stable: the addition of an infinitesimal perturbation to the Hamiltonian (in this case represented by an arbitrarily small disordered nonlinearity) causes the solution to become unstable.].
From the APM phase, the system either enters the SG phase through an RFOT if ξ_2/ξ_4 is low enough, in which case the structure of the states is of the 1RSB kind, or it undergoes a continuous phase transition towards the SG phase with an FRSB structure in the opposite case, when ξ_2/ξ_4 is high enough. On the other hand, if b_2 and b_4 are high enough, by lowering the temperature from both the PM and the APM phase, a transition towards an RS-FM phase is obtained, which can be either continuous (for b_2/b_4 high enough) or discontinuous (vice versa). For intermediate values of b_2 and b_4 the system is in the FM phase with the same kinds of RSB as the SG phase, depending on the ξ_2/ξ_4 ratio. As already mentioned, non-zero coupling averages yield the alignment of the phasors, acting as an effective field. In Ref. <cit.> the model has been mapped into an equivalent model with zero averages and a suitable effective field, which turns out to be related to the magnetization in the following way h = 2 b_2 m + 4 b_4 m^2. A non-vanishing value of the field signals that the system is globally magnetized, and, hence, is in the FM (SML) phase. We stress that it is sufficient that b_2=b_4=0 for the field to vanish, but it is not necessary: if b_2 and b_4 are small enough compared to ξ_2 and ξ_4, then the field vanishes because m=0. Having developed the replica computation including the magnetization has the remarkable advantage of bridging with the ordered case. In this way, the theory describes general multimode laser phenomena, both standard and random, and can be adapted to intermediate situations such as weakly disordered systems. However, for the purpose of this work, we are mainly interested in the glassy phase of light: hence, in order to simplify the picture, we present the phase diagram of the model at zero effective field, where no trace of the FM phase is present. The complete phase diagram of the model has an additional axis accounting for positive values of the effective field and can be found in Refs. <cit.>. In Fig. <ref> the (ξ_4,ξ_2)-phase diagram is displayed. The transition lines are obtained through the study of the phase stability, which can be performed with the standard method, by looking at the vanishing of the replicon, i.e. the highest eigenvalue of the stability matrix <cit.>. Let us briefly describe the results summarized in the phase diagram. Starting from a value of ξ_2 < 0.3434 …, i.e. below the tricritical point in Fig. <ref>, by increasing ξ_4 one has the following scenario: below the red line the only stable solution is the PM one; on the red line the PM solution becomes unstable in favor of the emergence of the APM solution, which in turn becomes unstable on the blue line. The stability of both the PM and the APM phases is revealed by the vanishing replicon of the corresponding RS solution (λ_RS^PM=0 and λ_RS^APM=0). When the APM solution becomes unstable, a transition towards the 1RSB phase takes place: the first green line corresponds to the static transition at x=1, where x is the breaking parameter. At the transition, the usual mixed-order behavior of the RFOT is found: a jump in the order parameters 𝒬 and ℛ is present, but the internal energy remains continuous, a signature of no latent heat exchange. The study of the stability of the 1RSB solution reveals that, as anticipated, the 1RSB phase is not stable over the whole region of the parameters where the RS solution is unstable: the replicon of the 1RSB solution vanishes (λ_1RSB=0) on the black line of Fig. <ref>.
Ideally, by starting from a value of ξ_2 > 0.3434 … in the 1RSB phase and lowering the value of ξ_4, the expected “glass to spin-glass” transition takes place: first the system enters a mixed 1-FRSB phase and then the FRSB phase emerges (magenta line). §.§ The glassy state of light The replica-symmetry broken phase represents the amorphous state of light, in analogy to the low temperature behavior of glass-forming liquids predicted by mean-field theory. All the concepts coming from the mean-field theory of structural glasses are then predicted by this model for optical waves in disordered media. When the glass transition is approached from the RS phase, the system exhibits a critical slowing down and dynamical arrest on the transition line, as could be revealed by study of time correlation functions. The cause of this behavior is, as usual, the breaking of ergodicity in a number 𝒩, increasing exponentially with the system size, of degenerate metastable states, which dominate the dynamics, before the static transition is reached. The role of these states can be revealed by the study of the complexity[Notice that the complexity in disordered systems is the only intrinsically dynamical quantity that can be computed from the statics.], i.e. the configurational entropy Σ = N^-1log𝒩, which also allows to find the spinodal line of the transition, corresponding to the value of the parameters where the 1RSB states are dynamically accessible. The complexity decreases when passing from the 1RSB to the 1-FRSB phase, until it reaches zero on the magenta line of the continuous transition to the FRSB phase (see Fig. <ref>). What of this dynamical scenario may be actually observed in real random lasers is not so clear: as already mentioned, the dynamics of light modes is so fast that dynamical phenomena (like aging) connected to the presence of metastable states may be difficult to reveal. However, besides the presence of exponentially many metastable states, the theory predicts a static transition to a ergodicity broken phase with multiple equilibria, which most likely can be put in correspondence with the experimental observations. What can be stated is that the theory predicts that lasing in random media displays a glassy coherent behavior with the following properties: (i) the subset of modes which are activated and actually lase is randomly chosen from all the cavity modes and (ii) the set of activated modes behave coherently and belong to one out of many possible states. It is useful to visualize the phase diagram of the glass transition in the photonic parameters 𝒫 and α for a fixed value of the degree of disorder R_J (Fig. <ref>). Actually, in this case the complete phase diagram has an additional axis for R_J: the fieldless case is compatible with values of R_J > 1, i.e. J>J_0, where the phenomenology of the model is described in terms of CW, PLW and RL phases[In order to visualize the RL-SML transition one has to consider a (R_J,𝒫)-section of the complete phase diagram at fixed α. We recall that it is not necessary that J_0=0 (i.e. b_2=b_4=0) to be in the fieldless case: h is zero if m=0, which may happen also if J_0 is small compared to J.]. The diagram in figure is the same diagram presented before, but with respect to 𝒫 and α. In this case, we gain a clearer physical intuition about the behavior of the model. In particular, by fixing the strength of the nonlinearity (as it is in a real random laser), we can isolate the role of the pumping. 
If we choose a value of α to the left of the tricritical point in Fig. <ref>, for instance α=0.4 which corresponds to the dashed vertical line in figure, we see that starting from the CW phase, by increasing the value of the pumping, the laser first enters the PLW phase, where there is partial coherence, and, then, reaches the glassy coherent phase, by going through all the RSB phases: first FRSB, then 1-FRSB and, eventually, 1RSB. On the other hand, if we choose a value of α on the right of the tricritical point, after the intermediate PLW phase, the laser enters directly in the 1RSB phase, by crossing the green line. This last regime will be the working setting of the original part of this Thesis. §.§ Intensity Fluctuation Overlap This section is devoted to the introduction of the key observable which allows to connect spin-glass theory to experiments on RLs: the Intensity Fluctuation Overlap (IFO). We refer in particular to the experiments already mentioned in the Introduction, where activated modes are observed to change in spectra acquired at different times from the same sample <cit.>. From these observations it is not possible to extract the mode phases needed to compute the overlap matrices previously defined, see Eqs. (<ref>). In particular, the random lasing emission is generally not intense enough to successfully use techniques based on second-harmonic generation to reconstruct the phases of the modes <cit.>. However, from the statistical mechanics point of view, the phenomenology presented by the experiments strongly suggests that an ergodicity breaking transition controlled by the pumping rate is taking place in real random lasers. To compare the theory with the experiments it would be great to define a quantity, which is experimentally measurable and is related in some way to the order parameters defined in the mean-field analysis. In Ref. <cit.>, shot-to-shot intensity fluctuations have been interpreted in terms of an overlap between intensity fluctuations of two real replicas, i.e. replicas with the same quenched disorder. Provided that the sample is kept in the same experimental conditions for all the data acquisition time, then real replicas can be associated to the different shots, each one thermalized into different equilibrium states characterized by a specific spectral profile of activated modes and sharp peaks. Thermalization is guaranteed by the fact that during a single pulse of the external pumping, several stimulated emission phenomena take place for each mode frequency ensuring a long enough mode dynamics. To be precise, this would only be a partial thermalization, since for a disordered system in the ergodicity broken phase, a complete thermalization would require the system to visit all possible states. Given the development of the theory and the current interpretation of experiments, we cannot say how many equilibrium states a random laser actually visits for each spectral shot, but we can quite safely say that the system has reached the static transition predicted by mean-field theory. This is not only a consequence of fast light mode dynamics, but also of a number of modes which is small compared to usual thermodynamics degrees of freedom (e.g. ∼ 10^23) and of the fact that these modes are highly connected as in the dense interaction network of a mean-field model. 
Let us first define the fluctuation of the intensity I_k^α of the resonance at the frequency ω_k in a single spectrum α with respect to the spectral intensity at that frequency averaged over all N_s acquired spectra as Δ_k^α = I_k^α - 1/N_s∑_γ =1^N_sI_k^γ. Each spectrum represents the realization of a replica. The experimental IFO measured between two real replicas can be represented by the following matrix 𝒞_αβ^exp = ∑_k Δ_k^αΔ_k^β/√(∑_k (Δ_k^α)^2)√(∑_k (Δ_k^β)^2) defined in the interval [-1,1], where I_k^α denotes the intensity of the mode k in the spectrum corresponding to the replica (shot) α, with α= 1,…, N_s. Since thermalization is assumed, the experimental value I_k^α can be thought of as the equilibrium average of the intensity, i.e. I_k^α≡1/𝒯∫_t_0^t_0+𝒯 dt |a_k^α(t)|^2, where 𝒯 is a time interval corresponding to the random laser lifetime, slightly longer than the pumping pulse. The overlap is defined between intensity fluctuations rather than directly between intensities, in order to exclude the effect of amplified spontaneous emission on the measurements. Fluctuations are taken with respect to the intensity averaged over many different replicas. From the N_s measured spectra one can extract N_s(N_s-1)/2 values of the IFO and determine their distribution by building the histogram P(𝒞) = ∑_α < βδ(𝒞 - 𝒞_αβ). We report in Fig. <ref> the results obtained in Ref. <cit.>, concerning the measurement of the IFO distribution. At low pumping rate, P(𝒞) appears as a Gaussian-like distribution centered at 𝒞=0. Then, for increasing values of 𝒫, the distribution develops a nontrivial structure with three distinct peaks, one at 𝒞=0 and two symmetric side-peaks, and a continuous part between them. Eventually, P(𝒞) reduces to a double-peaked distribution for high values of 𝒫: in this last case, as the pumping is varied, 𝒞 can in principle take all possible values in the interval [-1,1], while for a given value of 𝒫 the position of the peaks is fixed. This kind of behavior resembles that of the Parisi overlap distribution function in the replica symmetry broken phase of the model. In order to build a precise correspondence, at least in the mean-field fully-connected model, one has to define a quantity, depending only on intensity fluctuations, that can be analytically related to the Parisi overlap. In Ref. <cit.>, the IFO is expressed (in absolute value) by the following matrix 𝒞_αβ = 1/8 ϵ^2 N∑_k=1^N [ ⟨ |a_k^α|^2 |a_k^β|^2 ⟩ - ⟨ |a_k^α|^2 ⟩⟨ |a_k^β|^2 ⟩] defined in [0,1], where the average is taken with respect to the Gibbs-Boltzmann measure of the spherical (2+4)-phasor model. Clearly this distribution depends on the realization of the disordered couplings J and has to be averaged over the disorder, P(𝒞) = P_J(𝒞). It is worth stressing that the definition of emission spectra in statistical mechanical models is only possible in a model which takes into account both phases and intensities, and hence the introduction of the IFO distribution could not be possible within the phase-only approach originally developed in Refs. <cit.>. While the role of the phases is essential for reproducing the phase transition phenomenology of mode-locking, the role of the intensities is of crucial importance for bridging with the experiments, where we do not have access to the phases.
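The experimental estimator 𝒞_αβ^exp and the histogram P(𝒞) defined above translate directly into a few lines of code. The following sketch, with synthetic uncorrelated spectra and array names of our own choosing, illustrates the pipeline from a set of spectral shots to the (normalized) overlap distribution.

```python
import numpy as np

def ifo_matrix(I):
    """Intensity-fluctuation overlaps from a set of spectra.
    I has shape (N_s, N): N_s spectral shots (replicas), N resonances."""
    delta = I - I.mean(axis=0, keepdims=True)      # Delta_k^alpha
    norms = np.sqrt((delta**2).sum(axis=1))        # per-shot normalization
    return (delta @ delta.T) / np.outer(norms, norms)

def overlap_histogram(C, bins=51):
    """Normalized histogram of the N_s(N_s-1)/2 off-diagonal overlaps, P(C)."""
    iu = np.triu_indices_from(C, k=1)
    return np.histogram(C[iu], bins=bins, range=(-1.0, 1.0), density=True)

# synthetic example: 100 shots, 64 resonances, uncorrelated fluctuations,
# which should give a single peak of P(C) around C = 0
rng = np.random.default_rng(3)
I = 1.0 + 0.1 * rng.normal(size=(100, 64))
hist, edges = overlap_histogram(ifo_matrix(I))
```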
The crucial result obtained in Ref. <cit.> is that the IFO matrix defined in Eq. (<ref>) can be expressed in terms of the overlap matrices 𝒬 and ℛ as 𝒞_αβ = 𝒬_αβ^2 - m^4/4          (α ≠ β) 𝒞_αα = (1+ℛ_αα^2)/2 - m^4/4, element by element, whatever the structure of 𝒬 and ℛ. This result reveals that if an RSB structure is present at the level of the configuration overlap, the same holds also for the IFO: in other words, the structure of the state organization is the same whether we look at the configurations of the modes or at their intensity spectra. In Fig. <ref> six different plots of the analytical IFO probability distribution are displayed for increasing values of the pumping rate 𝒫 along the dashed line at α=0.4 in Fig. <ref>, which goes through all kinds of RSB phases. At low pumping rate the distribution is a Dirac delta centered at zero, meaning that no correlations are present among the intensities and the modes are independent and non-interacting (CW phase); increasing the pumping, the mode coupling becomes relevant and, accordingly, the overlap distribution function is nontrivial, since the system enters a phase where the modes are highly frustrated by disorder. First, P(𝒞) develops a small continuous part around the central peak at 𝒞=0, denoting the typical continuous FRSB shape (second panel); then, besides the continuous part, also symmetric side Dirac deltas emerge, corresponding to the 1-FRSB phase (third panel). Eventually, the analytical P(𝒞) loses the continuous part and becomes a linear combination of Dirac deltas, which is the usual 1RSB structure. The resemblance with the experiments, though only qualitative, is quite remarkable. Despite this similarity, it should be noted that the correspondence between theory and experiments is still under construction and many criticisms, both on the theoretical and experimental sides, can be raised with the aim of improving it. The most obvious criticism is that, for now, no experimental data corresponding to different samples are yet available for averaging over the disorder. Due to the non-self-averaging nature of overlap probability distribution functions, the average over disorder is essential to observe the typical behavior of these observables and compare it with the theoretical predictions. Furthermore, a major problem lies in the fact that the emission of a random laser occurs in every direction, while the acquisition of spectra is not performed in the whole solid angle (at least not in Ref. <cit.>). This, coupled with spectral resolution issues, casts doubt on whether the detected spectra correspond to all the laser modes oscillating in the sample. Moreover, experimental data inevitably contain a contribution from the dynamical relaxation to equilibrium, which should be taken into account by the theory in order to improve the comparison. Clearly, the analytical results obtained for the IFO distribution in Ref. <cit.> and reported in Fig. <ref> are purely at equilibrium. An additional issue that has to be addressed on the theoretical side is going beyond the narrow bandwidth limit, which includes the FMC in a trivial way. This is precisely the main goal of this Thesis work. In particular, we aim to develop a numerical tool to simulate the model diluted with the FMC, both out of and at equilibrium. Hopefully, this will also provide useful insight into the diluted model, in view of its analytical solution. PART: Numerical Simulations CHAPTER: MIXED-ORDER GLASS TRANSITION IN RANDOM LASERS The main goal of this work is to go beyond the fully-connected solution of the spin-glass model for random lasers presented in the previous chapter.
By this, we mean to release the narrow bandwidth approximation and include in a nontrivial way the mode coupling selection induced by the FMC. Out of the narrow bandwidth approximation, the interaction network of the glassy random laser can no longer be considered fully-connected, but has to be diluted by removing all the bonds that do not match the condition on the frequencies. The FMC is of key importance for the study of mode-locking and for the reproduction of real random laser spectra: for this reason its inclusion is essential to bridge with the experiments. However, when the solution of the mode-locked model is approached analytically, subtle technical difficulties emerge, which require the development of new techniques with respect to standard mean-field methods for disordered systems. The analytical approach will be developed in the second part of this work. Here, we resort to numerical simulations to get useful insights on the mode-locked model. A first step towards the inclusion of the FMC has been taken on the ordered version of the model <cit.>. In this case, strong deviations from the fully-connected behavior have been put in evidence. In particular, the mode-locked low temperature (high pumping) phase exhibits lack of global order due to the onset of phase waves[Phase waves is an evocative term reminding of spin waves in pairwise spin models with O(2) global symmetry, such as the XY model. In these models, when the symmetry is spontaneously broken, the global magnetization is reduced by the onset of collective excitations analogous to the Goldstone bosons in quantum field theory. In dimensions d ≤ 2 this phenomenon leads to lack of global magnetic order, as stated by the Mermin-Wagner theorem <cit.>; however, in the special case of d=2, topological transitions of the Kosterlitz–Thouless type <cit.> may be allowed. In the case of multimode lasers, however, we are dealing with dense models, so the comparison can not be pushed too far.] (see also Ref. <cit.>) produced by the tendency to align of modes which are close in frequency, due to the FMC. In particular the phases ϕ_k of the modes are not all equal as in the fully-connected case, but satisfy a linear relation with their frequencies ω_k, which for a linear comb can be written as ϕ_k = ϕ_0 + k Δ, where Δ, approximately independent from k, is the slope of the phase wave and is a configuration-dependent quantity <cit.>. As a result, the coherency of the laser is not trivial as in the narrow bandwidth limit, where the output consists of a train of almost perfectly delta-like (unchirped) pulses, but there is a phase delay in the emitted pulses, which depends on the slope Δ, and hence on the configuration. Therefore, in the magnetic analogy, when considering the thermal average, the model has vanishing magnetization. This effect has not been found in a model with ordered couplings and a random dilution of the same order of the FMC: in this case, the physical properties of the system are coherent with the fully-connected solution. Similarly, phase waves are not expected in the presence of quenched disordered couplings. After that, equilibrium numerical simulations of the spin-glass mode-locked model have been performed in Ref. <cit.>, where evidence of a mixed-order phase transition has been found and put in connection with an equipartition-breaking transition at the same critical temperature. The common root of the two transitions can be traced back to the same underlying phenomenon: the breaking of ergodicity. 
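To make the notion of a phase wave more concrete, the following minimal sketch (in Python, purely illustrative and not part of the simulation code of this work) builds configurations ϕ_k = ϕ_0 + kΔ with a configuration-dependent slope Δ and shows that, while neighboring modes remain locked within each configuration, the magnetization averaged over configurations vanishes; the comb size and the distribution of Δ are arbitrary choices made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_conf = 256, 2000                     # illustrative comb size and number of configurations

mags = []
for _ in range(n_conf):
    phi0 = rng.uniform(0.0, 2.0 * np.pi)  # global phase of this configuration
    delta = rng.uniform(-np.pi, np.pi)    # configuration-dependent slope of the phase wave
    phi = phi0 + delta * np.arange(N)     # phase wave: phi_k = phi_0 + k * Delta
    mags.append(np.exp(1j * phi).mean())  # XY-like magnetization of this configuration

# neighboring phases are rigidly locked within one configuration ...
print("cos(phi_{k+1} - phi_k) in the last configuration:", np.cos(np.diff(phi)).mean())
# ... but the average over configurations has vanishing magnetization
print("|<m>| over configurations:", abs(np.mean(mags)))
```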
In the present chapter and in the following one, we focus on the mixed-order phase transition, while the analysis of the equipartition-breaking transition will be deepened in Chap. <ref>. With respect to the mean-field picture described in the previous chapter, we are particularly interested in checking how much of the RFOT scenario survives in the diluted mode-locked model, which is much closer to real random lasers than the fully-connected one. In the following, we first present the simulated spin-glass model, which is a slightly simplified version of the mode-locked spherical (2+4)-phasor model. Particular attention is devoted to the role played by the FMC in shaping the topology of the model interaction graph. The numerical technique is then explained in detail, by presenting the Exchange Monte Carlo algorithm implemented to shorten the thermalization time to equilibrium at low temperature. Moreover, the simulated model presents the additional problem of being defined on an interaction graph which, though diluted, is still very dense. To address this problem, parallel computing on graphics processing units has been adopted. The results pointing towards the presence of a static glass transition are collected and their problematic nature is discussed. In particular, the unexpected scaling of the critical region found in <cit.> motivates the need for a new campaign of numerical simulations of the model aimed at collecting data less affected by finite-size effects. § THE MODE-LOCKED 4-PHASOR MODEL The simulated model is described by the following Hamiltonian ℋ[a] = - ∑_k | FMC(k) J_k_1 k_2 k_3 k_4 a_k_1 a^*_k_2 a_k_3 a^*_k_4 + c.c. = - ∑_k | FMC(k) J_k_1 k_2 k_3 k_4 A_k_1 A_k_2 A_k_3 A_k_4cos(ϕ_k_1 - ϕ_k_2 + ϕ_k_3 - ϕ_k_4), where a={a_1,...,a_N} is an N-dimensional complex vector of electromagnetic field mode amplitudes. In the second expression, A_k and ϕ_k represent, respectively, the modulus and the phase of the mode amplitude a_k=A_k e^iϕ_k, and a factor 2 has been absorbed in the definition of the random couplings. Configurations are constrained to the complex hypersphere of radius √(ϵ N), where ϵ = ℰ/N measures the average optical power per mode available in the system. The quenched disordered coupling constants J_k = J_k_1 k_2 k_3 k_4 are independently drawn from a zero-mean Gaussian distribution P(J_k) = 1/√(2 πσ^2)exp{-J_k^2/(2 σ^2)}, with variance σ^2 = J_k^2 = 1/N^2, ensuring the extensivity of the energy. The scaling of the coupling distribution variance takes into account the dilution order of the interaction graph, which is determined by the condition FMC(k): |ω_k_1 - ω_k_2 + ω_k_3 - ω_k_4 | ≲γ, where ω_k are the frequencies of the modes and γ denotes their typical linewidth. We refer to this simplified version of the general (2+4)-phasor model discussed in the previous chapter as the mode-locked (ML) 4-phasor model. With respect to the general model, and in terms of the photonic parameters introduced in the previous chapter, see Eqs. (<ref>) and (<ref>), we are here working in the limits R_J →∞ and α=1. The motivation for considering this model is that we are mainly interested in the study of the non-linear term of the Hamiltonian defined in Eq. (<ref>), which is the most relevant one for reproducing the phenomenology of optical waves in disordered media near the lasing transition. Indeed, the behaviour of multimode optical systems in this regime is generally believed to be dominated by non-linear mode interactions, see e.g. Refs. <cit.>.
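As an illustration of the model just defined, the following minimal sketch (in Python; the names, the brute-force construction of the quadruplet list and the 0-based indexing are choices made only for this example, and it is not the CUDA code used in the actual simulations) evaluates the ML 4-phasor Hamiltonian for a configuration on the hypersphere, given one realization of the Gaussian couplings and assuming, for concreteness, equispaced frequencies so that the FMC reduces to a condition on the indices, as discussed below.

```python
import numpy as np

def comb_fmc_quadruplets(N):
    """All distinct tetrads of a linear comb, ordered so that k1 - k2 + k3 - k4 = 0."""
    return np.array([(s1, s2, s4, s3)
                     for s1 in range(N) for s2 in range(s1 + 1, N)
                     for s3 in range(s2 + 1, N) for s4 in range(s3 + 1, N)
                     if s1 + s4 == s2 + s3])

def hamiltonian(a, quads, J):
    """H[a] = - sum_FMC J_k a_{k1} a*_{k2} a_{k3} a*_{k4} + c.c."""
    k1, k2, k3, k4 = quads.T
    terms = J * a[k1] * np.conj(a[k2]) * a[k3] * np.conj(a[k4])
    return -2.0 * np.real(terms.sum())          # the "+ c.c." doubles the real part

rng = np.random.default_rng(1)
N = 32
quads = comb_fmc_quadruplets(N)
J = rng.normal(0.0, 1.0 / N, size=len(quads))   # zero mean, variance 1/N^2

# configuration on the hypersphere sum_k |a_k|^2 = eps*N with eps = 1
a = rng.normal(size=N) + 1j * rng.normal(size=N)
a *= np.sqrt(N / np.sum(np.abs(a) ** 2))
print("number of quadruplets:", len(quads), " H[a] =", hamiltonian(a, quads, J))
```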
The simplest choice for the frequency distribution is to consider a linear comb, as in the case of standard lasers, see Eq. (<ref>). In this case the FMC (<ref>) can be mapped into a relation among the indices of the interaction graph: |k_1 - k_2 + k_3 - k_4| = 0. More realistic dilution rules, based on random frequency distributions, will be considered in future works in order to improve the modeling of real random lasers. Besides being the simplest possible choice, the frequency comb distribution is compatible with the strong-cavity approximation <cit.>, which amounts to neglecting the off-diagonal elements of the linear interaction term in the Hamiltonian of the (2+4)-phasor model. In fact, the 2-body FMC, which generally reads |ω_k_1-ω_k_2| ≲γ, admits off-diagonal terms only for modes whose frequencies differ by less than the threshold fixed by γ. In principle, modes of this kind exist in random lasers <cit.>. However, in the case of a high-finesse linear comb, these modes are excluded: the condition for mode selection reduces to |k_1-k_2|=0, which leaves only the diagonal terms. Furthermore, by assuming a flat gain curve, i.e. by taking J_kk=g with constant g, the linear part of the interactions becomes an additive constant, which is only responsible for a shift of the energy and can be neglected. This is a consequence of the spherical constraint. The assumption of a flat gain curve is also compatible with the regime we aim to explore through numerical simulations: as shown in <cit.> for the case of standard multimode lasers, the inclusion of a more complex gain profile only affects the fluorescence regime, while the transition and the lasing regime are stable under perturbations of the gain. The effective distribution of the phasor configurations which will be sampled in numerical simulations is given by 𝒫[ a] ∝ e^-βℋ[a] δ( ϵ N - ∑_k=1^N |a_k|^2 ), where β is the inverse of the spectral power of the noise T. We notice that, by rescaling the variables as ã_k = a_k/√(ϵ), the new variables are constrained on a fixed hypersphere at the cost of introducing the effective inverse temperature β_ photonic = βϵ^2 = 𝒫^2, which corresponds to the photonic temperature introduced in Eq. (<ref>). In these rescaled variables the probability distribution of configurations reads 𝒫[ã] ∝ e^- 𝒫^2 ℋ[ã] δ(N - ∑_k=1^N |ã_k|^2 ), making explicit the role of the pumping rate (i.e. a parameter accounting for both noise and external pumping) as the true control parameter of the system. Now, 𝒫 can be tuned in numerical simulations either by varying the effective temperature T = β^-1 at fixed optical power ϵ, or by working at fixed temperature and varying the value of ϵ. Simulations are performed at ϵ=1 varying the temperature T, in order to have a clear correspondence with the literature on glassy systems, but results are often described in terms of the pumping rate 𝒫. One simply needs to remember that, since ϵ=1, the photonic temperature reduces to the spectral power of the noise T, and so 𝒫 = 1/√(T). §.§ Topological Properties In this section, we aim to provide some details about the topology of the interaction network of the ML 4-phasor model. A mode-locked graph can be defined in full generality as a hypergraph whose hyperedges are selected according to the FMC (<ref>). Equivalently, a mode-locked graph can also be defined as a factor graph, with fixed connectivity of the function nodes and connectivity of the variable nodes determined by the FMC.
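Before turning to the analytical estimate of the number of interactions, it may be useful to note that the comb FMC is easy to enumerate by brute force at small sizes; the following sketch (Python, with illustrative sizes) counts the distinct tetrads satisfying |k_1 - k_2 + k_3 - k_4| = 0 for some pairing of the indices, and its output can be compared with the estimate derived in the next paragraphs.

```python
from itertools import combinations

def count_comb_fmc_tetrads(N):
    """Count distinct tetrads k1 < k2 < k3 < k4 of a linear comb satisfying the FMC."""
    total, resonant = 0, 0
    for k1, k2, k3, k4 in combinations(range(1, N + 1), 4):
        total += 1
        # for sorted indices the comb condition can only be realized as k1 + k4 = k2 + k3
        if k1 + k4 == k2 + k3:
            resonant += 1
    return total, resonant

for N in (16, 32, 64):
    total, resonant = count_comb_fmc_tetrads(N)
    print(N, resonant, total, round(resonant / total, 4))
```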
The following treatment is restricted to the case of interest, which is characterized by comb-like frequencies and 4-body interactions, but it can be extended to more general situations. An interesting quantity to compute is the total number of hyperedges that are left by the FMC in the interaction graph of the ML 4-phasor model, with respect to the fully-connected case. Let us denote by N^(f)_4 the number of tetrads in the fully-connected graph, which is given by N^(f)_4 = N(N-1)(N-2)(N-3)/4!∼N^4/4!, with N ≫ 1. We notice that, even if the adjacency tensor defined by the FMC with equispaced frequencies is not completely symmetric under permutations of the indices, each term entering the Hamiltonian (<ref>) has some symmetry. The condition (<ref>) can be satisfied by 24 permutations of the indices, which can be grouped into 3 independent orderings with 8 equivalent permutations each. Given a tetrad of indices k ={k_1,k_2,k_3,k_4}, the 3 non-equivalent orderings in Eq. (<ref>) can be chosen to be * FMC_1. 𝒫_1 = (k_1,k_2,k_3,k_4), identifying the combination k_1-k_2+k_3-k_4=0 and all equivalent index permutations; * FMC_2. 𝒫_2=(k_1,k_3,k_2,k_4), identifying k_1-k_3+k_2-k_4=0 and all equivalent index permutations; * FMC_3. 𝒫_3 = (k_2,k_1,k_3,k_4), identifying k_2-k_1+k_3-k_4=0 and all equivalent index permutations. All the permutations inside each of the 3 groups correspond to terms of the Hamiltonian (<ref>) which have the same value of the energy. Consider for example the first ordering: FMC_1: ω_k_1 + ω_k_3= ω_k_2 + ω_k_4→ k_1 + k_3 = k_2 + k_4, where we have used Eq. (<ref>). Following Ref. <cit.>, we consider uniformly distributed indices, i.e. P(k)=1/N for k ∈ [1,N]. In this case, the probability for the sum of two indices k_ij^+=k_i+k_j to take the value k^+ ∈ [2,2N] can be easily determined: P_+(k^+) = (k^+ - 1)/N^2 for k^+∈ [2,N+1] and P_+(k^+) = (2N - k^+ + 1)/N^2 for k^+∈ [N+2,2N]. We can now evaluate the probability that the quadruplet k satisfies the condition FMC_1 as the probability that the left and right hand sides of Eq. (<ref>) take the same value k^+: P(FMC_1) = ∑_k^+=2^2N P_+(k^+)^2 = (1+2N^2)/(3N^3) ∼2/(3N), where the last relation holds in the large-N limit. The same occurs for the other independent orderings, FMC_2 and FMC_3. Since the three conditions are mutually exclusive at leading order, the probability that the FMC is satisfied by any of the orderings is three times as large, and the factor 1/3 cancels out. Eventually, the number of couplings in the interaction network of the ML 4-phasor model is N_4^* = 2/N[1 + 𝒪( 1/N) ] N_4^(f). Hence, the FMC cuts the number of couplings of the fully-connected graph by a factor of order N, reducing the total number of couplings in the system to 𝒪(N^3). This prediction has been verified numerically with great accuracy. It is worth stressing that, though diluted, the graph is still dense. The FMC is a deterministic selection rule, which induces non-trivial correlations among the interacting modes. To gain insight into the kind of correlations, we consider an analogy with random networks, following Ref. <cit.>. A way to build random but correlated networks is to introduce a distance among the nodes: in the case of a random graph, e.g. an Erdös-Rényi graph, the distance can be chosen as the absolute value of the difference of the node indices, d_k_1 k_2 = |k_1 - k_2|. Then, one can select bonds according to a probability that depends on that distance. In the case of the ML 4-phasor model, which has factor nodes of connectivity 4, one needs a 4-index metric in order to select the interacting quadruplets.
This metric can be taken as d_k= |k_1 -k_2 +k_3 -k_4|: the FMC with equispaced frequencies is therefore equivalent to including only the quadruplets with the minimum value, d_k=0. In this way, the mode frequencies are not degrees of freedom, but coordinates driving the correlations, which play the role of a distance on the graph. It should be stressed again that in the present case the mode coupling is deterministic and not random: no probability is associated with the distance. As a result, modes that are in the center of the spectrum are preferred for combinatorial reasons. Indeed, the central modes have a higher probability of belonging to quadruplets which are close in the sense of the distance d_k. This is the reason for the narrowing of the intensity spectrum observed through the lasing transition, see e.g. Fig. <ref>, when the external pumping is increased (or, equivalently, the temperature is reduced). §.§ Generation of a Mode-Locked Graph Let us here describe in detail how the FMC is implemented in our code in order to build the interaction network of the ML 4-phasor model. First, a virtual complete graph with N_4^(f)= N(N-1)(N-2)(N-3)/4! ∼𝒪(N^4) interactions is generated, with ordered quadruplets of indices k_1<k_2<k_3<k_4. Then the FMC is applied to the complete graph. Notice that for each ordered quadruplet of indices, the FMC can be satisfied only by the permutation class 𝒫_3(k_1,k_2,k_3,k_4). Each time a quadruplet of indices matches this condition, the corresponding interaction is added to the real graph and a random value extracted from the Gaussian distribution Eq. (<ref>) is assigned to it. This procedure is repeated by randomly picking a quadruplet from the complete graph until a preassigned number N_4 ∼𝒪(N^3) of interactions for the ML graph is reached. In order to be able to perform a neat finite-size scaling analysis, the number N_4 is chosen to be the largest power of 2 below the total number of couplings satisfying the FMC, which is given by the quantity N_4^* computed in Eq. (<ref>). In practice, the number N_4 is chosen first, and then the corresponding size N to be simulated is selected in order to minimize the difference Δ N = N_4-N_4^*. We notice that this way of building the mode-locked interaction network introduces an artificial source of disorder in the model, besides the original one. Each of the N_s simulated disordered samples is characterized by a realization of the couplings {J_k} that differs from the others both in the quadruplet network and in the numerical values. However, the fluctuations of the observables with respect to the randomness of the quadruplet topology turn out to be much smaller than the fluctuations due to the numerical values of the couplings, already for the smallest simulated sizes. In fact, as has been observed also for the ordered mode-locked graph <cit.>, when compared on a logarithmic scale the energy fluctuations occurring during the equilibrium dynamics (i.e. the ensemble fluctuations) are at least two orders of magnitude larger than the graph-to-graph fluctuations, which are therefore negligible for all practical purposes. § NUMERICAL ANALYSIS This section is devoted to the details of the Monte Carlo algorithm implemented for the simulation of the model. §.§ Exchange Monte Carlo Algorithm The numerical simulations of the ML 4-phasor model have been performed by means of an Exchange Monte Carlo algorithm parallelized on GPUs [Graphics Processing Units.
The code, written in CUDA, has been running on three types of GPU: Nvidia GTX680 (1536 cores), Nvidia Tesla K20 (2496 cores) and Nvidia Tesla V100 (5120 cores).] in order to sample the equilibrium probability distribution Eq. (<ref>). In this section, we briefly describe the most salient features of the method, by following Ref. <cit.>; then, we provide a few details on our implementation for the simulation of the ML 4-phasor model. The Exchange Monte Carlo method, else called Parallel Tempering (PT), was introduced by Hukushima and Nemoto in Ref. <cit.> as a variation of simulated tempering <cit.>. In fact, PT is the most simple and general form of simulated tempering, which has been proposed as a finite-temperature generalization of the famous simulated annealing <cit.>. All these algorithms have been developed in order to cope with complex optimization problems, characterized by the presence of many minima of the cost function, where usual Monte Carlo algorithms are not feasible. In particular, PT has been widely used for equilibrium simulations of finite-dimensional spin glasses, see e.g. Refs. <cit.>, which are known to be “hardly-relaxing” systems. Indeed, for glassy systems standard iterative algorithms tend to get stuck in small regions of the state space from which they cannot escape, facing the so called critical slowing down. One explanation for this phenomenon is that each state in the Markov chain of a Monte Carlo algorithm is chosen from the previous one and is in some sense close to it. Therefore, starting from a certain initial configuration, there are states that can be reached with a small number of moves, while there are other states, farther in the configuration space, which can only be reached in a large number of moves. The state space of glassy models contains many stable and metastable states, as it can be revealed by several techniques [Metastable states can be revealed by directly studying the model dynamics, typically in the Martin-Siggia-Rose functional integral formalism (developed by De Dominicis and Janssen for disorderd models, see e.g. Ref. <cit.>), but also by the study of the complexity or by an analysis based on the Thouless-Anderson-Palmer (TAP) equations <cit.>. The prototype case is that of the spherical p-spin model, for which the dynamical equations have been closed and solved in Ref. <cit.> and the TAP approach has been developed in Refs. <cit.>. Moreover, the structure of the metastable states has been carefully analyzed by means of the Franz-Parisi potential in Refs. <cit.>, while their basins of attraction have been studied in Ref. <cit.>.]. It turns out that the metastable states have a relatively low energy with respect to the states by which they are surrounded. If a simulated system is initialized in a configuration close to a metastable state (or directly in a stable state), to escape its basin of attraction, the algorithm must pass through one of the surrounding states with higher energy, an occurrence that has an exponentially low probability, since configurations are sampled with Boltzmann weights, which depend on the energy difference. It has to be pointed out that, of course, multiple states are also present in much simpler systems, such as the standard Ising model below the critical temperature. In this case there are only two low temperature states in which the Gibbs measure breaks down when the system size is sent to infinity: a positively magnetized state and a negatively magnetized one. 
In fact, to pass from one state to the other, the system configuration has to jump an energy barrier whose height scales exponentially with the size of the system. This event has a probability that is exponentially small, similarly to the corresponding case in spin-glass models. However, in this case the low temperature states are symmetric under spin reversal transformation and one gets the same information on the measured properties of the system, no matter whether the initial configuration is chosen close to a state or to the other. Conversely, in glassy models the states are usually not related by any symmetry, so it happens that for different simulations the algorithm gets stuck in different basins each time depending on the initial condition, giving completely different answers for the observables. This can be regarded as a finite-size evidence of ergodicity breaking. To avoid this situation, PT has been developed based on the idea that system thermalization may be facilitated by a reversible Markovian dynamics of configurations among heat baths at close temperatures. In particular, configurations belonging to copies of the system at higher temperature may help the copies at lower temperature to jump out of the local minima of the rugged free energy landscape. While the dynamics is carried out in parallel for all the heat baths simulated, once after a fixed number of steps an exchange of configurations between baths at neighboring temperatures is proposed. We refer to this kind of move as swap, to distinguish it from the usual Monte Carlo step. A swap is proposed sequentially for all pairs of neighboring inverse temperatures β_i and β_i+1, with the following acceptance probability implementing detailed balance with the equilibrium Boltzmann distribution for each thermal bath: p_swap = min [1 , e^(β_i - β_i+1)( ℋ[a_i] - ℋ[a_i+1])]. Thus, each state in each of the simulations is sampled with exactly its Boltzmann weight, so that in PT simulations measurements can be performed in the same way as in a usual Monte Carlo simulation. We will come back on the measurement process in the following. It should be noted that a key role is played by the time after which a move is proposed. This time has to be large enough in order to avoid exchanges among configurations that are very similar to those just exchanged, and, on the other hand, not too large, otherwise thermalization will require a very long time. In one word, one needs the time between two subsequent swaps to be the smallest possible in order to make the most of a PT algorithm. §.§ The choice of temperatures The reason why the PT method overcomes energy barriers is strictly related to the choice of the simulation temperatures. To give a more intuitive understanding of the situation, we go through the following argument, taken from Ref. <cit.>. Let us focus of two copies of the system at temperatures T_1<T_2, one below and the other above the glass transition of the system. Suppose that the two copies start from configurations which belong to the same energy basin. Since the high temperature simulation do not show ergodicity breaking, it will freely explore the phase space on a time scale similar to that of a simulation of a normal, non-glassy system. On the contrary, the low temperature copy of the system will remain stuck in the initial energy basin. If one attempts to swap the states of the two simulations, Eq. 
(<ref>) says that, if the swap increases the energy of the low temperature simulation by a great deal, it is unlikely to be accepted. However, from time to time, the system at T_2 will find its way into a region of low energy, that is, into a basin different from the initial one where the simulation at T_1 is stuck. When this happens, a swap will quite likely be accepted. Thus the low temperature copy is transported in one move to another energy basin, and the high temperature one finds itself back in the basin that it started in. By repeating the process over a long time, the low temperature simulation is moved repeatedly to new energy basins. Thus, the PT algorithm effectively overcomes the problem of barrier crossing, which makes the simulation of glassy systems so hard, and allows us to sample a significant fraction of the state space, while still sampling with the correct Boltzmann weights for a temperature below the glass transition <cit.>. In view of this argument, before running the simulations one must have an approximate knowledge of the critical temperature of the model, in order to properly establish the temperature interval, i.e. define a β_min and a β_max such that the critical temperature falls inside the interval. Moreover, one has to bear in mind that the temperatures should be close enough that the typical configuration domains at nearby temperatures overlap. If this does not occur, the energy distributions at some nearby heat baths might display no significant overlap, thus yielding an extremely low probability of a swap between them. If a critical point is present, this is likely to occur when one heat bath is at a temperature above the critical one and the other at a temperature below it. In this case there will be a drastic drop in the exchange frequency (swapping rate) between these two temperatures, above and below the phase transition, making the algorithm extremely inefficient. One essential criterion to decide whether the algorithm is working efficiently, regarding both how often a swap move is proposed and the choice of temperatures, is to compute the swapping rate between adjacent temperatures, that is, the fraction of accepted swaps. The algorithm works efficiently only if the swapping rate is not too small for all pairs of temperatures. An optimal value lies between 0.6 and 0.8, and this interval has been taken as a reference in this work. §.§ Details of the Algorithm In what follows, the specific PT algorithm designed for the simulation of the ML 4-phasor model is described. The first step is to build the mode-locked interaction graph according to the procedure described before. Then a PT dynamics is run for copies of the system with the same quenched disorder at different temperatures. Each of the system copies follows its own dynamics, except when a swap is accepted between neighboring heat baths. In the following, we focus on the main features of the algorithm. §.§.§ Local update The algorithm uses local Metropolis updates for the dynamics of each PT copy of the system. A configuration update has to be proposed with the requirement of keeping ∑_k=1^N | a_k |^2 = const. In order to fulfill the constraint, each update of the configuration is carried out by choosing at random two variables a_i = A_i e^iϕ_i and a_j=A_je^iϕ_j and extracting three random numbers <cit.>: x,y ∈ [0,2π] and z ∈ [0, π/2].
The first two correspond to the new (attempted) phases x=ϕ_i' and y=ϕ_j', while the third one mixes the intensities of the two selected modes, preserving the spherical constraint: A_i' = √(A_i^2 + A_j^2)cos z      A_j' = √(A_i^2 + A_j^2)sin z Then the attempted update is accepted according to the usual Metropolis formula, in order to implement detailed balance. §.§.§ Parallel computation It is worth noticing that in the case of the ML 4-phasor model the updates have to be performed sequentially, rather than in parallel. In fact, the parallel Monte Carlo update of a system of interacting variables requires a sparse interaction network, such as nearest neighbors, in order to split the system into smaller non-interacting sub-systems that can be updated in parallel. This procedure clearly speeds up the computation of the update. In our system, however, this is not possible due to the density of the mode-locked interaction network, in which each variable participates in O(N^2) interacting quadruplets. However, another kind of code optimization can be implemented for the ML 4-phasor model, by exploiting the computing capability of GPUs: the parallelization of the energy computation in the local Metropolis update. In order to accept or reject the update of two spins a_i and a_j, one has to compute the energy difference between the attempted configuration and the current one. This operation has a computational complexity which scales like the number of quadruplets involved in the computation, i.e. N_4^(i,j) = O(N^2): Δ E = ∑_k=1^N_4^(i,j)Δ E_k, where Δ E_k denotes the energy difference of the k-th quadruplet. The computation of each Δ E_k is realized in parallel on a distinct GPU kernel. Last, but not least, the PT dynamics at the different temperatures has also been parallelized on GPUs. The two kinds of parallelization considered together reduce the execution time of the entire simulation by a factor of 8 <cit.>. §.§ Observables of Interest Before listing the observables considered, let us clarify the measurement procedure. In order to properly estimate statistical errors, time correlations have been taken into account. A correlation time τ_ corr can be identified as the maximum among the correlation times of all the thermal bath dynamics. Consequently, the observables can be measured every τ_ corr Monte Carlo steps. If N_ MCS is the total number of Monte Carlo steps of the simulation, for each disordered sample the number of thermalized, uncorrelated configurations is given by 𝒩≡ (N_ MCS-τ_ eq)/τ_ corr, where τ_eq denotes the thermalization time of the replica at the lowest temperature. We will come back shortly to how the thermalization time is defined. Then, for a given observable O, function of the configuration a, the ensemble average is estimated by the following time average ⟨ O[a] ⟩ = 1/𝒩∑_t=τ_ eq/τ_ corr^N_ MCS/τ_ corr O[a_t]. On top of that, the average over the quenched randomness of the couplings is defined as follows. For each realization of the {J_k} we have a thermal average ⟨ O[a] ⟩_J. Averaging over the random samples yields the least fluctuating finite-N proxy for the average in the thermodynamic limit: O[a] = 1/N_s∑_j=1^N_s⟨ O[a] ⟩_J^(j).
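As an illustration of the moves described above, the following sketch (Python, with illustrative names; it is not the parallel CUDA implementation used in this work) performs the constraint-preserving two-mode update with Metropolis acceptance and the swap move of Eq. (<ref>), assuming that the list of inverse temperatures and an energy function, e.g. the Hamiltonian sketched earlier, are given.

```python
import numpy as np

rng = np.random.default_rng(2)

def pair_update(a, energy, beta):
    """Metropolis update of two modes preserving sum_k |a_k|^2."""
    i, j = rng.choice(len(a), size=2, replace=False)
    x, y = rng.uniform(0.0, 2.0 * np.pi, size=2)       # attempted phases
    z = rng.uniform(0.0, np.pi / 2.0)                   # intensity-mixing angle
    R = np.sqrt(np.abs(a[i]) ** 2 + np.abs(a[j]) ** 2)
    a_new = a.copy()
    a_new[i] = R * np.cos(z) * np.exp(1j * x)           # A_i' = R cos z, phi_i' = x
    a_new[j] = R * np.sin(z) * np.exp(1j * y)           # A_j' = R sin z, phi_j' = y
    # here the full energy is recomputed; in practice only the O(N^2)
    # quadruplets containing i or j contribute to the difference
    dE = energy(a_new) - energy(a)
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        return a_new
    return a

def swap_attempt(configs, energies, betas, i):
    """PT swap between the neighboring heat baths i and i+1."""
    arg = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
    if arg >= 0.0 or rng.random() < np.exp(arg):
        configs[i], configs[i + 1] = configs[i + 1], configs[i]
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
        return True
    return False
```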
The observables that will be considered in the present chapter and in the next one are the following: * intensity spectrum: normalized to the square root of the temperature, in order to connect with the physical intensities, I_k = A_k^2/√(T); * specific heat: measured through the equilibrium energy fluctuations as c_V_N = (⟨ℋ^2 ⟩ - ⟨ℋ⟩^2)/(N T^2); * Parisi overlap distribution, P(q), where for the ML 4-phasor model the overlap among configurations (see Eq. (<ref>)) is given by q_αβ = 1/N∑_k=1^N Re[ a_k^α (a_k^β)^* ] = 1/N∑_k=1^N A_k^α A_k^βcos(ϕ_k^α - ϕ_k^β); * plaquette overlap distribution, P(𝒬), where the overlap among the plaquettes of two replicas is defined as <cit.> 𝒬_αβ = 1/N_4∑_kℰ_k^αℰ_k^β, with the plaquette variable given by ℰ_k^α = a_k_1^α (a_k_2^α)^* a_k_3^α (a_k_4^α)^* + c.c. and N_4 the number of quadruplets satisfying the FMC, as discussed in Sec. <ref>; * intensity fluctuation overlap distribution, P(𝒞), where the intensity fluctuation overlap (IFO) between two replicas of the system (see Eq. (<ref>)) is given by 𝒞_αβ = 1/N∑_k=1^N Δ_k^αΔ_k^β, where the intensity fluctuations are defined as Δ_k^α = (I_k^α - ⟨ I_k^α⟩)/(2 √(2)ϵ). §.§.§ Thermalization In order to guarantee that the data used to compute the displayed observables are taken from correctly equilibrated samples, thermalization can be tested in several ways. First, one can look at the energy relaxation on sequential time windows, each twice as long as the previous one. For each simulated heat bath dynamics, a minimal requirement is that the time average of the energy ⟨ℋ⟩ takes the same value at least on the last and second-to-last windows. A similar test is performed on the specific heat, by computing the energy fluctuations over the last and the second-to-last “logarithmic” window and checking that the values obtained for each temperature match within their statistical errors. However, one cannot rely solely on the trend of the energy and of its fluctuations over time to assess thermalization. In fact, as mentioned before, due to the presence of energy barriers that scale exponentially with the system size, one may mistake a single local minimum for a good equilibrium state. Therefore, a further and stronger requirement for thermalization is the symmetry of the Parisi overlap distribution P_J(q) for each disordered sample, which can be tested by checking that its skewness is approximately zero in the low temperature phase. Once dynamical thermalization to equilibrium has been assessed and a thermalization time τ_ eq identified, the time average of Eq. (<ref>) coincides with the canonical ensemble average. The number of Monte Carlo steps necessary to reach thermalization for each simulated size is reported in Tables <ref> and <ref>, for the simulations performed to obtain the results of Ref. <cit.>. § EVIDENCE OF A RANDOM FIRST ORDER TRANSITION In this section we present the results of the numerical simulations in which evidence of a glass transition in the ML 4-phasor model was first reported <cit.>. Here, we are just interested in discussing the physical picture drawn from these simulations: more technical details on measurement and data analysis will be provided in the next chapter, where the results of new simulations are presented.
As already mentioned in the previous chapter, a Random First-Order Transition (RFOT), the paradigm of the glass transition <cit.>, is a mixed-order phase transition, characterized by the divergence of the thermodynamic susceptibilities and, at the same time, by the discontinuity of the order parameter at the static transition point. The former is the signature of a critical phenomenon, which is determined by a continuous second-order phase transition; the latter is, instead, a feature which is typical of first-order transitions, where the new dominant thermodynamic state is already present before the transition, differently from the continuous case, where it arises at the transition. The observables which help to investigate the presence of a RFOT[Remember that it is the static transition to be relevant for experiments on random lasers, and not the dynamic one, as it usually is in structural glasses. This is why our attention is devoted to the simulation of the equilibrium properties of the model at the static glass transition.] in the ML 4-phasor model are the specific heat (<ref>) and the overlap distribution function (<ref>). A singularity in the specific heat puts in evidence the second-order nature of the transition, whereas a jump in the overlap probability distribution P(q) is a signature of its first-order nature. In models with continuous variables, the P(q) is expected to be a distribution with a single peak in q=0 in the high temperature phase and to develop side peaks, as well, in the low temperature glassy phase. At finite N, of course, exact Dirac delta peaks in the P(q) appear as smoothed functions of q, due to finite-size effects. One does not have to confuse this finite-size behavior of the P(q) in a RFOT, with the behavior of the P(q) in the spin-glass transition of the SK model <cit.>, where the overlap distribution is expected to take a non-trivial shape, different from a bimodal one, even in the N→∞ limit. In Fig. <ref>, we display the behaviour of the specific heat and of the configuration overlap distribution function. The specific heat diverges as the size increases: in the inset panel data are collapsed in the critical region with an exponent 3/2, which turns out not to be compatible with a mean-field theory of second-order phase transitions. The reason why this comes about will be clarified in the next chapter: there, we will see how this unexpected exponent 3/2 turns out to be a preasymptotic finite size effect. The configuration overlap distribution function in Fig. <ref> turns out to be Gaussian in the low-𝒫 phase. Then, for 𝒫 > 𝒫_c, the distribution shows a clear deviation from Gaussianity, but only “shoulders” are displayed at the simulated sizes, rather than proper side peaks. These results can be compared with those obtained through numerical simulations of a 4-phasor model with random dilution of the same order of that induced by the FMC, see Ref. <cit.>. The comparison reveals two important differences: first, the scaling of the critical region in the case of random dilution yields an exponent of 1/2 which perfectly matches the expectations of standard ϕ^4 mean-field theory; secondly, in the case of random dilution the overlap distribution function exhibits clear secondary peaks at a finite distance from the origin in the high-𝒫 phase, signaling a glassy RSB phase. It is evident, then, that the finite-size effects are stronger in the mode-locked model than in the randomly diluted one. 
This is also quite intuitive: at the small simulated sizes the correlations induced by a deterministic selection rule are not negligible. In fact, the diluted mode-locked interaction graph, though containing a number of couplings smaller by a factor of order N than the fully-connected one, is still dense, suggesting compatibility with mean-field theory. However, up to the precision of the study reported here, the question whether the ML 4-phasor model is a mean-field theory or not remains open and needs a more refined analysis to be answered. Regarding the first-order nature of the transition, a stronger indication comes from the study of the plaquette overlap distribution (<ref>). At variance with the configurational overlap, which is computed over N variables, the plaquette overlap is computed over 𝒪(N^3) quadruplets, hence it is less plagued by finite-size effects. In Fig. <ref> we display both the plaquette overlap and the IFO (<ref>) probability distribution functions. In the low-𝒫 phase the plaquette overlap has a very peaked distribution at 𝒬≃ 0, while for 𝒫≈𝒫_c there is clear evidence of a secondary peak at 𝒬 > 0. A similar behavior is shown by the IFO distribution function: in this case, although at first sight there is no clear evidence of secondary peaks at high pumping rates (for the same reason as in the P(q)), non-Gaussian tails appear in the vicinity of the transition. To deepen the analysis one can study the moments of these distributions. In particular, it is useful to consider the third and fourth moments, which are related to the symmetry of the distribution and to the weight of its tails. Denoting by q a generic overlap (be it the Parisi, the plaquette or the intensity fluctuation overlap) and by Δ q = q - ⟨ q ⟩ its fluctuation, the skewness and the kurtosis of its distribution function are given by γ = ⟨Δ q^3 ⟩/⟨Δ q^2 ⟩^3/2         k = ⟨Δ q^4 ⟩/⟨Δ q^2 ⟩^2. In the high temperature phase, where the order parameter is zero, the distribution at finite size will be a Gaussian centered at zero, with k = 3. At the phase transition and in the low temperature phase the distribution will deviate from a Gaussian. A very useful quantity to measure the deviation from Gaussianity is the Binder parameter <cit.>, which is defined as ℬ = 1/2(3 - k ). If we are dealing with a first-order phase transition, then the Binder parameter displays a nonmonotonic, reversed-bell behavior, with a maximal deviation from Gaussianity in the coexistence region, where the distribution is bimodal. This is precisely what is observed for the IFO distribution function and has been reported in the inset of panel (b) in Fig. <ref>, where ℬ is plotted as a function of the effective temperature T=𝒫^-2. As a term of comparison, we refer to Ref. <cit.>, where the Binder parameter is computed in numerical simulations of both the SK model and the Ising p-spin model (with p=3). In the former case, characterized by a continuous transition to a FRSB phase, the Binder parameter is zero at high temperature and then increases monotonically. Moreover, the curves of the Binder parameter for different sizes exhibit a crossing, from which the critical temperature of the model can be estimated (cf. also Ref. <cit.> for the case of the Edwards-Anderson model in two or three dimensions).
On the other hand, in the case of the Ising p-spin model, characterized by a discontinuous transition to a 1RSB phase (regarding the distribution of the order parameter) the Binder parameter exhibits the reverse bell behavior that has been here reported in the case of the IFO probability distribution function. For the study of the plaquette overlap distribution, one can not use the Binder parameter as a good indicator because, although clearly bimodal at the transition, the distribution is not Gaussian far from the transition. However, in order to study the bimodal nature of the distribution at the transition, one can introduce another parameter, the so-called bimodality parameter, which is defined as b = γ^2 + 1/k + 3(n-1)^2/(n-2)(n-3) where n is the number of data composing the histogram of the probability distribution. In the inset of panel (b) in Fig. <ref> the behavior of b as a function of temperature is reported: the region where the parameter b signals a bimodal distribution of the overlap is precisely the interval of pumping rates around 𝒫_c. CHAPTER: UNIVERSALITY CLASS OF THE TRANSITION In this chapter, the study of the static glass transition in the ML 4-phasor model is improved with respect to the previous analysis (Chap. <ref>). The main goal is to determine the universality class of the model, through a refined finite-size scaling analysis of the transition. In particular, we aim to assess whether the unexpected scaling exponent of the critical region found in Ref. <cit.> is a genuine non-mean-field feature of the mode-locked diluted model or if it is only a preasymptotic effect, due to the very strong finite-size effects, which derive from the difficulty to simulate large enough sizes in dense disordered models. First, a preliminary argument for the scaling of the critical region in mean-field theory is presented in a very general way, which does not require any specific knowledge about the glass transition, besides its second-order nature concerning the divergence of the susceptibilities at the critical temperature. The outcome of the analysis is that the exponent for the scaling of the critical region in a generic mean-field model must fall in a compact interval and its specific value is related to the order of the leading nonlinear term of interaction. The result obtained is coherent with the scaling exponent of the Random Energy Model (REM), the simplest model exhibiting a glass transition <cit.>. In the second part of the chapter, results obtained from new numerical simulations are presented <cit.>. Besides performing simulations of the original ML 4-phasor model increasing the number of simulated sizes, temperatures and samples, a slightly different version of the model is introduced, in order to reduce the specific kind of finite-size effects induced by the FMC. In particular, the bond filtering action of the FMC is considered on a frequency space with periodic boundary conditions, leading to simulations of the bulk spectrum of the original model. The study of the critical region in both cases leads to a reduction of the value of the exponent for the scaling of the critical region, which turns out to fall into the interval defined by the mean-field argument, inside the experimental uncertainty. Finally, the study of the glass transition is completed by analyzing data in order to obtain the overlap distribution functions introduced in the previous chapter. 
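For reference, the shape indicators used in the analysis of the overlap distributions (skewness, kurtosis, Binder parameter and bimodality parameter) can be estimated from a sample of measured overlaps as in the following minimal sketch (Python; the Gaussian test sample at the end is only a sanity check).

```python
import numpy as np

def shape_indicators(q):
    """Skewness, kurtosis, Binder and bimodality parameters of a sample of overlaps."""
    q = np.asarray(q, dtype=float)
    n = q.size
    dq = q - q.mean()
    m2, m3, m4 = (dq ** 2).mean(), (dq ** 3).mean(), (dq ** 4).mean()
    gamma = m3 / m2 ** 1.5                      # skewness
    kurt = m4 / m2 ** 2                         # kurtosis (3 for a Gaussian)
    binder = 0.5 * (3.0 - kurt)                 # B = (3 - k)/2
    bimod = (gamma ** 2 + 1.0) / (kurt + 3.0 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
    return gamma, kurt, binder, bimod

# on a Gaussian sample: gamma ~ 0, k ~ 3, B ~ 0, b ~ 1/3
rng = np.random.default_rng(3)
print(shape_indicators(rng.normal(size=100_000)))
```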
§ A MEAN-FIELD ARGUMENT FOR THE SCALING OF THE CRITICAL REGION Second-order phase transitions are characterized by scale invariance at criticality: in finite-dimensional systems (e.g. lattices in d dimensions), fluctuations extend over regions of all possible sizes and a characteristic scale of length no longer exists. As a consequence, the behavior of physical quantities near the critical point is described with respect to some control parameter, e.g. temperature, by power laws, whose exponents are universal, in the sense that they depend only on very general properties of the system (such as the dimensionality of space d, the dimensionality of the order parameter and the symmetries of the Hamiltonian), but not on the details of the microscopic interactions <cit.>. Out of the mean-field approximation, the critical exponents can be computed with renormalization group techniques, such as the ϵ-expansion <cit.>, or can be extrapolated from the study of the system at finite size L (i.e. the linear size of a lattice), through the finite-size scaling analysis <cit.>. However, here we are interested in the case of infinite-dimensional models defined on graphs not embeddable into any space with finite dimensions d, for which mean-field theory is exact. Even in this case, the scaling regime amplitude has a dependence on the system size N (i.e. the number of nodes in the graph of interactions), governed by an exponent, which we will denote as ν_ eff, through the following relation |T-T_c| ∼ 1/N^1/ν_ eff, where T is an effective control parameter for the transition - from now on we will refere to it as temperature - and T_c denotes the critical point. We will make sense of ν_ eff in terms of standard critical exponents in the following. The prototype of a mean-field theory of continuous phase transitions is represented by the Landau effective potential[Let us stress that ϕ is a global quantity, not a local magnetization field ϕ(x) as in the Landau-Ginzburg λϕ^4 theory. The potential V(ϕ) in Eq. (<ref>) is the result of the Landau approximation of the λϕ^4 field theory, which consist in taking ϕ(x)=ϕ for all points in space, hence neglecting the Laplacian term: this is nothing but a mean-field approximation.] V(ϕ) = τ/2ϕ^2 + g/4!ϕ^4, where ϕ is the global order parameter of the transition, τ = T/T_c - 1 is the reduced temperature and g is the coupling constant. The probability distribution of the order parameter ϕ is p(ϕ) = e^-N V(ϕ)/Z, where the partition function Z is given by Z = ∫ϕ e^-N V(ϕ). In a second-order transition, the relevant quantities that have to be considered are the fluctuations of the order parameter around the minimum of the effective potential δϕ^2 = ⟨ϕ^2 ⟩ - ⟨ϕ⟩^2, where the brackets denote the average with respect to the distribution p(ϕ). The critical behaviour of the susceptibilities, which include the specific heat, is related to the fluctuations of the field near the critical point. In order to estimate the value of the exponent ν_ eff we aim to match the fluctuations above and below T_c. When τ≳ 0 the effective potential is well approximated by V(ϕ) ≈1/2τϕ^2 for values of the order parameter close enough to the minimum ϕ^* = 0. The probability distribution of ϕ is a zero-mean Gaussian distribution and its normalization is given by Z ≈∫ϕ e^- Nτ/2ϕ^2∼1/√(Nτ). In this regime, the fluctuations of the order parameter centred around the minimum ϕ^* = 0 are given by the variance of the Gaussian distribution: δϕ^2_ _T>T_c∼ 1/Nτ. 
We notice that the previous equation corresponds to the usual scaling law for the linear susceptibility above the critical point in the standard mean-field ϕ^4 theory, χ∼τ^-γ with γ = 1, since δϕ^2 ∼χ/N, see, e.g., Ref. <cit.>. On the other hand, when τ≲ 0 the quartic term of the potential becomes relevant and cannot be neglected. In this regime, the fluctuations of the order parameter are centered around one of the two symmetric minima of the potential, namely ϕ_± = ±ϕ^*, depending on the initial conditions. Since we are interested in matching the fluctuations above and below the critical temperature, we assume the temperature to be sufficiently close to T_c for the amplitude of the fluctuations to be of the order of the distance of the minimum from the origin: δϕ^2_ _T<T_c∼ (ϕ^*)^2. Clearly, this is no longer valid well below T_c, where the curvature of the potential has to be considered. The minima ϕ_± can be easily determined according to the saddle-point approximation of the partition function Z = ∫ϕ e^ - N V(ϕ)≈ e^ - N V(ϕ^*), where ϕ^*, such that ∂ V(ϕ)/∂ϕ|_ϕ^* = 0, is ϕ^* = √(-6 τ/g). Hence, δϕ^2_ _T<T_c∼ -τ/g. Therefore, we have an estimate of the fluctuations on the two sides of the critical point, respectively δϕ^2_ _T>T_c ∼ 1/(Nτ) and δϕ^2_ _T<T_c ∼ -τ/g. By matching the previous expressions we find δϕ^2_ _T>T_c∼δϕ^2_ _T<T_c   ⟹   |τ| ∼1/N^1/2, yielding the estimate ν_ eff = 2. Notice that Eq. (<ref>) corresponds to the usual scaling law for the order parameter below T_c <cit.>, ϕ∼ (- τ)^β, with exponent β = 1/2 for the mean-field ϕ^4 theory. In terms of the exponents β and γ defined by the usual scaling laws, Eq. (<ref>) can be written as |τ|^2β∼ |τ|^-γ/N   ⟹   |τ| ∼1/N^1/(2β +γ). This allows us to identify the scaling exponent ν_ eff for the critical region of infinite-dimensional models with the expression ν_ eff = 2 β + γ. Therefore, ν_ eff can be taken just as a shorthand symbol for the expression 2 β + γ. This argument can be straightforwardly extended to more general mean-field potentials, in order to obtain a range of values for the exponent ν_ eff. Let us consider the family of potentials V(ϕ) = 1/2τϕ^2 + g/(2n)!ϕ^2n, where the choice of an even lowest-order non-linearity is still compatible with the phenomenology of a second-order phase transition. The fluctuations of the order parameter above the critical temperature are the same as in the case n=2, which means that the exponent γ=1 for every n. Below the critical temperature, by using the saddle-point method we find ϕ^* = [-(2n-1)! τ/g]^1/(2(n-1)). Incidentally, this means that the general expression of the exponent β for the scaling of the order parameter in a ϕ^2n mean-field theory is β = 1/(2(n-1)). Therefore, the amplitudes of the fluctuations above and below T_c are given by δϕ^2__T>T_c ∼ 1/(Nτ) and δϕ^2__T<T_c ∼ (-τ/g)^1/(n-1), and their matching leads to δϕ^2_ _T>T_c∼δϕ^2_ _T<T_c   ⟹   |τ| ∼1/N^(n-1)/n. The range of values that ν_ eff = 2β + γ can take in a mean-field model can be found by taking n=2 and n →∞ in the previous expression. Thus, we have 1 < ν_ eff≤ 2    ⟺   1/2≤1/ν_ eff < 1 . With respect to this argument, it is clear that the value 1/ν_eff = 3/2 found in Ref. <cit.> lies outside the interval of mean-field values of the exponent ν_eff. It is worth noting that the validity of this argument is restricted to the large-N limit, where the saddle-point approximation holds. This is something that we have to keep in mind when comparing results of numerical simulations at finite N with the estimates of the scaling exponents obtained from the previous argument.
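The algebra of the matching argument can be checked symbolically; the short sketch below (Python with sympy, used here only as a sanity check) recovers |τ| ∼ N^-(n-1)/n for the ϕ^2n family by solving the matching condition in logarithmic variables.

```python
import sympy as sp

logN, logtau = sp.symbols("logN logtau", real=True)

for n in range(2, 7):
    # matching the fluctuations above and below T_c:
    #   1/(N*|tau|) ~ |tau|^(1/(n-1))   =>   -logN - logtau = logtau/(n-1)
    sol = sp.solve(sp.Eq(-logN - logtau, logtau / (n - 1)), logtau)[0]
    print(f"n = {n}:  |tau| ~ N^({sp.simplify(sol / logN)})")   # expect -(n-1)/n
```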
§.§ The Random-Energy Model In order to compare the previous argument with a well-known model, we have performed a numerical analysis of the Random-Energy Model (REM), which is the reference mean-field model for disordered systems with non-linear interactions. Let us briefly review the main features of the model, following Ref. <cit.>, before presenting our results. The REM is the simplest statistical mechanics model of a disordered system exhibiting a phase transition <cit.>. It was originally introduced by Derrida in Refs. <cit.> as an exactly solvable model arising from the limit p→∞ of the p-spin model. As a result of this procedure, the model does not take into account any specific interaction among the variables: the energy is a random process rather than a deterministic function. For this reason, the REM has the remarkable advantage that results obtained through the replica method can be compared with formal mathematical approaches, see e.g. Refs. <cit.>. Given M=2^N energy levels (possibly corresponding to the configurations of N Ising spin variables), the corresponding energies { E_ν}_ν∈{1,…,M} are taken as independent random Gaussian variables[A generalized version of the model, where correlations among the energy levels are introduced in a hierarchical way, has been formulated by Derrida in Ref. <cit.> and exactly solved by Derrida and Gardner in Ref. <cit.>. Here, we are just interested in the original version of the model, since the generalized version is in the same universality class.] extracted from the distribution function p(E) = 1/√(π N J^2)exp( -E^2/(N J^2) ), where the scaling of the variance with N ensures the extensivity of the thermodynamic potentials and J is a parameter. An instance of the quenched disorder corresponds to an extraction of the M energy levels. A Boltzmann weight p_ν = exp(-β E_ν) / 𝒵 is then assigned to each configuration, where 𝒵 is the partition function of the model, 𝒵 = ∑_ν=1^M e^-β E_ν. We have developed a simple enumeration algorithm to study the REM, which works as follows: for each disorder sample of a given system size N, the energy levels {E_ν} are generated by independently extracting a set of 2^N random numbers from the Gaussian distribution Eq. (<ref>) with J=1. A set of equispaced temperatures T is generated in the interval [T_min, T_max]. The internal energy of the model is computed as a function of temperature by evaluating the thermal average ⟨ E ⟩ = ∑_ν E_ν e^-β E_ν/∑_ν e^-β E_ν for each of the values β=1/T defined before. The specific heat is computed from the fluctuations of the internal energy as c_V_N,i(T) = (⟨ E^2 ⟩ - ⟨ E ⟩^2)/(N T^2), where the index i labels the sample. The procedure is repeated for several independent extractions of the random energies {E_ν}. Eventually, when a sufficiently large number of samples N_sam is collected for a given simulated size, the disorder average of the specific heat is computed by averaging over the samples: c_V_N(T) = 1/N_sam∑_i=1^N_sam c_V_N,i(T). The number of samples N_sam is chosen in such a way that the estimated value of the specific heat remains stable upon fluctuations of the set of samples included in the average. §.§ Finite-Size Scaling Analysis Let us momentarily consider a model defined on a d-dimensional lattice of linear size L. We consider an observable Y_L that in the thermodynamic limit scales like Y_∞(T) ≈ A t^-ψ near the critical point, where t is the modulus of the reduced temperature, A is some constant and ψ some critical exponent.
The fundamental assumption of the Finite-Size Scaling (FSS) Ansatz <cit.> is that the behaviour of Y_L near the critical temperature is controlled by the ratio ξ_∞/L. The parameter ξ_∞ denotes the correlation length of the infinite-size system that scales as ξ_∞(T) ≈ξ_0 t^-ν, where ξ_0 is a constant. The previous equation is the definition of the critical exponent ν governing the scaling of the correlation length. The scaling hypothesis for Y_L can be then written as Y_L(T) = L^ω f_Y(ξ_∞/L), where ω is the critical exponent for the scaling of the peak of the observable and f_Y is a dimensionless function that depends on the observable Y. The function f_Y is such that in the limit L →∞ one recovers the scaling law Y_∞(T) ≈ A t^-ψ, therefore ω = ψ/ν <cit.>. Moreover, by exploiting Eq. (<ref>) the scaling relation (<ref>) can be rewritten as Y_L(T) = L^ψ/νf̂_Y(L^1/ν t_L) where t_L = |T/T_c(L) - 1|, being T_c(L) the finite-size critical temperature, and f̂_Y is another scaling function that depend on Y. It is important to notice that, since the lattice size L is linked to the number of degrees of freedom N by L=N^1/d, the scaling relation for the observable Y can be written as Y_N(T) = N^ψ/ ν df̂_Y(N^1/ν d t_N) i.e. in terms of N, rather than L. By considering the well-known hyperscaling relation ν d = 2β + γ <cit.>, we find Y_N(T) = N^ψ/(2β + γ)f̂_Y(N^1/(2β + γ) t_N). This is the only possible scaling relation that one can use when studying an infinite-dimensional model, where there is no lattice size L, but the only finite-size parameter is N. In the case of the specific heat, the previous finite-size scaling law takes the following form (see e.g. <cit.>) c_V_N(T) = N^α/ν_ efff̂_C_V_N(N^1/ν_ eff t_N), where α denotes the critical exponent of the specific heat peak divergence, and we have used fact that ν_ eff = 2β + γ. Since the dimensionless function f̂ is scaling invariant, if one uses the correct values of the exponents α and ν_ eff, the curves c_V_N(T)/N^α/ν_ eff for different values of N should collapse on the same curve. In Fig. <ref> we show the finite-size behaviour of the specific heat around the critical temperature. In the main panel, the specific heat is plotted as a function of temperature, for different system sizes. The simulated sizes are N=16,20,24,28, and for each size N_sam=100. In the inset data belonging to different sizes are collapsed near the critical temperature with exponents α = 0.52 ± 0.07 and ν_ eff = 2β + γ = 1.94 ± 0.22. As expected, due to the fact that the scaling hypothesis holds near the critical point, the collapse of the data succeeds around the peaks of the specific-heat curves. Moving away from the critical temperature the curves begin to separate one from another. In the inset we show only the interesting part of the collapse. At variance with the result found in Ref. <cit.> for the ML 4-phasor, the result 1/ν_ eff = 1/2 for the scaling exponent of the critical region is in perfect agreement with the mean-field argument we have presented at the beginning of the chapter, in the case of a Landau potential. Moreover, we notice that the same behavior of REM specific heat divergence, has been found in a 4-phasor model with a random diluted topology <cit.>. Both these model belong to the same universality class, the one of the standard ϕ^4 Landau theory. 
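The enumeration procedure for the REM described above is simple enough to be reproduced in a few lines; the following sketch (Python, with illustrative sizes, temperature grid and sample numbers) computes the disorder-averaged specific heat from the energy fluctuations, using shifted Boltzmann weights for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(4)

def rem_specific_heat(N, temps, n_samples, J=1.0):
    """Disorder-averaged specific heat of the REM from exact enumeration."""
    cV = np.zeros_like(temps)
    for _ in range(n_samples):
        # one disorder sample: 2^N i.i.d. levels with p(E) ~ exp(-E^2/(N J^2))
        E = rng.normal(0.0, np.sqrt(N * J ** 2 / 2.0), size=2 ** N)
        for i, T in enumerate(temps):
            w = np.exp(-(E - E.min()) / T)       # shifted Boltzmann weights
            w /= w.sum()
            E1, E2 = np.dot(w, E), np.dot(w, E ** 2)
            cV[i] += (E2 - E1 ** 2) / (N * T ** 2)
    return cV / n_samples

temps = np.linspace(0.4, 1.6, 25)                # illustrative temperature grid
for N in (12, 16):
    cV = rem_specific_heat(N, temps, n_samples=20)
    print(N, "peak of c_V at T =", temps[np.argmax(cV)])
```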
§ SIMULATION WITH PERIODIC BOUNDARY CONDITIONS IN THE FREQUENCY SPACE There are many sources of finite-size effects when simulating a dense model, most of which can be coped with by using powerful algorithms, such as the PT Monte Carlo algorithm. However, the ML 4-phasor model has another specific source of finite-size effects induced by the FMC. As discussed in the previous chapter, because of this condition, modes near the boundaries of the spectrum (k≳ 1, k≲ N) interact much less than modes whose frequency lies in the middle of the spectrum (k∼ N/2). Though their dynamic evolution is taken into account in the simulations, the edge modes are less and less important as the external pumping increases. To circumvent this problem we introduce a slightly different model network, imposing periodic boundary conditions on the frequencies <cit.>. Since the FMC can be thought of as a distance on the graph, we can introduce a distance with periodic boundary conditions. This has the effect of eliminating band-edge modes, or, equivalently, it is like considering only modes at the center of the spectrum, as if they pertained to a larger system. The periodic boundary conditions on the frequencies are obtained in practice by representing the frequency indices as variables on a ring, see Fig. <ref>, and taking their distance as the smallest one between any two of them d(k_i,k_j) = {[ |k_i-k_j|, if |k_i-k_j|≤ N/2; N-|k_i-k_j|, if |k_i-k_j| > N/2 ] . The generalization to the case of quadruplets is straightforward and will be denoted as d_k^PBC. In Fig. <ref> we plot the distribution of tetrads per mode k. Data in green pertain to the original mode-locked network built by selecting modes according to d_k=0; data in purple pertain to the mode-locked network with periodic boundary conditions on the frequencies, where modes are selected according to d_k^PBC=0. The distribution is approximately flat in the case of modes selected by the FMC with periodic boundary conditions, whereas it is peaked at the center of the spectrum in the original case. From now on, we will refer to the version of the model with periodic boundary conditions on the frequencies as PBC, whereas the original one, with free boundary conditions, will be termed FBC. An important remark is that, for a certain number of modes N, the total number of quadruplets N_4^* satisfying the FMC with periodic boundary conditions is slightly greater than the corresponding number in the case of free boundary conditions, since all the modes participate in approximately the same number of interactions. However, the order of the dilution with respect to the fully-connected graph is the same in the two cases. We anticipate that, since with periodic boundary conditions the central modes are no longer preferred by the interactions, the intensity spectrum of the model is clearly affected: the global narrowing at high pumping is eliminated, see e.g. Fig. <ref>. Indeed, there is no longer any distinction between central modes and edge modes, though the interacting quadruplets are still selected through a deterministic condition. We lose the ability to qualitatively reproduce the central narrowing of random laser spectra, in exchange for a significant reduction of finite-size effects. §.§ Details of the Simulation We have performed numerical simulations of the ML 4-phasor model, both with free and with periodic boundary conditions on the frequencies.
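To make the construction of the two interaction networks concrete, the following sketch counts the tetrads each mode belongs to under the two selection rules. It is only an illustration: the reading of the quadruplet condition as k_1+k_4=k_2+k_3 (FBC), or the same relation modulo N (PBC), as well as all names, are our own assumptions, chosen because they reproduce the peaked versus flat profiles discussed above; the cost grows as N^4, so only small N are meant to be explored.

```python
import numpy as np
from itertools import combinations

def ring_distance(ki, kj, N):
    """Distance between two frequency indices on a ring of N modes."""
    d = abs(ki - kj)
    return min(d, N - d)

def tetrads_per_mode(N, pbc=False):
    """Number of FMC tetrads each mode k = 1..N belongs to (assumed rule)."""
    counts = np.zeros(N + 1, dtype=int)
    for k1, k2, k3, k4 in combinations(range(1, N + 1), 4):
        if pbc:
            # on the ring, any of the three pairings may close modulo N
            ok = any(s % N == 0 for s in (k1 + k4 - k2 - k3,
                                          k1 + k3 - k2 - k4,
                                          k1 + k2 - k3 - k4))
        else:
            # with k1 < k2 < k3 < k4 only the pairing k1 + k4 = k2 + k3 can vanish
            ok = (k1 + k4 == k2 + k3)
        if ok:
            for k in (k1, k2, k3, k4):
                counts[k] += 1
    return counts[1:]

fbc = tetrads_per_mode(32)            # peaked at the center of the spectrum
pbc = tetrads_per_mode(32, pbc=True)  # flat: every mode enters the same number of tetrads
```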
First, the number of interacting quadruplets N_4 is chosen as a power of 2. Then, the corresponding size N is selected in such a way that the difference between N_4 and the true number of tetrads N_4^* is minimum, as explained in the previous chapter. Since N_4^* is greater with PBC than with FBC, we managed to perform simulations in the PBC case with networks which are larger in terms of the number of couplings, though with smaller sizes. In both cases, however, the simulated network with the highest number of quadruplets has N_4=2^17. For each size N of the simulated systems, we have run PT simulations with N_PT replicas of the system at temperatures T_i ∈ [T_ min, T_ max], with i=1,…,N_ PT. On top of that, we have run N_rep identical replicas of the PT simulations in order to compute overlap distributions. The temperature interval has been chosen so as to have a sufficient number of replicas above the size-dependent critical temperature T_c(N) and to ensure faster thermalization of the replicas in the low temperature glassy phase. We have chosen equispaced temperatures with spacing Δ T = 0.025 and a number of Monte Carlo steps N_swap=64 after which a swap of configurations between adjacent heat baths is proposed. Both these choices have been checked to be compatible with high acceptance ratios for the swaps for the whole duration of the simulation. Each copy of the system at each temperature shares the same realization of the quenched disordered couplings {J_k}. The number of simulated disordered samples has been chosen in relation to the size of the system. The values of the simulation parameters are reported in Table <ref> for the model with FBC and Table <ref> for the model with PBC. § NUMERICAL RESULTS We devote this section to the results of the Monte Carlo analysis of the ML 4-phasor model Eq. (<ref>) with FBC and PBC on the frequencies. §.§ Spectra The first observable we display is the intensity spectrum Eq. (<ref>). Let us first briefly comment on the relationship between the physical intensities I_k and the complex amplitude variables of the simulated model (<ref>). In real experiments the heat bath temperature T is typically kept fixed (there are exceptions, see e.g. Ref. <cit.>) and the overall system energy ℰ=ϵ N is varied by tuning the pumping power. As already discussed (Chap. <ref>), in our simulations ϵ is fixed and kept equal to one in the spherical constraint, ∑_k |a_k|^2 = ∑_k A_k^2 = N, whereas T is varied. Therefore, according to 𝒫=ϵ/√(T), a change in the pumping rate 𝒫 due to a shift in the energy ϵ pumped into the system corresponds, in our simulations, to a shift in 1/√(T). If we rescale the intensity of the mode k as in Eq. (<ref>), i.e. I_k = A_k^2/√(T), we have ∑_k I_k = N/√(T) = N 𝒫, as in Eq. (<ref>). In Fig. <ref>, we show the emission spectra at equilibrium for the ML 4-phasor model with FBC and PBC, both for a single instance of disorder (panels (a,c)) and averaged over roughly a hundred instances of disorder (panels (b,d)). The most relevant difference between the model with FBC and PBC is the complete absence of global narrowing in the spectrum of the PBC case, which corresponds to the absence of band-edge modes: all modes interact with identical probability with the rest of the system. On the other hand, in the FBC case one can observe the typical central narrowing occurring in random lasers <cit.> as the pumping energy increases.
This phenomenon becomes particularly evident in the averaged spectrum (panel (b)), which is smoother than the single-sample one. Finally, we notice that the averaged spectrum in the PBC case looks like the central part of the FBC spectrum. One of the most relevant features of all the intensity spectra shown in Fig. <ref> is that they become more and more structured and heterogeneous upon decreasing the temperature. The pattern of the peaks is disordered and strongly depends on the random sample and on the single dynamic history. A first analysis of this phenomenon in terms of intensity equipartition breaking among the different modes has been performed in Ref. <cit.>, and a more detailed analysis of the collective inhomogeneous behavior of the modes will be presented in Chap. <ref>. §.§ Specific Heat In Figs. <ref> and <ref> we show the specific heat respectively for the ML 4-phasor model with FBC and PBC. In the main panels, data are plotted as a function of temperature: each point corresponds to the equilibrium energy fluctuations within a given heat bath, averaged over the disordered samples. In the insets, data are collapsed according to the scaling hypothesis previously discussed. The finite-size scaling analysis of the specific heat has been performed in a more refined way than in the simple case of the REM. In order to get the two exponents α and ν_eff=2β + γ of Eq. (<ref>) from our numerical data we follow the method proposed in Refs. <cit.>. First, the size-dependent critical temperatures T_c(N) are more precisely assessed by fitting the points around the peak of each curve in the main panels of Figs. <ref> and <ref> with a quadratic function of the temperature f_N(T)=a_N+b_NT+c_NT^2. The critical temperatures are identified with the maximum of each of the fitting functions, T_c(N)=-b_N/(2c_N), with a statistical error estimated accordingly. The results of this procedure are reported in Table <ref>. The critical temperature T_c(∞) of the models can be extrapolated from the fit of the finite-size critical temperatures with the following function: T_c(N) = T_c(∞) + a N^-b, where the exponent b gives a first rough estimate of the critical exponent 1/ν_eff. The results of the fit are T_c(∞) = 0.86 ± 0.03, b = 1.6 ± 0.5 for the model with FBC, and T_c(∞) = 0.61 ± 0.03, b = 0.98 ± 0.3 for the model with PBC. As one can see, already from this rough estimate the model with PBC has an exponent 1/ν_eff falling inside the interval (<ref>) derived for generic mean-field models at the beginning of this chapter. Compared to the estimate obtained from the FBC data, there appears to be a drastic reduction of finite-size effects. Then we take the following Ansatz on the form of the scaling function f̂ in Eq. (<ref>): f̂(x) = A + C x^2, where x=N^1/ν_eff t_N, with t_N computed by using the T_c(N) reported in Table <ref>. In the previous Ansatz we have not included the linear term, since the points are translated so that the peak of each curve lies at the origin, and we expect the linear term not to matter. With this Ansatz the scaling hypothesis for the specific heat Eq. (<ref>) reads as c_V_N(T) = Ã_N + C̃_N t_N^2, where X̃_N = X_N N^(α + x)/ν_eff, with X_N = {A_N,C_N} and, correspondingly, x={0,2}. For each size we select a temperature interval centered at T_c(N), corresponding to the points plotted in the insets of Figs. <ref> and <ref>. We fit the points in the selected interval with the previous function and determine the values of the coefficients. We notice that the logarithm of the absolute value of the coefficients, i.e.
log|X̃_N| = log |X_N| + (α + x)/ν_eff log N, is linear in log N; hence, the estimates of α and ν_eff can be obtained by a linear fit. For the systems with FBC, this finite-size analysis provides the following values for the critical exponents: α=0.48 ± 0.05, 1/ν_eff=1.1 ± 0.16. With respect to the estimate 1/ν_eff≃ 3/2 found in Ref. <cit.>, the much larger statistics allows us to find an estimate of 1/ν_eff closer to, and compatible with, the mean-field threshold, suggesting that deviations from mean-field theory might be due to pre-asymptotic effects in N. The confirmation that this is, indeed, the origin of the anomalous value previously found for 1/ν_eff comes from the analysis with PBC. In this case, the critical exponents turn out to be α=0.27 ± 0.05, 1/ν_eff=0.86 ± 0.14. With PBC we find an estimate of 1/ν_eff well below the threshold for a mean-field universality class. Therefore, up to the precision of our analysis, and although the model may still belong to a different universality class than the REM, for which 1/ν_eff =1/2, we can assess the mean-field nature of the glass transition in the ML 4-phasor model. §.§ Overlap Distribution Functions In this section we complete the study of the glass transition in the ML 4-phasor model by presenting results of our numerical simulations concerning the first-order nature of the transition. §.§.§ Parisi overlap Let us first discuss the case of the Parisi overlap distribution (<ref>). The protocol used in numerical simulations to measure overlaps corresponds to the definition of replicas as independent copies of the system with the same quenched disorder. For each sample, i.e. each realization of disorder, we run the dynamics independently for N_rep replicas of the system, starting from randomly chosen initial phasor configurations. In this way, replicas explore different regions of the same phase space, passing through configurations typically belonging to separate equilibrium states, if many of them exist, and sometimes to the same state. To study the behavior of the P_J(q) we choose N_rep=4, so that at any measurement time six values of the overlap are available, q_αβ∈{q_01, q_02, q_03, q_12, q_13, q_23}. Hence, passing from two replicas to four increases the statistics by a factor six. In order to accumulate statistics, we measure the value of q_αβ using 𝒩 equilibrium, time-uncorrelated configurations of the replicas at the same iteration of the simulated dynamics, see Eq. (<ref>). Hence, for each disordered sample the P_J(q) histograms are built with 𝒩× N_rep(N_rep-1)/2 values of the overlap. The overlap distribution functions P_J(q) are computed as the normalized histograms of the overlaps for each one of the samples. This has been done for each simulated size of the ML 4-phasor model with both FBC and PBC. In Fig. <ref> we present the overlap distributions for five samples of the size N=54 of the ML 4-phasor model with PBC, at the temperature T=0.25≃0.45 T_c, together with the overlap distribution averaged over 100 samples. Given the fluctuations of P_J(q) among the different samples, it is clear that the only physical quantity to be considered in order to assess the glass transition is the disorder-averaged distribution P(q), i.e., P_J(q) averaged over the samples.
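A minimal sketch of this measurement protocol is the following (the array layout and the explicit overlap formula q_ab = Re ∑_k a̅_k^a a_k^b / N are our own assumptions, to be replaced by the definition of Eq. (<ref>) if it differs).

```python
import numpy as np
from itertools import combinations

def overlap_histogram(configs, bins=61, q_range=(-1.5, 1.5)):
    """P_J(q) for a single disorder sample.

    configs: complex array of shape (n_rep, n_meas, N) collecting, for each of
    the n_rep real replicas, n_meas uncorrelated equilibrium configurations of
    the N phasors taken at the same Monte Carlo iterations.
    """
    n_rep, n_meas, N = configs.shape
    qs = []
    for a, b in combinations(range(n_rep), 2):
        q_ab = np.real(np.sum(np.conj(configs[a]) * configs[b], axis=-1)) / N
        qs.append(q_ab)
    qs = np.concatenate(qs)   # n_meas * n_rep*(n_rep-1)/2 overlap values
    hist, edges = np.histogram(qs, bins=bins, range=q_range, density=True)
    return hist, edges

# disorder average: same couplings within a sample, independent samples across;
# the averaged P(q) is the bin-by-bin mean of the single-sample histograms
# P_avg = np.mean([overlap_histogram(c)[0] for c in sample_configs], axis=0)
```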
This is particularly important in the case of the overlap distribution function, since it is not a self-averaging quantity <cit.>: the average P(q) cannot be reached simply by increasing the size of the system over which a single-sample P_J(q) is built, as is the case, for instance, for the free energy or the specific heat, but only by averaging over disorder. In Fig. <ref> and Fig. <ref> the average overlap distribution functions of the ML 4-phasor model with FBC and PBC are, respectively, reported for the whole simulated temperature range, in systems whose sizes correspond to N_4=2^14, i.e. N=62 and N=54 spins, respectively. The reduction of finite-size effects obtained by using periodic boundary conditions in the choice of the interacting modes leads to a P(q) with more distinct secondary peaks in the case of the ML 4-phasor model with PBC. We have shown the overlap distribution functions for N=62 (model with FBC) and N=54 (model with PBC), because these are the largest sizes for which we performed simulations with N_rep=4. However, it can be useful to show also the behavior of the P(q) for a larger size. In Fig. <ref> we show the overlap distribution function for the size N=82 of the model with periodic boundary conditions. One can see that the side peaks of the distribution at low temperatures are not appreciably more pronounced than those of the distribution for N=54. However, a reduction of the finite-size effects with respect to N=54 can be appreciated at high temperature, where the distribution is still Gaussian, but slightly narrower around the peak at q=0. We expect that in the large-N limit the overlap distribution function at high temperature becomes a delta function peaked at q=0. §.§.§ Plaquette overlap and IFO The study of the plaquette overlap and IFO distribution functions has been performed for the model with both FBC and PBC. In the following we only display results for the PBC case, which is less affected by finite-size effects. In Fig. <ref> the plaquette overlap distribution is plotted for all the simulated temperatures and for the system size N=54. For each of the 𝒩 uncorrelated configurations at equilibrium, the plaquette overlaps are computed over N_4 ∼ O(N^3) quadruplets, leading to a reduction of the statistical error on the overlap values. Then, for each sample, the distribution is computed with 𝒩× N_rep(N_rep-1)/2 values of the plaquette overlap. Data in the figure are averaged over N_s = 100 disordered samples. In the low temperature region, the distribution clearly develops a nontrivial shape, with a heavy tail at positive values of the plaquette overlap. The presence of the three visible peaks in the distribution tail at the lowest temperatures is a consequence of the single-sample behavior of the plaquette overlap distribution, which we present in Fig. <ref> for the simulated temperature T=0.25. As one can see, the shape of the distribution changes dramatically from sample to sample: some instances of disorder display a single peak in the tail, others two peaks, at sample-dependent positions. The effect of taking the average over all the simulated samples is reported in the sixth panel in Fig. <ref>, which corresponds to the lowest-T curve in Fig. <ref>. We notice that, even with N_s = 100 disordered samples, the average is far from a smooth function, revealing the strong lack of self-averaging of the plaquette overlap distribution. In Fig. <ref> the averaged IFO distribution is displayed for the system size N=54, at all simulated temperatures.
Even with the significant reduction of finite-size effects obtained through the PBC, the distribution obtained with our data at equilibrium does not show clear side peaks in the low temperature region. Actually, side peaks are more evident very close to the transition temperature (gray curves in figure), rather than for the lowest simulated temperatures. Once again, we get a clear understanding of the averaged distribution by looking at single instances of disorder. In Fig. <ref>, one can see that, while many samples have a distribution which is still peaked in 𝒞=0, a signature that they may not have entered in the glassy phase yet, in many others the P(𝒞) has developed nice side-peaks at the temperature T=0.25. However, when taking the average, the combined effect of samples of the first kind and of the variable position of the peaks in samples of the second kind, leads to the smoothed and almost flat shape of the IFO distribution at low temperature. CHAPTER: BREAKING OF EQUIPARTITION AND PSEUDO-LOCALIZATION AT THE TRANSITION The present chapter is devoted to the study of a particular phenomenon taking place in the ML 4-phasor model, which has been mentioned previously in this work, when describing the intensity emission spectra of random lasers: intensity localization of light, else termed power condensation <cit.>, and its relationship to the high pumping replica symmetry breaking (RSB) random lasing phase. Localization is a widespread phenomenon in physics, which has always been related to the breaking of ergodicity. In the pioneering work of Anderson on semiconductors, the spatial localization of the electron wavefunction induced by a large degree of disorder was identified as the underlying mechanism for a semiconductor/insulator transition <cit.>. The absence of diffusion in the insulating-localized phase was interpreted as a clear manifestation of dynamical ergodicity breaking. Besides the seminal work of Anderson, localization was related to ergodicity breaking also in classical systems, since the famous numerical study of Fermi-Pasta-Ulam-Tsingou (FPUT) on the anharmonic chain <cit.>. In this case, localization was observed in the Fourier space of the chain modes. Starting from an atypical initial condition, e.g. only the lowest harmonic excited, one would expect that a slight anharmonicity is sufficient to cause the system to relax on a state where the energy is equally divided among all the modes[If one initializes a harmonic chain on a eigenmode of the Hamiltonian, the time evolution will leave the system on that eigenmode, resulting in a breaking of ergoditicy. Nonlinearity is believed to facilitate the recovery of ergodicity, because it introduces a coupling which makes the eigenmodes of the unperturbed Hamiltonian more connected.]. Contrary to expectations, the system showed a recurrent dynamics for all the duration of the experiment, with no sign of relaxation to equipartition. Therefore, localization was coming along with dynamical ergodicity breaking also in the case of energy localization in the Fourier power spectrum. Ergodicity breaking was understood as a purely dynamical phenomenon until the '80s, when the theory of replica symmetry breaking was established as a new thermodynamic paradigm and statistical ensembles formalism for ergodicity breaking transitions in complex disordered systems. 
Quite interestingly, while localization phenomena in quantum many-body systems have been widely investigated during the last decades <cit.>, they have seldom been probed in disordered systems, apart from the attempts of Refs. <cit.>. The lack of a broad analysis of localization phenomena in the context of disordered glassy systems is mainly due to the nature of the variables which are customary for these systems: Ising, XY or Heisenberg spins, all locally bounded, |s⃗|=1. In those models, such as the spherical p-spin model, where variables are used with continuous locally unbounded magnitude as a proxy for magnetic spins in spin-glasses or density fluctuations in structural glasses, in order to be able to perform analytical computations, the interaction network is usually fully connected, thus hindering any sort of magnitude localization. Indeed, this kind of mean-field representation on a complete graph, together with a local potential (soft spins) or a global constraint (spherical spins) guarantees magnitude equipartition. Therefore a careful investigation of how magnitude localization coexists with replica symmetry breaking in disordered systems is a gap that needs to be filled. Once again, the spin-glass theory of random lasers represents a very fortunate research field for this kind of analysis. The fundamental variables, i.e. the amplitudes of the light modes, are naturally continuous and locally unbounded; the global constraint which they are subjected to is a quite natural requirement for the stationary regime of a lasing system (see Chap. <ref>); the presence of dilution is a direct consequence of the specific selection rule in light mode coupling, i.e. the FMC. Hence, these systems have all the ingredients necessary to exhibit a power condensation transition. As a disclaimer, it is worth stressing that this is not the generalization to light waves of the spatial wavefunction localization occurring in Anderson theory, that is known to be inhibited in 3D random lasers because of the vectorial nature of light waves <cit.>. Rather, it is a condensation of the overall magnitude on a few variables in a set of locally unbounded variables subjected to a global constraint, i.e. a global conservation law. This kind of condensation has already been observed and studied analytically in non-interacting systems <cit.>. When the value of this globally conserved quantity exceeds a given (non-universal) threshold the system undergoes a transition where a macroscopic fraction of the conserved quantity concentrates on a finite portion of the system. Given the clarification, from now on we will use the terms `localization' and `condensation' interchangeably to refer to the phenomenon of our interest. This chapter is organized as follows: first, a few details are provided regarding the condensation transition in non-interacting systems, pointing out the emergence of a pseudo-localized phase, which has been discovered in Refs. <cit.>; then we clarify to what extent, in the presence of an interaction network, the order of dilution is key to understand possible regimes of equipartition or localization of a globally conserved quantity; finally, results from the numerical simulations of the ML 4-phasor model are presented, revealing the presence of a hybrid phase analogous to the pseudo-localized phase of non-interacting systems, where the intensity of light modes is neither equipartitioned among all modes nor really localized on a few of them. 
§ PSEUDO-LOCALIZATION IN NON-INTERACTING SYSTEMS Condensation of a global quantity on a finite number of degrees of freedom has been found and very precisely described in the framework of large deviation calculations and ensemble inequivalence in the case of mass-transport models <cit.> or for bosonic condensates in optical lattices described by the Discrete Non-Linear Schrödinger Equation (DNLSE) <cit.>. The fundamental ingredient involved in the localization phenomenology for non-interacting systems is the presence of a global constraint. Let us consider, for instance, the DNLSE. We focus on the infinite temperature limit in which the hopping term, i.e. the kinetic two-body term in the Hamiltonian of the DNLSE, can be neglected. In this case, the partition function of the model reads Ω(μ,E) = ∫∏_i=1^N dψ_i dψ̅_i e^-μ∑_i=1^N |ψ_i|^2 δ(E - ∑_i=1^N |ψ_i|^4), where μ is the chemical potential. Clearly the quantity A = ∑_i=1^N |ψ_i|^2 represents the mass of the condensate, so that the partition function in Eq. (<ref>) corresponds to an ensemble where exact conservation of energy is enforced by means of a Dirac delta function, whereas mass is conserved only on average by means of a field μ. In the following, we will refer to the former way of imposing a global conservation law as “hard” constraint and to the latter as “soft” constraint. In the DNLSE, as in all cases where localization takes place, the physical quantity that localizes is the one controlled by the hard constraint, hence in this case the energy. It is only thanks to the global action of the constraint on the total energy that configurations with a strongly heterogeneous distribution of energy on lattice sites are allowed. The analytical calculations of Refs. <cit.> show that as soon as energy is constrained above a certain critical value, E>E_c, these localized configurations dominate the partition function. It can be shown analytically, but it can be easily guessed by looking at Eq. (<ref>), that localization cannot take place for a quantity controlled “on average”. It is not possible to have strongly inhomogeneous fluctuations and/or localization of something which is controlled homogeneously by means of a Lagrange multiplier like the chemical potential in Eq. (<ref>). This is precisely the same mechanism characterizing the Bose-Einstein condensation, which is a form of localization in Fourier space: the condensed phase cannot be reached by controlling density with a chemical potential; density must be tuned directly, for instance, by decreasing the volume for a given number of bosons <cit.>. The fact that the Bose-Einstein condensation cannot be implemented by tuning the chemical potential of a reservoir in contact with the system is analogous to the fact that energy localization cannot be achieved by tuning the temperature of a thermostat, i.e. by studying the partition function where conservation of energy is imposed “on average”, as exp(-βℋ), rather than exactly, as δ(E-ℋ). In both the above examples localization entails lack of statistical ensemble equivalence: for the energy localization in the DNLSE it is the lack of equivalence between fixed temperature (canonical) and fixed energy (microcanonical) ensembles, for the Bose-Einstein condensation it is the lack of equivalence between fixed chemical potential (grand canonical) and fixed density (canonical) ensembles.
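The impossibility of condensation under a soft constraint can be checked directly in the simplest setting: if the mass is controlled only by a chemical potential, the single-site measure factorizes and the participation ratio of the local energies |ψ_i|^4 decays as 1/N. The sketch below illustrates this point (names and sizes are ours; it does not attempt the hard-constraint case, which requires constrained sampling).

```python
import numpy as np

def soft_constraint_y2(N, mu=1.0, n_samples=200, seed=3):
    """Participation ratio of e_i = |psi_i|^4 in the factorized (soft-constraint)
    ensemble exp(-mu * sum_i |psi_i|^2): no condensation, Y2 ~ 1/N."""
    rng = np.random.default_rng(seed)
    y2 = np.empty(n_samples)
    for s in range(n_samples):
        psi = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2.0 * mu)
        e = np.abs(psi) ** 4
        y2[s] = np.sum(e ** 2) / np.sum(e) ** 2
    return y2.mean()

for N in (64, 256, 1024):
    print(N, N * soft_constraint_y2(N))   # roughly constant, i.e. Y2 ~ 1/N
```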
One important feature that has been predicted for the DNLSE <cit.> is the existence of a pseudo-localized phase similar to what we are going to show in numerical simulations of the ML 4-phasor model. The peculiarity of the localization phenomenon in the case of the DNLSE is that it takes place in two steps. First, by increasing the energy, i.e. the quantity which controls localization, one first encounters a second-order transition at a value of the energy – E_ th – where the equivalence of ensembles breaks down and temperature becomes negative. Then, at a larger value of energy, E_c > E_th, there is a first-order transition to a localized phase. For E∈[E_ th,E_c] the system finds itself in a pseudo-localized phase, where a thermodynamic anomaly is indicated by the lack of ensemble equivalence and the presence of negative temperature, but localization has not been really achieved yet. The main difference between the ML 4-phasor model of the glassy random laser and DNLSE, is that in the random laser case the joint distribution of the variables over which the global constraint is imposed is not factorized. Therefore, the analytical results discussed in <cit.> cannot be straightforwardly extended to this case. In the following, first we try to derive some understanding of localization in generic interacting systems; then, we resort to numerical simulations in order to probe the localization transition in the ML 4-phasor model. § SCALING ARGUMENT FOR THE OCCURRENCE OF INTENSITY LOCALIZATION IN INTERACTING SYSTEMS In this section we go through a scaling argument on generic p-spin interacting systems (both with ordered and disordered interactions) with continuous variables and a global constraint, of which the ML 4-phasor model with the spherical constraint (<ref>) is a special case. Let us consider the p-spin Hamiltonian ℋ[σ] = -∑_k_1… k_p^# N^A J_k_1… k_p σ_k_1…σ_k_p, whose N continuous spin variables σ are subjected to a generic ρ-metrical constraint (e.g., ρ=2 is the case of the spherical constraint) ∑_i=1^N σ_i^ρ =N, where N^A on top of the sum, with A∈ [1,p], denotes the scaling with the size of the number of p-uples contributing to the energy and the spin indices k_i run from 1 to N. If A=p we have a fully-connected interaction graph, i.e. each spin contributes in (N^p-1) p-uples. At the other extreme, if A=1 the graph is sparse, i.e. each spin only interacts in a finite number of p-uples, not growing with the size of the system. All dilutions in between will be considered hereafter. For simplicity, we will take the variables as real valued here, yet keeping the word intensity for the spin magnitude |σ|. We notice that the glassy random laser is a model belonging to this family, with p=4 but with complex spins. Though complex variables yield new physical features (see Chap. <ref>), we stress that these are not relevant for what concerns the influence of connectivity on the possible onset of intensity localization. §.§ Ordered Couplings First we consider the ordered case J_k_1… k_p=J. The typical ground state spin configurations in the canonical ensemble, where equipartition is expected to hold (in the average, not strictly), are those minimizing Eq. (<ref>). The energy is extensive E=ℋ[σ_ gs]=𝒪(N) provided that the coupling constant scales as J∝1/N^A-1. In the occurrence of intensity localization, that is, if only a few modes take the overall intensity, equal to N according to Eq. 
(<ref>), whereas all the others are zero, we are interested in the energy contribution of a localized spin configuration. First of all, let us notice that, in order to have a non-zero contribution to the energy, the intensity must localize on at least p mutually coupled spins. If we denote by □ such a localizing p-uple, the intensity-localized configuration is {σ_ loc}: σ_k ∝ N^1/ρ for k ∈□, σ_k = 0 for k ∉□. According to Eqs. (<ref>), (<ref>) the energy of such a configuration of spins scales with N like E_ loc=ℋ[σ_ loc] = 𝒪(N^p/ρ/N^A-1) = 𝒪(N^p/ρ+1-A). To figure out whether intensity condensation might occur and dominate, one eventually has to compare the scaling behaviors of the energies of an equipartitioned and a localized configuration: 𝒪(N) vs. 𝒪(N^p/ρ+1-A). Hence, we notice that the kind of global constraint imposed is also key to understanding whether a localization transition may take place. The following cases occur for ρ=2, depending on the interaction connectivity scaling N^A: * A>p/2. Any possible intensity-localized configuration of spins would yield subextensive contributions to the energy. The equipartition regime is, therefore, dominant. The case A=p is the fully connected interaction graph. * A=p/2. Both kinds of spin configurations yield an 𝒪(N) contribution to the energy. In this case a pseudo-localized phase might occur. * A<p/2. Intensity localization provides the most prominent contribution to the energy, that is, 𝒪(N^>1). The case A=1 is the sparse case. §.§ Quenched Disordered Couplings If the interaction couplings J_k_1… k_p are quenched disordered, independently distributed and with zero mean, the typical ground state of the Hamiltonian (<ref>) is extensive – E=ℋ [σ_ gs]=𝒪(N) – provided the variance of the coupling distribution scales like 1/N^A-1. If the total intensity of the system is localized in a single interacting p-uple, as in (<ref>), Eqs. (<ref>), (<ref>) imply that the energy scales with the size like E_ loc=ℋ[σ_ loc] = 𝒪(N^p/ρ/N^(A-1)/2). Comparing the equipartitioned contribution E=𝒪(N) and the localized contribution E_ loc we find for ρ=2 the following three regimes as the exponent A varies: * A=p. Localized energy contributions are subextensive. The equipartition regime is dominant. This is the fully connected interaction graph case. * A=p-1. Both kinds of spin configurations yield an 𝒪(N) contribution to the energy. This is the case, e.g., of the mode-locked glassy random laser, where p=4, A=3. In this case one might conjecture the occurrence of a pseudo-localized phase. * A<p-1. Intensity localization provides superextensive contributions to the energy. In Fig. <ref> we summarize the predictions of the scaling argument in a pictorial diagram with the known cases of equipartition, localization and pseudo-localization for the 4-spin model. § EVIDENCE OF PSEUDO-LOCALIZATION IN THE ML 4-PHASOR MODEL Let us turn back to our system. Besides the presence of interactions, there is another important difference between the glassy random laser and the case of the DNLSE. In the partition function of the ML 4-phasor model, which reads as Z_N(β,ℰ) = ∫∏_k=1^N da_k da̅_k e^- βℋ[ a]δ( ℰ- ∑_k=1^N |a_k|^2 ), with ℋ given by Eq. (<ref>), the conservation of energy is realized on average, by imposing homogeneously a temperature for all interacting quadruplets, while the conservation of the total intensity is realized exactly, by means of the hard global constraint (<ref>).
It is then possible to guess that in the ML 4-phasor model a localization transition might occur at the level of intensity, rather than energy. If that were the case, though, the algorithms used in our numerical simulations would not be suitable anymore. An intensity-localized system, indeed, would display a small, fixed number of variables whose energy fluctuations are no longer dependent on temperature. In a way analogous to the condensation in the DNLSE, intensity localization is achieved by tuning the physical quantity ℰ = ϵ N controlled by the overall hard constraint (<ref>). More precisely, when the controlling parameter of the constraint exceeds a certain threshold, ℰ>ℰ_c, configurations for which a finite amount of the overall intensity is stored in 𝒪(1) quadruplets might become thermodynamically dominant. However, as already discussed in Chap. <ref>, numerical simulations are performed by keeping the optical power fixed (ϵ=1) and varying the temperature T or the pumping rate 𝒫. We notice that this is equivalent to sampling configurations from the equilibrium distribution P[â] ∝ e^-ℋ[â]δ( 𝒫 N - ∑_k=1^N |â_k|^2 ), as one can see by performing the change of variables â_k = a_k β^1/4. A decrease (increase) in T (in 𝒫) can equivalently be read as an increase of the spherical constraint value. Therefore, the occurrence of a localization transition will be revealed in terms of a critical temperature T_c or, equivalently, a critical pumping rate 𝒫_c. Qualitative information about the presence of a localization transition can already be traced in the behaviour of the emission spectra as the temperature is lowered (or, equivalently, the pumping rate is increased), see Fig. <ref>. It can be clearly seen that, as the pumping is increased, the overall intensity becomes heterogeneously distributed among the modes. This might hint at a localization phenomenon in the intensity, but it is not enough to establish it. In the following sections the analysis is refined by introducing and studying suitable observables for localization and equipartition-breaking transitions. Data refer to the simulations of the ML 4-phasor model with PBC, whose details are reported in Table <ref>. §.§ Participation Ratio Whether the system truly localizes or not can be ascertained only from the study of the participation ratio, i.e. the localization order parameter, which for our system is defined as Y_2 = ⟨∑_k=1^N I^2_k /(∑_k=1^N I_k)^2⟩ =1/N^2⟨∑_k=1^N I^2_k ⟩, where I_k = |a_k|^2 and we have used the fact that ∑_k=1^N I_k = N because of Eq. (<ref>), with ϵ=1. In this case, it is irrelevant whether the intensities are normalized by the square root of the temperature (as done for the spectra in Eq. (<ref>)), since this factor cancels between numerator and denominator. As in the previous chapters, ⟨·⟩ denotes the thermal average, i.e. the average computed over all the sampled uncorrelated equilibrium configurations. The dependence of Y_2 on the number of degrees of freedom can be easily rationalized in two extreme situations: equipartition and localization of the mode intensities. Let us consider localization first: in this case a finite fraction of the whole intensity is taken by a finite number of modes that does not increase with N. That is, in the localized phase, a few modes k have intensity I_k ∝ N, whereas all the others have vanishing intensity. Then, we have ∑_k=1^N I^2_k≃∑_k∈ loc. modes I^2_k∝ N^2.
This implies that in a localized phase the participation ratio Y_2 in the limit N→∞ is a constant that does not depend on N: localization ⟺ lim_N→∞Y_2 = const. On the contrary, in the equipartition phase of nearly homogeneous spectral intensities, each of the N modes has intensity I_k=𝒪(1), so that in the thermodynamic limit equipartition ⟺  Y_2 ∼1/N. Now we are ready to display, in Fig. <ref>, the first important quantitative information obtained from the study of the equilibrium distribution of the intensity among the modes. In the figure we have plotted for convenience the average over quenched disorder of N Y_2(T), which we expect to be 𝒪(1) in the equipartition phase and 𝒪(N) in a possible localized phase. We observe that in the high temperature phase NY_2∼ const. and, therefore, the system is in the equipartition regime. Below the critical point, indicated by the vertical line at T_c=0.61, which is the glass transition temperature (see <cit.>), we find, instead, an anomaly. Although the main panel of Fig. <ref> gives a clear indication that below the glass temperature NY_2(T) grows with the system size, the collapse of the data in the inset shows that the growth is definitely slower than N. Therefore, the regime is not localized, though it is not equipartitioned, either. In fact, for T≲ T_c we find N Y_2(T)∼ N^1-Ψ, with a value close to Ψ≃ 1/3. This means that, since Y_2 = 1/N^2∑_k=1^N ⟨ |a_k|^4 ⟩∼ N^-Ψ, those modes k on which the intensity is mostly concentrated scale like |a_k|^4 ∼ N^2-Ψ, i.e. |a_k|^2 ∼ N^(2-Ψ)/2. The difference from the localization scaling ∼ N cannot be accounted for as a finite-size effect, as one might hypothesize when estimating the critical exponents of a second-order phase transition. Those effects are usually due to the cutoff of long-wavelength fluctuations in a finite simulation lattice. Localization is, instead, controlled by a first-order mechanism where a finite fraction of the whole localizing quantity concentrates on a few variables, in such a way that Y_2 is strictly independent of N. In the glassy phase we thus have a regime that might show some signature of incipient localization, but is certainly not intensity localized. Intensity equipartition is broken, but no finite group of modes (i.e. one whose size is independent of N) takes all the intensity of the system. §.§ Spectral Entropy What can play the role played by the temperature in the DNLSE, helping us to recognize that we are in a non-trivial phase, rather than in a localized one? A possible answer is to look for an indicator of equipartition. In Ref. <cit.> both the spectral entropy and the effective number of degrees of freedom were considered for the ML 4-phasor model. The spectral entropy is defined as S_sp = - ∑_k=1^N ℐ̂_k ln(ℐ̂_k), where ℐ̂_k is the thermally averaged intensity of mode k normalized to the total intensity of the spectrum, ℐ̂_k = ⟨I_k ⟩/∑_k=1^N ⟨I_k ⟩ = ⟨ |a_k|^2 ⟩/(Nϵ). The effective number of degrees of freedom, which is a function of the spectral entropy and is more easily interpreted, is defined as n_eff = e^S_sp/N . The behaviour of n_eff, averaged over quenched disorder, is reported in Fig. <ref> and shows the clear signature of a phase transition, where equipartition breaks down, at the same critical temperature where the glass transition takes place. The transition is first-order, as shown by the analysis of the Binder and bimodality parameters of the probability distribution of n_eff. This study has been performed on the first simulations of the ML 4-phasor model in Ref.
<cit.>. We notice that in the high temperature phase all the curves perfectly approach one, while they decrease to a size-dependent quantity for low temperatures: the larger the size, the more steeply n_eff decreases. In the inset, we plot the rescaled effective number of degrees of freedom for the three largest sizes. In the low temperature phase they collapse onto each other with an exponent[In order to avoid any possible source of confusion, we stress that this exponent has nothing to do with the exponent α of the specific heat peak scaling, obtained in the previous chapter.] α=1/3. Thus, in the thermodynamic limit, this quantity tends to 0 in the low temperature phase, marking the breaking of equipartition. The transition taking place at T_c, besides being a static glass transition, as discovered in Refs. <cit.> and discussed in the previous two chapters, can be characterized as a transition to a phase with a thermodynamic anomaly consisting in the breaking of equipartition, in analogy with the breaking of ensemble equivalence in a non-interacting system such as the DNLSE. §.§ Amplitude Marginal Distribution The fact that this thermodynamically anomalous phase with lack of equipartition is, indeed, a phase with incipient localization is signaled, as in the case of the DNLSE (see <cit.>), by the non-monotonic shape of the spectral intensity distribution P(I_k). In Fig. <ref> we display P(I_k) for single instances of quenched disorder for the size N=82 at the lowest simulated temperature T=0.3. We notice that some of the samples (such as samples 2, 3 and 4) exhibit a clear peak in the tail of the distribution, corresponding to the accumulation of intensity on a single mode. The position of the peak is sample dependent. On the other hand, many samples behave like sample 1, exhibiting only a deviation from monotonicity in the marginal intensity distribution. Other samples behave in an intermediate way, such as sample 5: the peak is developing, but still barely visible. The effect of averaging over disorder is reported in the sixth panel. Finally, Fig. <ref> shows the behavior of the averaged P(I_k) for the size N=82 upon lowering the temperature. Notice that the variance of the marginal distribution of the intensities is related to the participation ratio (<ref>). This can be seen in the following way: by denoting the sample average over P(I_k) as ⟨·⟩_I, which accounts both for the thermal average and for the average over the modes, the variance of I_k is given by σ_I^2 = ⟨ (I_k-⟨ I_k ⟩_I)^2 ⟩_I = ⟨ I_k^2 ⟩_I - ⟨ I_k ⟩_I^2 = 1/N⟨∑_k=1^N I_k^2 ⟩ - 1/N^2⟨∑_k=1^N I_k ⟩^2 = 1/N⟨∑_k=1^N I_k^2 ⟩ - 1, where the spherical constraint has been used. Therefore, we have 1/N⟨∑_k=1^N I_k^2⟩ = σ_I^2 + 1. Since Y_2 has the expression in Eq. (<ref>), we find that NY_2 = 1+σ_I^2. PART: Analytical Approach CHAPTER: THE MERIT-FACTOR PROBLEM In the first part of this work, we have presented results obtained through Monte Carlo numerical simulations of the ML 4-phasor model for optical waves in random media. In particular, we have shown that, notwithstanding the deterministic dilution of the interaction network induced by the FMC, the model is still compatible with mean-field theory, though, up to the accuracy of our analysis, it may not be in the same universality class as the REM. However, as already mentioned, the analytical solution of the model is not achievable through standard mean-field techniques for disordered systems.
Consider, for instance, the replica method: due to the dilution, the heterogeneities induced by the quenched disorder do not disappear as in fully connected mean-field models after averaging over the couplings and a nasty dependence on the site indices remains in the computation of the free energy, which impairs the introduction of the usual global order parameter of the glass transition, i.e. the configuration overlap between replicas. Much of this discussion will be made more explicit in Chap. <ref>. Furthermore, implementing the model on a sparse graph and adopting the cavity, or belief propagation, methods <cit.> is not possible in this case, because the variables are not locally constrained and intensity localization would occur because the interaction network is too diluted, as thoroughly illustrated in Sec. <ref>. In order to develop the analytical technique to address the solution of the ML 4-phasor model, in the present chapter we temporarily turn to a different problem, which, in fact, has some striking formal resemblance to our model. The Merit Factor problem (MF), see Ref. <cit.> for a survey paper on the topic, is a long standing problem in digital sequence design, with applications in many communication engineering problems, such as synchronization, pulse formation and especially radar <cit.>. The problem lies in finding Low Autocorrelation Binary Sequences (LABS), according to some suitable measure <cit.>. The merit factor was first introduced by Golay <cit.> as an important measure of the kind, which is maximized by the LABS. Though an upper bound has been conjectured <cit.>, the problem of finding the merit factor highest value has resisted decades of attempts by mathematicians and it is still an open issue <cit.>. Interestingly, the MF problem is not only related to the LABS: determining the best asymptotic merit factor is also an unsolved problem in complex analysis, which was proposed by Littlewood <cit.> even before Golay's definition and until the early 00's was studied along independent lines. From the point of view of theoretical physics, a major contribution to this line of research was given by Bernasconi in Ref. <cit.>, where the problem of finding sequences which maximize the merit factor has been reformulated in statistical mechanics terms as the problem of determining low-energy configurations of a specific spin model with long range 4-spin interactions. The Bernasconi model represents the formal connection between the MF problem and the problem of finding the solution of the glassy random laser, in which we are primarily interested. The non linear 4-body nature of the couplings is the first feature shared between the two systems, though in the case of the Bernasconi model the couplings are antiferromagnetic, rather than extracted from a zero mean, symmetric probability distribution. Moreover, the most important resemblance between the models is that the interaction graph of the Bernasconi model has the same structure determined by the FMC on the mode-locked graph, a quite fortunate occurrence, given the very different fields from which the two models originated. To be more specific, if one starts with a 4-spin antiferromagnet defined on the fully connected graph, with tetrads of spins denoted by k = {k_1,k_2,k_3,k_4} in order to obtain the Bernasconi model, one should dilute the interaction network with the rule k_1-k_2+k_3-k_4=0, which is precisely the FMC in the case of a linear comb (cf. Eq. (<ref>)). 
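A minimal numerical check of this correspondence is reported below (Python; it uses the periodic autocorrelations R_k = ∑_j s_j s_(j+k) mod N and the cost function introduced in the next section, and all names are our own). Degenerate index combinations are kept in the quadruplet sum, which is why the two evaluations coincide exactly at finite N; the strictly four-body antiferromagnet discussed in the text matches only in the large-N limit.

```python
import numpy as np
from itertools import product

def periodic_energy(s):
    """H = sum_{k=1}^{N-1} R_k**2 / (N-1), with R_k the periodic autocorrelation."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    R = np.array([np.dot(s, np.roll(s, -k)) for k in range(1, N)])
    return np.sum(R ** 2) / (N - 1)

def quadruplet_energy(s):
    """Same quantity as a 4-spin sum over quadruplets (i, i+k, j, j+k) sharing the
    same shift k in [1, N-1], i.e. the rule k1 - k2 + k3 - k4 = 0 taken mod N.
    Coincident indices are kept, so the match with periodic_energy is exact."""
    s = np.asarray(s, dtype=float)
    N = len(s)
    total = 0.0
    for k in range(1, N):
        for i, j in product(range(N), repeat=2):
            total += s[i] * s[(i + k) % N] * s[j] * s[(j + k) % N]
    return total / (N - 1)

rng = np.random.default_rng(1)
seq = rng.choice([-1.0, 1.0], size=12)
assert np.isclose(periodic_energy(seq), quadruplet_energy(seq))
print("energy:", periodic_energy(seq),
      "merit factor:", len(seq) ** 2 / (2 * (len(seq) - 1) * periodic_energy(seq)))
```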
Our perspective is to take advantage of these similarities between the two models in order to develop and test, in a simpler setting, the analytical methods with which we aim to address the solution of the ML 4-phasor model. After Bernasconi's reformulation, the merit factor problem has captured the attention of physicists working in the field of spin glasses and disordered systems, see Refs. <cit.>. The model proposed by Bernasconi belongs to a class of models which exhibit frustration and glassy features without structural disorder; besides the reference just cited, see also the interesting case studied in Ref. <cit.>. Indeed, the finite-size analysis performed in Ref. <cit.> with the simulated annealing procedure provides results which are compatible with the properties of systems characterized by complex energy landscapes, leading to the conjecture of an ergodicity-breaking phase transition at finite temperature. The outcome of the numerical analysis performed in Ref. <cit.> on a slightly modified version of the Bernasconi model points in the same direction, a finding which has led the authors to develop a technique based on the introduction of random unitary matrices into the model, which allows one to perform a replica computation. It is precisely this technique that we aim to carefully review and master in order to apply it to the case of the ML 4-phasor model. In the first part of this chapter, after introducing the model and presenting a high temperature approximation, which was already discussed in Refs. <cit.>, we perform new numerical analyses of the model, broadening the results of Ref. <cit.>. In the second part of the chapter, we complete the replica analysis of the model, along the lines of Ref. <cit.>, and draw some tentative conclusions on the low temperature nature of the model. § THE BERNASCONI MODEL Consider a binary sequence of length N denoted by s = {s_1,…,s_N}, where the variables are Ising spins s_i = ± 1. The autocorrelation of the sequence at distance k is given by the scalar product of the sequence with itself shifted by k. Two kinds of autocorrelations can be defined, leading to two versions of the model: aperiodic correlations R_k = ∑_j=1^N-k s_j s_j+k, where the summation has to be stopped at N-k, and periodic correlations R_k = ∑_j=1^N s_j s_(j+k-1)(mod N)+1, where the summation contains N terms at distance k and (j+k-1)(mod N)+1 is a formal way to implement periodic boundary conditions. If we think of the sequence as a chain, then the model with aperiodic correlations is defined on an open chain, whereas the model with periodic correlations is defined on a closed chain. Historically, the problem of determining the LABS was defined for aperiodic correlations. The quality of a LABS can be measured either by minimizing the quantity max{ |R_k|, k ≠ 0 }, or by maximizing the merit factor <cit.>, which is defined as F = N^2/(2 ∑_k=1^N-1R_k^2). In the Bernasconi model <cit.>, one considers the equivalent problem of minimizing a cost function proportional to the inverse of the merit factor, ℋ = 1/(N-1) N^2/(2F) = 1/(N-1)∑_k=1^N-1 R_k^2, which can be considered to represent the energy function of a one-dimensional spin system with long-range 4-body interactions. In this formulation the MF problem turns into an optimization problem with which we are more familiar in statistical mechanics. Following Ref.
<cit.>, in the rest of this work we will only be concerned with the periodic model, due to some particular features which allow a deeper investigation and a generalization to the ML random laser models. We, thus, consider the Hamiltonian ℋ = 1/(N-1)∑_k=1^N-1 R_k^2 = 1/(N-1)∑_k=1^N-1∑_i=1^N∑_j=1^N s_i s_i+k s_j s_j+k, where we implicitly assume periodic boundary conditions. Incidentally, we notice that this Hamiltonian is equivalent in the large-N limit to the Hamiltonian obtained from a fully-connected 4-spin antiferromagnet, ℋ = 1/(N-1)∑_i_1,i_2,i_3,i_4|⋆^1,N s_i_1 s_i_2 s_i_3 s_i_4 , with ⋆: i_2-i_1 = i_4-i_3 ∈ [1,N-1], which is diluted with the selection rule i_1-i_2+i_4-i_3 = 0. For this reason, we recognize in this problem the same topology as the mode-locked graph. §.§ The Golay-Bernasconi Approximation In this section, we discuss the Golay-Bernasconi (GB) approximation <cit.> of the periodic model, in which the correlations R_k are assumed to be Gaussian distributed independent random variables with variance N, i.e. extracted from p(R) = e^-R^2/(2N)/√(2 π N). We notice a certain resemblance between this approximation and the way the REM <cit.> is built: here the correlations, rather than the energy levels, are taken as independent random variables, but the spirit is the same. However, while the REM is defined at all temperatures and exhibits a non-trivial low temperature behavior, in the GB approximation a negative entropy is found at finite temperature, which is not acceptable with discrete variables. Hence, we are dealing with a high temperature approximation, which breaks down when the entropy becomes negative. Notice that the periodic correlations satisfy R_k = R_N-k. This is very easy to see as follows: by applying the operation mod N whenever the summation index yields a value greater than N in the subscript of the spins, we have R_N-k = ∑_i=1^N s_i s_i+N-k = ∑_i=1^N s_i+k s_i+N = ∑_i=1^N s_i+k s_i = R_k, where we have used i → i+k and (i+N) (mod N) = i. Therefore, we can rewrite the Hamiltonian of the periodic model by taking into account only one half of the contributions and multiplying by a factor two. If N is odd, we can write ℋ = 2/(N-1)∑_k=1^(N-1)/2 R_k^2, whereas if N is even ℋ = 2/(N-1)∑_k=1^(N-2)/2 R_k^2 + 1/(N-1) R_N/2^2, as can be checked with a simple renaming of the summation indices. However, since the difference between the two cases is completely irrelevant in the large-N limit, in the following we use the expression of the Hamiltonian for N odd, which is easier to handle. The solution of the model in the GB approximation is immediate. The partition function can be computed as follows 𝒵 = ∑_s e^-βℋ[s] = ∑_sexp[-2 β/(N-1)∑_k=1^(N-1)/2 R_k^2(s)] = ∑_s∫∏_k=1^(N-1)/2[ dR_k δ( R_k - ∑_j=1^N s_j s_j+k) ] exp[-2 β/(N-1)∑_k=1^(N-1)/2 R_k^2 ] = ∫∏_k=1^(N-1)/2[ dR_k e^-2 β R_k^2/(N-1)] ∏_k=1^(N-1)/2[ ∑_s δ( R_k - ∑_j=1^N s_j s_j+k) ] , where we have changed variables using the relation 1 = ∏_k=1^(N-1)/2∫ dR_k δ( R_k - ∑_j=1^N s_j s_j+k). The quantity ∑_sδ(R_k - ∑_j=1^N s_j s_j+k) accounts for how many times a certain value R_k is found over the configurations: it is therefore a measure of the entropy of the variable R_k. Given the statistical independence of the correlations, this quantity is just 2^N times the probability of R_k and therefore we have that 𝒵 = 2^N ∏_k=1^(N-1)/2∫dR_k/√(2 π N)exp[-( 1/(2N) + 2 β/(N - 1)) R_k^2] = 2^N ((N-1)/(N-1+4β N))^(N-1)/4, which in the large-N limit eventually yields 𝒵 = exp[ N (log 2 -1/4log(1 + 4β) ) ].
From the partition function we can deduce the behavior of the thermodynamic observables of the model. The expressions of the free energy, entropy and energy densities in the GB approximation are reported below: f(β) = -1/(β N)log𝒵 = -1/βlog 2 + 1/(4 β)log(1 + 4β), s(β) = β^2 ∂ f/∂β = log 2 - 1/4log (1 + 4β) + β/(1 + 4β), u(β) = ∂ (β f)/∂β = 1/(1+4β). It is clear, then, that the approximation breaks down at low temperature, since the entropy becomes negative for β > 10.3702, i.e. T < 0.0964, and this is not possible for a model with discrete variables. As shown in Ref. <cit.>, the GB approximation can be recovered in the high temperature regime of a disordered model with a Hamiltonian that looks like Eq. (<ref>), but where the variables R_k are given by R_k = ∑_i,j^N J^(k)_ij s_i s_j, where the J_ij^(k) are random connectivity matrices, independent for different k's, whose values are extracted from some probability distribution. The same kind of strategy was developed also in Ref. <cit.> for the aperiodic model. The replica analysis of the disordered model reveals a phenomenology which is compatible with the REM, with a stable 1RSB solution and zero entropy at the transition point. The physical interpretation behind this scenario might be that of “self-induced” disorder <cit.>. However, following Ref. <cit.>, we notice that this result is not sufficient to draw definitive conclusions about the low temperature behavior of the original model without quenched disorder. The disordered model defined by Eqs. (<ref>) and (<ref>) has to be regarded just as a test model, which is only capable of reproducing a high temperature approximation to the deterministic model. It would be too simplistic (if not wrong) to think that the glassy phase of this model provides an explanation of the complex low temperature behavior of the Bernasconi model. § NUMERICAL STUDY This section is devoted to presenting some numerical results obtained by studying the Hamiltonian (<ref>) with periodic correlations. The ground state of the model is not known in general and is still the object of research: the effort involves both number theory approaches <cit.> and extensive searches <cit.>. We do not claim to compete with the most recent achievements in the field, but rather aim to replicate and extend the finite-temperature study proposed in Ref. <cit.>, in order to improve our knowledge of the model phenomenology. Although no systematic procedure to construct ground state configurations for a generic size is known, ad hoc constructions based on number theory exist for some specific values of N. One of these constructions works for prime numbers of the kind N=4n+3, with n ∈ℕ. In this case configurations with the lowest energy are given by the Legendre sequences σ_k = k^(N-1)/2 mod N, which gives σ_k = ± 1 for all k except k=N, where σ_N=0. This is a consequence of a theorem by Fermat <cit.>, which states that, unless k is a multiple of N, k^N-1≡ 1 (mod N), so that in this case k^(N-1)/2≡± 1 (mod N). To obtain a legal binary sequence, then, one has to replace the last bit, which is zero, with ± 1 and see what happens to the energy. This operation increases the energy by a finite amount with respect to the value computed before changing the last bit, apart from the lucky case N=4n+3, which leaves the energy untouched. The degeneracy of the ground state (and of the other energy levels as well) is related to the symmetries of the Hamiltonian[The Hamiltonian of the Bernasconi model is invariant under translation, i.e.
the shift of all the spins of a given number of positions, and under parity, i.e. the flipping of all the spins. Actually, the reflection of a configuration is another symmetry of the Hamiltonian, but it can be obtained as a combination of translation and parity, hence not contributing to the total degeneracy.]. Other ground states can be constructed from linear shift register sequences based on primitive polynomials over Galois fields. This construction requires N=2^p - 1 with p prime, see <cit.>. In the case where no such constructions exist, one may resort to brute force algorithms: for a given size N, one lists all the 2^N possible sequences and computes the corresponding values of the energy. Then, the ground state configurations can be found by sorting the obtained values. The computation of the energy through the long-range Hamiltonian (<ref>) is very demanding, since it requires O(N^3) operations. Even considering the degeneracy of the energy levels, in order to exclude from the list of configurations those connected by the symmetries of the Hamiltonian, one cannot reach very high values of N in reasonable times. To the best of our knowledge, the largest size studied with this method is N=66 in Ref. <cit.>, where although parallel computing is also exploited, it took 55 days of machine time to obtain the results. Alternatively, one can resort to Monte Carlo optimizations of the Hamiltonian to find approximately optimal sequences for arbitrarily large values of N. In this work, we used a simple exhaustive search algorithm and studied sizes up to N=32. We are interested in all energy levels, not only the ground state, since as soon as the temperature is added to the system, there is a finite probability of finding the system in a level with higher energy than the lowest one. The finite-temperature behavior of the model can be deduced from the density of states expressed as a function of the energy E. We, then, build the histograms p_N(E) = ∑_sδ(E - _N[s] ), in terms of which the canonical partition function of the model can be written as _N(β) = ∑_s e^-β_N[s] = ∫ E p_N(E) e^-β E, where the subscript N is just a reminder of the finite size nature of the quantities here studied. From the partition function, we get the free energy of the model, from which, in turn, we derive the specific heat. In Fig. <ref> we display the energy and the specific heat for most of the sizes studied. Fluctuations from one volume size N to a similar one are large and macroscopic. Such fluctuations forbid any simple extrapolation to the limit N →∞ from the sizes analyzed. They decrease however for increasing N. The pronounced peak in the specific heat strongly suggests that in the infinite volume limit the system undergoes a phase transition. However, the position in temperature of the specific heat peak changes a lot from size to size and seems to decrease towards a small value of T, even if in a irregular pattern, which makes it difficult to extrapolate an estimate of the critical temperature of the model. This becomes clearer if one selects specific sequences of sizes: for instance in Figs. <ref> we plot the thermodynamic observables for even sizes increasing with a Δ N = 4, and in Fig. <ref> for the sequence of “good primes”, whose ground state can be constructed analytically. Finally, we have computed the overlaps between all the configurations belonging to the ground state and to the first excited state. In Fig. 
<ref> we display the histograms built with the overlap frequencies for some of the studied sizes. The fact that the overlap can take many values is evidence of a nontrivial structure of the ground state: most of the configurations minimizing the energy are not correlated. This study, repeated for the first excited state, basically retraces the results obtained for the ground state, meaning that a small thermal excitation of the system does not change dramatically how the configurations are organized. Taken together, these results constitute the phenomenology of a phase transition to a complex low temperature phase, which is well captured by the theory of replica symmetry breaking in spin-glass models. However, given the small sizes considered, the output of this study cannot be taken as a proof that a phase transition occurs in the thermodynamic limit. § THE RANDOM-UNITARY MODEL The main reason why we focus on the periodic model is that it is suitable for an analysis in Fourier space <cit.>. Let us first introduce the Fourier-space version of the model and then discuss a first tool of investigation. The discrete Fourier transformation (DFT) of the spin variables and its inverse are defined as follows B_p = 1/√(N)∑_j=1^N e^i 2 π p/Nj s_j         s_j = 1/√(N)∑_p=1^N e^-i 2 π j/Np B_p, where the symmetric convention for the normalization has been adopted. Then, the correlations can be easily expressed in terms of the Fourier variables as R_k = ∑_p=1^N ∑_q=1^N e^-i 2 π k/N q B_p B_q δ_p,-q = ∑_p=1^N e^i 2 π k/N p |B_p|^2, where the definition of the Kronecker delta as the inverse DFT of 1 has been used, together with the fact that B_p^* = B_-p, which holds since the original spin variables are real. Hence, the periodic Hamiltonian (<ref>) can be rewritten as ℋ = ∑_p=1^N/2 |B_p|^4, where an irrelevant additive constant has been neglected. Moreover, the two previous expressions are correct in the large-N limit, while at finite N one should take into account the precise definition of the periodic correlations. A useful tool to gain physical insight into the model, which was implemented in Ref. <cit.>, is the high-temperature expansion. In principle this technique can be applied in a very straightforward way; in the present case, however, the non-locality of the interaction term causes complications, such as the fact that the expansion coefficients do not behave well in the large-N limit. It turns out that the procedure simplifies in Fourier space, allowing for the computation of the thermodynamic observables at the first orders in β, which represent the high temperature regime of the Bernasconi model better than the GB approximation does. This is expected, since the high temperature expansion does not require any ad hoc approximation, only a systematic expansion in powers of β. At this stage, one would like to define a model based on the Hamiltonian (<ref>) capable of: (i) resumming the high temperature expansion and (ii) showing (hopefully) a non-trivial low temperature phase that reproduces the complex phenomenology observed in numerical simulations. This has led the authors of Ref. <cit.> to introduce a model based on random unitary matrices, which we will refer to as the Random Unitary model and to which the rest of the chapter is dedicated. §.§ Definition of the Model The core idea of the Random Unitary model is to substitute the standard Fourier transformation of the spin variables with a generic unitary transformation.
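As an aside on implementation, the Fourier representation above is also the fastest numerical route to the correlations: up to normalization the R_k are the inverse DFT of |B_p|^2, so the energy can be evaluated in O(N log N) rather than O(N^2) operations. A minimal numpy sketch (ours):

```python
import numpy as np

def energy_fft(s):
    """Periodic Bernasconi energy via the DFT: the R_k are the inverse DFT of |F_p|^2."""
    N = len(s)
    F = np.fft.fft(s)                      # F_p = sum_j s_j e^{-2 pi i p j / N}
    R = np.fft.ifft(np.abs(F)**2).real     # R_k = sum_j s_j s_{j+k}, k = 0, ..., N-1
    return np.sum(R[1:]**2) / (N - 1)      # drop the trivial k = 0 term (R_0 = N)

rng = np.random.default_rng(1)
s = rng.choice([-1, 1], size=4096)
# agrees with the O(N^2) direct-sum evaluation of the correlations
print(energy_fft(s))
```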
In fact, the DFT matrix, which can be denoted as U_pj = (u^pj/√(N))_p,j=1,…,N with u=e^-2iπ/N, is just a particular choice of matrix belonging to the unitary group, and one may be interested in studying a more general case. In the language of Lagrangian mechanics, a random unitary transformation brings us from spin variables to a set of random generalized coordinates in the configuration space: in order to visualize this operation, one can think of a rotation with an arbitrary angle. In fact, the introduction of random rotations defines a very similar class of models, which were extensively studied in Refs. <cit.>, namely the Random Orthogonal models. In the language of disordered systems, the unitary group plays the role of quenched disorder, of which the DFT matrix is a particular instance. When passing from the DFT matrix to a generic unitary matrix, one has to pay attention to a subtle aspect of the procedure. In fact, if one starts with real variables, i.e. the spins s_i, the Fourier transformed variables B_p of Eq. (<ref>) satisfy the property B_p=B_-p, as a consequence of the particular expression of the Fourier transformation. In general, this is not true for a unitary matrix[The only property which defines a matrix representing an element of the unitary group is that U^†=U^-1, so that U U^†=U^†U=1. This leads to the fact that a unitary matrix can always be written in an exponential form, such as U=e^ih where h is a generic Hermitian matrix, i.e. h=h^†. However, it is not necessary that the elements of h satisfy the relation h_pj=h_j(-p) which implies the property of complex conjugation B_p=B_-p in the special case of a DFT matrix]. However, it turns out that this property of complex conjugation is crucial if one aims to reproduce the results of the high temperature expansion of the original model through a model defined with generic unitary matrices <cit.>. In other terms, if one just substitutes to Eq. (<ref>), the following relation B_p=∑_pj^N U_pj s_j where U_pj are the elements of a generic unitary matrix, one finds different results from the high temperature expansion already at the first order. One way to solve the problem is to introduce a model based on a double orthogonal transformation with Hamiltonian = ∑_p=1^N/2|A_2p-1 + i A_2p|^4, where the variables A are related to the variables B as B_p=A_2p+iA_2p+1 and are defined in terms of the spin variables as A_p = ∑_j=1^N O_pjs_j, with O_pj orthogonal matrices, over which we aim to integrate. We call this model double Random Orthogonal model. However, by following Ref. <cit.> we define N/2 complex spin variables in the following way τ_j = s_2j-1 + i s_2j and apply a random unitary transformation to τ_j. Notice that this is just a redistribution of the degrees of freedom of the theory, not changing the physical content of the model. Clearly, in this case, the unitary transformation has to be represented by a N/2× N/2 matrix. With these prescriptions, the Hamiltonian of the Random Unitary model is given by [τ] = ∑_p=1^N/2 |C_p(τ)|^4, where the dynamic variables of the model are the unitary-transformed of the complex spins τ_j: C_p = ∑_j=1^N/2 U_pjτ_j. §.§ Replicated Partition Function In this section we develop the replica computation for the model defined by the Hamiltonian (<ref>), where the dynamic variables C_p=∑_j U_pjτ_j are unitary-transformed of the complex spins τ_j = s_2j-1 + i s_2j and U_pj are the elements of a random matrix belonging to the unitary group. 
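Before averaging over U, it is worth noting that a single disorder sample of the model just defined is straightforward to generate and evaluate; the sketch below (ours; the QR-based recipe for Haar-distributed unitaries is a standard one, not taken from the text) draws one U, builds the complex spins τ_j, and computes the Hamiltonian of a random configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(n):
    """Haar-distributed n x n unitary via QR of a complex Gaussian matrix (standard recipe)."""
    A = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # fix the phases of the columns

def hamiltonian(s, U):
    """H[tau] = sum_p |C_p|^4 with tau_j = s_{2j-1} + i s_{2j} and C = U tau."""
    tau = s[0::2] + 1j * s[1::2]
    C = U @ tau
    return np.sum(np.abs(C)**4)

N = 64                                  # number of Ising spins (N/2 complex spins)
U = haar_unitary(N // 2)                # one instance of the quenched "disorder"
s = rng.choice([-1.0, 1.0], size=N)
print(hamiltonian(s, U))
```

The replica computation developed below performs the average of log 𝒵_U over such draws of U analytically.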
The matrices U play the same role in the computation as the quenched couplings in usual spin-glass models. The partition function of the model for a specific transform U is given by _U = ∑_τ e^-β[τ] = ∑_τexp[ - β∑_p=1^N/2 |C_p(τ)|^4 ], where ∑_τ = ∏_j=1^N/2∑_{τ_j} and the sum runs over the four possible values of the complex numbers τ_j. First of all, let us change variables in the computation of the partition function, by exploiting the relation 1 =∏_p=1^N/2∫ C_p C_p δ(C_p-∑_j=1^N/2 U_pjτ_j ) δ(C_p-∑_j=1^N/2U_pjτ_j ) = ∏_p=1^N/2∫ C_p C_p λ_p λ_p exp[ i λ_p (C_p-∑_j=1^N/2 U_pjτ_j ) - i λ_p (C_p-∑_j=1^N/2U_pjτ_j ) ], where we used the Fourier integral representation of the delta functions and neglected irrelevant constant factors. We have _U = ∫∏_p=1^N/2[ C_p C_p λ_p λ_p ] exp[- β∑_p=1^N/2 |C_p|^4 + ∑_p=1^N/2(i C_p λ_p - i C_p λ_p ) ] ×∑_τexp[ ∑_p=1^N/2∑_j=1^N/2(-i U_pjτ_j λ_p + i U_pjτ_j λ_p ) ]. The free energy of the model depends on the choice of the matrices U. Since we are interested in the typical behaviour of the system with respect to this source of randomness, the average of the free energy over the unitary group has to be computed. At this level, we expect the system to exhibit a low temperature glassy phase, which can only be revealed by computing a quenched average. This can be done through the replica method. Therefore our goal is to find the following quantity f(β)= lim_N→∞ - 1/β Nlog_U = lim_n → 0lim_N→∞ - 1/β nNlog_U^n, where (⋯) denotes the integration over the unitary group and, as usual, the two limits have been exchanged, in order to compute first the saddle point of the averaged replicated partition function. As a consequence, in the following we will use several times the fact that we are taking the large-N limit at finite n. Considering Eq. (<ref>), the n-th power of the partition function reads as _U^n = ∫∏_a=1^n ∏_p=1^N/2[ C_p^a C_p^a λ_p^a λ_p^a ] exp[-β∑_a=1^n∑_p=1^N/2 |C_p^a|^4 + ∑_a=1^n∑_p=1^N/2(i C_p^a λ_p^a - i C_p^a λ_p^a ) ] ×∏_a=1^n[∑_τ^a] exp[ ∑_a=1^n∑_p=1^N/2∑_j=1^N/2(-i U_pjτ_j^a λ_p^a + i U_pjτ_j^a λ_p^a ) ]. By defining the auxiliary variables Ω_pj = i ∑_a^n τ_j^a λ_p^a, the disorder dependent term of the replicated partition function can be compactly written as exp[ ∑_a=1^n∑_p=1^N/2∑_j=1^N/2(-i U_pjτ_j^a λ_p^a + i U_pjτ_j^a λ_p^a ) ] = exp[(U Ω^† + U^†Ω) ]. In order to average the replicated partition function, we have to compute an integral over the Haar measure of the unitary group. This problem was first encountered in the large-N limit of lattice gauge theories in Ref. <cit.>, where the authors considered an approach which is analogous to standard mean-field theory for magnetic systems and leads to the computation of partition functions of the kind Z = ∫ U U^†exp N [(U A^† + U^† A) ], where A is an arbitrary matrix source. This is exactly what we aim to compute, when averaging the right hand side Eq. (<ref>) over disorder, if we replace the generic matrix A with Ω/N. The integral was solved in full generality by Brezin and Gross and the result is reported in Eq. (33) of Ref. <cit.>. However, as noted in <cit.>, in the present case at finite non-zero n, only terms containing a single trace operator survive in the large N limit. Hence, the cited result reduces to exp[(U Ω^† + U^†Ω) ] = exp[N/2( Ω^†Ω/N^2) ], where is a function of the eigenvalues of a matrix defined as (z) = -log(1 + √(1+z)) + √(1+z). 
This is a main technical part of the computation: in Appendix <ref> we provide the demonstration of this result in a particularly simple case, where the integral can be directly performed with the saddle point method. As is usual in standard replica computations, the average over quenched disorder leads to the coupling of the originally independent copies of the system. The coupling among replicas suggests what are the global order parameters of the theory. In the present case, let us define the following overlap matrices _ab = 1/N∑_j=1^N/2τ_j^a τ_j^b       Λ_ab=1/N∑_p=1^N/2λ_p^aλ_p^b. The right hand side of Eq. (<ref>) can be expressed in terms of the overlaps as follows exp[N/2( Ω^†Ω/N^2) ] = exp[N/2(Λ) ]. The previous relation can be shown for the expansion of of the function , i.e. for any integer power K of its argument: ( Ω^†Ω/N^2)^K = 1/N^2K∑_j_1=1^N/2 (Ω^†Ω)^K_j_1j_1 = 1/N^2K∑_j_1,…,j_K^1,N/2∑_p_1,…,p_K^1,N/2Ω_p_1 j_1Ω_p_1 j_2⋯Ω_p_K j_KΩ_p_K j_1 = 1/N^2K∑_a_1,…, a_K^1,n∑_b_1,…, b_K^1,n∑_j_1,…,j_K^1,N/2τ_j_1^a_1τ_j_1^b_K⋯τ_j_K^a_Kτ_j_K^b_K-1∑_p_1,…,p_K^1,N/2λ_p_1^a_1λ_p_1^b_1⋯λ_p_K^a_Kλ_p_K^b_K = ∑_a_1,…, a_K^1,n∑_b_1,…, b_K^1,n_a_1b_KΛ_a_K b_K_a_K b_K-1⋯_a_2 b_1Λ_a_1b_1 = (Λ)^K, where we have used the fact that Λ_ab = Λ_ba. In principle, both the matrices defined in Eq. (<ref>) are Hermitian; however, since the overlaps of the original spin variables ∑_j=1^N s_j^(a)s_j^(b) are symmetric quantities, we can take both and Λ as real valued. The symmetry _ab=_ba (and analogously Λ_ab=Λ_ba) is indeed preserved by any replica symmetry breaking ansatz[In a replica symmetry breaking ansatz it is not the symmetry of the matrices for index exchange to be broken, but the symmetry under the n-dimensional permutation group of replicas.]; so even if we carry on the computation for complex order parameters, we will end up with real valued matrices at the level of the saddle point equations. In Eq. (<ref>), we change variables to the overlap matrix through the following relations 1 = ∏_a < b^1,n∫_ab δ(N _ab-∑_j=1^N/2τ_j^a τ_j^b) = ∫∏_a < b^1,n𝒬_ab∫_-i∞^+i∞∏_a < b^1,nN ℛ_ab/2 π ie^1/2∑_a ≠ b^n ℛ_ab( N𝒬_ab-∑_r^N/2τ_j^aτ_j^b ) for the off-diagonal terms and 1 = ∏_a=1^n ∫𝒬_aaδ(N𝒬_aa-N) = ∫∏_a=1^n 𝒬_aa∫_-i∞^+i∞∏_a=1^nN ℛ_aa/4 π i e^1/2∑_a^n ℛ_aa(N𝒬_aa-N) for the diagonal terms. Similarly, we use following relations for the overlap matrix Λ 1 = ∏_a < b^1,n∫Λ_abδ(NΛ_ab-∑_p=1^N/2λ_p^aλ_p^b) = ∫∏_a < b^1,nΛ_ab∫_-i∞^+i∞∏_a < b^1,nN ℳ_ab/2 π i e^1/2∑_a ≠ b^n ℳ_ab( NΛ_ab-∑_p^N/2λ_p^aλ_p^b) and 1 = ∏_a=1^n ∫Λ_aaδ(NΛ_aa-∑_p=1^N/2|λ_p^a|^2) = ∫∏_a=1^n Λ_aa∫_-i∞^+i∞∏_a=1^nN ℳ_aa/4 π i × e^1/2∑_a^n ℳ_aa(NΛ_aa-∑_p^N/2|λ_p^a|^2). All the delta functions have been represented in terms of their Laplace transformations through the integration over the Lagrange multipliers ℛ and ℳ respectively for 𝒬 and Λ. The factors 1/2 in front of the summations over the off-diagonal terms follow from the symmetry of the overlap matrices. By considering the previous relations all together and neglecting all constant prefactors and subleading terms, we have 1 = ∫∏_a ≤ b^1,n_abΛ_ab∫_-i∞^+i∞∏_a ≤ b^1,nℛ_abℳ_ab ×exp[N/2(ℛ𝒬)+ N/2(ℳΛ)-1/2∑_j=1^N/2∑_ab^n τ_j^aℛ_abτ_j^b - 1/2∑_p=1^N/2∑_ab^n λ_p^aℳ_abλ_p^b ] Therefore, by looking back at Eq. (<ref>) and using the previous relation together with the result in Eq. 
(<ref>), the averaged replicated partition function reads as ^n_U = ∫𝒟C 𝒟C𝒟λ𝒟λD𝒬DℛDΛ DℳexpN/2[(ℛ𝒬)+ (ℳΛ) + (Λ) ] ×exp[-β∑_a=1^n∑_p=1^N/2 |C_p^a|^4 + ∑_a=1^n∑_p=1^N/2(i C_p^a λ_p^a - i C_p^a λ_p^a ) - 1/2∑_p=1^N/2∑_ab^n λ_p^aℳ_abλ_p^b ] ×∏_a=1^n[∑_τ^a] exp[ -1/2∑_ab^n ∑_j=1^N/2τ_j^aℛ_abτ_j^b ], where in order to shorten our notation the integration measures in the global X={𝒬,Λ,,} and local variables x={C,λ} have been written respectively as DX=∏_a ≤ b^1,n X_ab       𝒟 x= ∏_j=1^N/2∏_a=1^n x_j^a. The expression of the partition function can be simplified by performing the complex Gaussian integral in the Lagrange multipliers λ. We notice that all the terms involving the variables λ are diagonal with respect to the local unitary-transformed space index p. Hence, the dependence on λ can be factorized in N/2 Gaussian integrals. Each of these integrals can be compactly written in a vector formalism for the replica indices and has the following solution ∫λ_p λ_p exp[ - λ_p ℳ/2λ_p+i λ_p ·C_p-i C_p ·λ_p ] = (2π)^n (2 ^-1) exp[2 C_p ^-1C_p ] = exp[ - log + 2 C_p ^-1C_p ]. To obtain this result one has to complete the square with the change of variables λ_p →λ_p + 2i ^-1C_p and λ_p →λ_p - 2i C_p ^-1. Moreover, in the last expression we neglected an overall constant additive terms (4π)^n. Since we have N/2 of these contributions, the partition function reads as ^n_U = ∫ D𝒬DℛDΛ DℳexpN/2[(ℛ𝒬)+ (ℳΛ) + (Λ) - log] ×∫𝒟C 𝒟Cexp[-β∑_a=1^n∑_p=1^N/2 |C_p^a|^4 + 2 ∑_ab^n∑_p=1^N/2 C_p^a ^-1_abC_p^b ] ×∏_a=1^n[∑_τ^a] exp[ -1/2∑_ab^n ∑_j=1^N/2τ_j^aℛ_abτ_j^b ], where we have used the relation log = log. The first line of the previous equation contains entropic contributions in terms of the global order parameters and their Lagrange multipliers, whereas the second and third line correspond to the local contributions obtained by tracing respectively over the unitary-transformed variables C and the complex spins τ. At this point of the computation the dependence on the local indices both of the direct and of the unitary-transformed space can be factorized in N/2 equivalent contributions. By defining the following local free energies f_C() = log∫∏_a=1^n[ C_a C_a ] exp[-β∑_a=1^n |C^a|^4 + 2 ∑_ab^nC^a ℳ_ab^-1C^b ], f_τ()= log∏_a=1^n [∑_{τ^a }] exp[ -1/2∑_ab^nτ^aℛ_abτ^b ], the replicated partition function averaged over disorder eventually reads as ^n_U = ∫ D𝒬 Dℛ DΛ Dℳexp[N S(,,Λ,)] where, after singling out the overall factor N, we have defined the action density S(,,Λ,) = 1/2{ f_τ(ℛ) + f_C(ℳ) + (ℛ𝒬)+ (ℳΛ) + (Λ) - log}. The subscripts τ and C in the notation for the local free energies are just reminders of the variables over which the trace is taken. §.§ Reduced Theory The free energy Eq. (<ref>) is determined by the stationary point of the action function derived in the previous section, which has to be computed through the saddle point method in the large-N limit and evaluated in the limit n → 0: f(β)= -1/βlim_n → 0S_sp/n, where S_sp is a shorthand notation for the action computed in the solution of the saddle point equations. In order to solve the optimization problem, it is convenient to eliminate some of the variables that have been introduced along the computation. From Eqs. (<ref>) and (<ref>) it is easy to derive the saddle point equations, by using well-known matrix identities <cit.>. 
The full set of saddle point equations for the action S reads as ∂ S/∂ℛ_ab= ∂ f_τ (ℛ)/∂ℛ_ab+𝒬_ab =0 ∂ S/∂Λ_ab= _ab+[𝒬𝒢'(𝒬Λ)]_ab =0 ∂ S/∂𝒬_ab= ℛ_ab+[Λ𝒢'(𝒬Λ)]_ab =0 ∂ S/∂_ab=∂ f_C()/∂_ab -^-1_ab+Λ_ab =0, where 𝒢' formally denotes the derivative of the function 𝒢 with respect to its argument. In the following, we adopt the matrix formalism to shorten our notation. The main idea to derive the reduced theory is to manipulate the saddle point equations in order to eliminate the matrix Λ from the theory. A key ingredient in this procedure is the fact that the function 𝒢 satisfies the following relation (𝒢'(z))^2=z^-1(1/4- 𝒢'(z)), which can be checked a posteriori by direct substitution of the expression of , Eq. (<ref>). In fact, the previous relation can be seen as the ordinary differential equation that defines the function : this equation was derived in Ref. <cit.> to find the solution of the integration over the unitary group and has been reported in Eq. (<ref>). In the following, we will assume that all the matrices commute: this can be justified in view of the fact that the saddle point value of the action with respect to variations of these matrices will be computed by restricting the optimization on the subspaces of RS or RSB (in the Parisi scheme) matrices which are all subspaces of commuting matrices. From the saddle point equations Eqs. (<ref>) and (<ref>) one finds 𝒢'(𝒬Λ)=-𝒬^-1 𝒢'(𝒬Λ)=-Λ^-1ℛ, from which, by subtracting them, one finds an expression of Λ in terms of the other matrices Λ=𝒬ℛ^-1 . By plugging the Λ-independent expression of 𝒢' into Eq. (<ref>) and using Eq. (<ref>) one gets after some algebra the relation ℛ𝒬 -1/4𝒬^-1= I, which, with the change of variables ℳ→^-1/4, yields the relation (ℛ-ℳ)𝒬= I. It is worth stressing that the change of variables performed does not modify the integration measure of the partition function up to a subleading term in the large-N limit. Eq. (<ref>) is an algebraic relation that connects the saddle point value of the overlap matrix to those of the Lagrange multipliers: hence, it can be viewed as a constraint to a new (reduced) system of saddle point equations. The saddle point equations Eq. (<ref>) and (<ref>) can be then substituted by the constraint Eq. (<ref>). Finally, by modifying the saddle point equation (<ref>) in order to eliminate Λ and substituting with ^-1/4, the reduced set of saddle point equations reads as ∂ f_τ(ℛ)/∂ℛ_ab+𝒬_ab =0 -∂ f_C(ℳ)/∂ℳ_ab + 𝒬_ab =0 (ℛ-ℳ)𝒬 = I. The reduced theory can be now induced from the previous set of saddle point equations, which in fact can be derived by applying the saddle point method to the following partition function 𝒵 = ∫ DℛDℳexp[N A(ℛ,ℳ) ], where A(ℛ,ℳ)= 1/2[f_τ(ℛ)+f_C(ℳ) + log(ℛ-ℳ)] and f_C(ℳ) = log∫∏_a=1^n[ C_a C_a ] exp[-β∑_a=1^n |C^a|^4 + 1/2∑_ab^nC^a ℳ_abC^b ] f_τ(ℛ)= log∏_a=1^n [∑_{τ^a }] exp[ -1/2∑_ab^nτ^aℛ_abτ^b ]. In the redefinition of the first local free energy a numerical factor coming from the change of variables ℳ→^-1/4 has been absorbed in the integration variables C,C and the temperature has been rescaled accordingly. On the other hand, the second local free energy has remained untouched and has been reported here only for the sake of completeness. The saddle point equations (<ref>) and (<ref>) have a very intuitive physical meaning: they fix the value of the overlap matrix to the thermal average of the product of two different replicas. This can be seen by computing explicitly the derivatives of the local free energies Eqs. 
(<ref>) and (<ref>), which yield ⟨τ^aτ^b ⟩_τ = 2 𝒬_ab ⟨ C^aC^b ⟩_C = 2 𝒬_ab, where the averages induced by the two free energies are defined as ⟨ (⋯) ⟩_τ = ∏_a=1^n [∑_{τ^a }]e^-1/2∑_ab^nτ^aℛ_abτ^b (⋯)/∏_a=1^n [∑_{τ^a }] e^-1/2∑_ab^nτ^aℛ_abτ^b and ⟨ (⋯) ⟩_C = ∫∏_a=1^n[ C_a C_a] e^-β∑_a^n |C^a|^4 +1/2∑_ab^nC^aℳ_abC^b (⋯)/∫∏_a=1^n[ C_a C_a] e^-β∑_a^n |C^a|^4 +1/2∑_ab^nC^aℳ_abC^b. Finally, the thermodynamic free energy of the system is given by f(β)= -1/βlim_n → 0A_sp/n, where A_sp is the action computed in the solution of the reduced saddle point equations (<ref>) and (<ref>) with the algebraic constraint (<ref>). There is an important remark that has to be made regarding this procedure. The reduced theory is not completely equivalent to the original one: the action defined in Eq. (<ref>) has not been derived through manipulations of the original one Eq. (<ref>), it has rather been guessed from the saddle point equations. For this reason we have chosen a different notation for the two quantities. The identities exploited in order to define the new action function are satisfied only at the saddle point, which however yield the correct free energy in the large N limit. Thus, the reduced theory is expected to correctly reproduce the thermodynamics of the original theory. § DISCUSSION OF THE RESULTS Before entering into the details of the analysis, we anticipate the discussion of the results in this section for the convenience of the reader. We have performed three kinds of computations: the annealed, the Replica Symmetric (RS) and the one step Replica Symmetry Breaking (1RSB) one. The annealed limit is obtained by considering n=1 in the previous equations and yields the paramagnetic solution of the model. The RS and 1RSB computations are based on different parametrizations of the replica matrices: if a phase transition from the paramagnetic phase to another phase occurs, we expect some of the off-diagonal parameters of these matrices to become non-vanishing and starting increasing by further cooling the system. However, contrary to expectations, from the numerical study of the model, neither the RS nor the 1RSB computations give a different solution from the paramagnetic one at finite temperature, up to the accuracy of our analysis. In Fig. <ref> we display the free energy of the model both as a function of temperature and of its inverse, obtained from the solution of the saddle-point equations pertaining to each one of the three computations: data fall on the same curve, corresponding to the paramagnetic state. The fact that the RS solution does not differ from the annealed one reminds of the situation found in the spherical p-spin model, where no trace of the glass transition is found at the RS level, i.e. the off-diagonal element q_0 of the overlap matrix is zero at every temperature. Conversely, in the SK model one can observe a phase transition already within the RS ansatz, since a non-vanishing value of q_0 can be found at finite temperature. However, this solution becomes unstable on the de-Almeida-Thouless line <cit.> and, also, leads to a low temperature negative entropy. 
In the spherical p-spin model the RS solution describing the paramagnetic state is always stable, a typical feature of first-order phase transitions, and a negative entropy is not physically inconsistent with continuous variables; nevertheless, the 1RSB solution, with the overlap value of the diagonal blocks q_1 ≠ 0, gives a higher free energy, which in the replica method means that the 1RSB ansatz yields the thermodynamically dominant phase. If our analysis is correct, the lack of a similar scenario in the Random Unitary model for the Merit Factor problem leads to the following possible situations: (i) the glass transition occurs at zero temperature with a 1RSB ansatz; (ii) the transition occurs at finite temperature, but with a different kind of replica symmetry breaking ansatz from the 1RSB one; (iii) there is no transition at all in the model with random unitary matrices. In order to test the first hypothesis, we have to compute the zero-temperature limit of the 1RSB saddle-point equations, solve them and check whether the asymptotic value of the free energy is greater than that of the paramagnetic solution. This is still work in progress, see Sec. <ref>. However, it would be very surprising to discover that in this model the addition of an infinitesimal thermal noise destroys the alleged zero-temperature transition. A useful indication regarding the second hypothesis may come from the study of the stability of the RS solution: proving that the RS replicon is always positive definite at finite temperature can be considered as evidence in favor of the p-spin model scenario. In fact, as a result of our analysis, we are led to reconsider the mapping from the original Bernasconi model, with Hamiltonian (<ref>) and whose phenomenology seems compatible with a glass transition, to the model with random unitary matrices, which may not be under control. Therefore, the third scenario may be the most reasonable one: the correct mapping to a disordered model should involve two random orthogonal transformations (see Eq. (<ref>)), one for the real and one for the imaginary part of the complex spins τ, instead of a single unitary transform. In order to test this hypothesis, we are performing analytical and numerical studies of the original models, of the model with random orthogonal matrices and of the random unitary model. § ANNEALED LIMIT In this section we focus on the annealed limit, which yields a great simplification of the theory from the mathematical point of view, since it amounts to considering numbers instead of matrices. The two local free energies boil down to f_τ(ℛ) = log∑_{τ}e^-1/2|τ|^2 ℛ = log4 - ℛ, f_C(ℳ) =log∫ C C e^-β |C|^4 + ℳ/2|C|^2. In the first expression the term log4 is in place of the usual log 2 for binary variables, since the sum over the configurations of τ runs over 4 possible values. However, this fact is compensated by the factor 1/2 in the definition of the action, which takes into account the correct number of degrees of freedom. Hence, the action reads A_ann=log2 - ℛ/2 + 1/2log∫ C C e^-β |C|^4 + ℳ/2|C|^2 + 1/2log(ℛ - ℳ) and the only value of the overlap q is connected to ℛ and ℳ through the simple relation q=1/(ℛ - ℳ), which is the scalar version of the algebraic constraint Eq. (<ref>). The saddle point equations can be derived straightforwardly. The first one is simply ℛ - ℳ = 1, which, by using Eq. (<ref>), gives q=1 consistently with the expectation for the annealed case.
The other equation, which determines the value of ℳ, is ⟨ |C|^2 ⟩_C = 2, where the average is performed with the probability measure induced by the local free energy f_C Eq. (<ref>). The annealed limit shows that, from the technical point of view, the theory is not trivial, since even in this simple case we cannot obtain an analytical solution. This is mainly due to the quartic measure, which characterizes the free energy integrated in the unitary-transformed variables. The integrals appearing in the action and in the equation for can be simplified with some change of variables and cast into truncated Gaussian integrals, yielding error functions which have to be computed numerically. By passing to polar coordinates in the complex plane we have ∫ C C e^-β |C|^4 + ℳ/2|C|^2 = 4 π∫_0^∞ r r e^-β r^4 + ℳ/2 r^2 = 4 πe^ℳ^2/16β√(π)/4√(β)(1+(ℳ/4√(β))) and ∫ C C e^-β |C|^4 + ℳ/2|C|^2|C|^2 = 4 π∫_0^∞ r r^3 e^-β r^4 + ℳ/2 r^2 =4 π4√(β) + e^ℳ^2/16β√(π)ℳ(1+(ℳ/4√(β))) /16 β^3/2, where the final result can be obtained by changing variables to u=r^2. Eventually, Eq. (<ref>) becomes ℳ/4β+e^-ℳ^2/16β/√(πβ)(1+(ℳ/4√(β))) = 2, and can be solved numerically. The free energy is given by f_ann(β)= -1/β A_ann where A is computed over the solutions of the saddle point equations. Due to Eq. (<ref>) and to the fact that q=1, the logarithm in A vanishes and can be expressed in terms of ℳ. Hence, we have f_ann(β)= -1/β{log 2 - ℳ+1/2 + 1/2log( π^3/2e^ℳ^2/16β/√(β)) + 1/2log(1+(ℳ/4√(β))) }, where ℳ=ℳ(β) is given by the solution of Eq. (<ref>) for each value of β. § REPLICA SYMMETRIC ANSATZ In this section we perform a replica symmetric (RS) ansatz for the solution of the saddle point equations. We consider the following parametrizations for the global order parameters 𝒬_ab = δ_ab + q_0(1-δ_ab) ℛ_ab = _D δ_ab + _0(1-δ_ab) ℳ_ab = _D δ_ab + _0(1-δ_ab), where the diagonal elements of 𝒬 are fixed to one due to the fact that |τ^a|^2=2. The action in Eq. (<ref>) has to be expressed in terms of the parameters of the RS matrices. In order to lighten the exposition, here we simply limit ourselves to reporting the results and we refer to Appendix <ref> for the details of the computations. To shorten the notation we introduce the function g_β,0(C|x,y,z) = e^-β |C|^4 + 1/2 (x-y)|C|^2 + √(y)(C z), where x,y ∈ℝ and z,C ∈ℂ. By integrating g_β, 0 with respect to C we have the function I_β,0(x,y,z) = ∫ C C e^-β |C|^4 + 1/2(x-y)|C|^2 + √(y)(C z), that is a local partition function (related to the local free energy (<ref>)), which plays exactly the same role played by the cosh function, which will appear in the other local free energy (<ref>), after tracing over the discrete spins. However, in this case, we are not able to reduce it further due to the quartic term in the exponential. The function 𝒫_β,0(C|x,y,z) = g_β,0(C|x,y,z)/I_β,0(x,y,z) defines a probability measure for the unitary transforms C of the complex spin variables τ. Eventually, the RS action reads lim_n → 02/n A_RS = f_τ(_0) + f_C(_D,_0) + s_0(_D, _0, _D, _0) where the 𝒪(n) expressions of the local free energies are f_τ(_D, _0) = log 4 - (ℛ_D - ℛ_0) + 2 ∫ h/√(2π) e^-h^2/2logcosh (√(-ℛ_0)h) f_C(_D,_0) = ∫ z z/4π e^-|z|^2/2log I_β,0(_D, _0, z), and the 𝒪(n) expression of the entropic term has been stored into the function s_0(_D, _0, _D, _0) = log(ℛ_D -ℳ_D- ℛ_0 + ℳ_0 ) + ℛ_0 - ℳ_0/ℛ_D -ℳ_D- ℛ_0 + ℳ_0. Plugging Eq. (<ref>) into Eq. (<ref>) yields the RS free energy of the model. 
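Unlike its annealed counterpart, the local partition function I_β,0 cannot be reduced to error functions, so in practice it is evaluated numerically by writing C = a + ib. The sketch below (ours; scipy's adaptive quadrature is used only for illustration, while the Gauss–Legendre grids actually employed are described in the numerical section) computes I_β,0 together with the local averages ⟨a⟩_0 and ⟨a^2⟩_0 that enter the self-consistency equations derived next.

```python
import numpy as np
from scipy.integrate import dblquad

def I_and_moments(beta, MD, M0, zr, zi, L=6.0):
    """I_{beta,0}(M_D, M_0, z) and the local averages <a>_0, <a^2>_0.

    C = a + i b is integrated over the square [-L, L]^2; the integrand decays
    like exp(-beta |C|^4), so a modest cutoff L suffices.  The precise pairing
    of (zr, zi) with (a, b) is irrelevant once z is averaged over its Gaussian."""
    def g(b, a):
        r2 = a * a + b * b
        return np.exp(-beta * r2 * r2 + 0.5 * (MD - M0) * r2
                      + np.sqrt(M0) * (a * zr + b * zi))
    I, _ = dblquad(g, -L, L, lambda a: -L, lambda a: L)
    a1, _ = dblquad(lambda b, a: a * g(b, a), -L, L, lambda a: -L, lambda a: L)
    a2, _ = dblquad(lambda b, a: a * a * g(b, a), -L, L, lambda a: -L, lambda a: L)
    return I, a1 / I, a2 / I

# example values (ours); for M0 -> 0 the coupling to z drops and <a>_0 vanishes by symmetry
print(I_and_moments(beta=1.0, MD=0.5, M0=0.2, zr=0.3, zi=-0.2))
```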
We now have to find self-consistency equations for _D,_0, _D,_0, whose solution will give the dynamics of the parameters with respect to temperature variations, which is necessary to evaluate the free energy at each value of β. These equations can be derived by imposing the vanishing of the RS action gradient components, as shown in detail in Appendix <ref>. Eventually, the RS self-consistency equations are given by q_0 = ∫ h/√(2π)e^-h^2/2tanh^2(√(-ℛ_0)h) q_0 = 1/2∫ z z/4π e^-|z|^2/2 |⟨ C ⟩_0|^2 1 = 1/2∫ z z/4π e^-|z|^2/2⟨ |C^2| ⟩_0 where ⟨ (⋯) ⟩_0 = ∫ C C 𝒫_β,0(C | _D,_0,z) (⋯) = ∫ C C g_β,0(C | _D,_0, z) (⋯) /I_β,0(_D,_0,z), is the average defined over the probability measure 𝒫_β,0(C|_D,_0,z), see Eq. (<ref>), and, therefore, is a function of the complex Gaussian variable z and the saddle point parameters _D,0. The system of RS equations is completed by the relations ℛ_D-ℳ_D -(ℛ_0-ℳ_0)q_0=1 (ℛ_0-ℳ_0)(1-2q_0)+ (ℛ_D-ℳ_D)q_0=0. which follow from the RS expression of the algebraic relation (<ref>). We notice that with the above saddle point equations the entropy (<ref>) can be rewritten as s_0(q_0)=-ln(1-q_0)-q_0/1-q_0, which clearly vanishes for q_0=0. One important remark: if one sets _0 = _0 = q_0 = 0 in the RS action (<ref>), one recovers the annealed action (<ref>). §.§.§ Further manipulations It is convenient, especially in view of the numerical analysis of the previous set of equations, to write every complex variable in terms of its real and imaginary parts. To have an even lighter notation, let us denote the integral over a real Gaussian variable x as 𝒟 x = dx /√(2π)e^-x^2/2. We use the following notation: z=ρ+i σ and C=a+ib. Therefore, we have ∫ z z/4π e^-|z|^2/2⟨ (⋯) ⟩_0 = ∫𝒟ρ𝒟σ⟨ (⋯) ⟩_0 where a factor 1/2 is cancelled by the modulus of the Jacobian of the transformation z,z→σ,ρ. The average induced by the measure 𝒫_β,0(a,b|_D,_0,z) now reads ⟨ (⋯) ⟩_0 = ∫ a b (⋯) 𝒫(a,b|ℳ_D,ℳ_0,z) = ∫ a b  (⋯)  g_β,0(a,b|ℳ_D,ℳ_0,ρ, σ) /I_β,0(ℳ_D,ℳ_0,ρ, σ) where in this case the Jacobians of the transformations C,C→ a,b cancel out between numerator and denominator and g_β,0( a, b|x,y,ρ, σ,) = exp[-β (a^2+b^2)^2 +1/2(x-y)(a^2+b^2)+√(y)(aσ + bρ) ] I_β,0(x,y,ρ,σ) = ∫ a  b  g_β,0(a, b|x,y,ρ, σ). In this notation, we have ∫ z z/4π e^-|z|^2/2⟨ |C|^2 ⟩_0 = ∫𝒟ρ𝒟σ ⟨ a^2 ⟩_0 + ∫𝒟ρ𝒟σ ⟨ b^2 ⟩_0 = 2 ∫𝒟ρ𝒟σ ⟨ a^2 ⟩_0, where the last identity holds because the two integrals are equal under the simultaneous changes of variables ρ↔σ and a ↔ b. Similarly, for the other expectation value in the self-consistency equations, we have ∫ z z/4π e^-|z|^2/2 | ⟨ C ⟩_0|^2 = ∫ z z/4π e^-|z|^2/2 (⟨ C ⟩_0)^2 + ∫ z z/4π e^-|z|^2/2(⟨ C ⟩_0)^2 = ∫𝒟ρ𝒟σ ⟨ a ⟩_0^2 + ∫𝒟ρ𝒟σ ⟨ b ⟩_0^2 =2 ∫𝒟ρ𝒟σ ⟨ a ⟩_0^2, where the last identity is again due to the changes of variables ρ↔σ and a ↔ b. Given the previous results, we can finally rewrite the set of self-consistency equations for the RS parameters as follows q_0= ∫𝒟h tanh^2(√(-ℛ_0)h) q_0= ∫𝒟ρ𝒟σ ⟨ a ⟩_0^2 1= ∫𝒟ρ𝒟σ ⟨ a^2 ⟩_0 plus the algebraic constraints Eqs. (<ref>). Finally, we report the expression of the RS free energy, which has to be computed at each temperature on the solutions of the self-consistency equations f_RS(β) = -1/β(log2 - ℛ_D - ℛ_0/2 + ∫𝒟 h logcosh (√(-ℛ_0)h) + 1/2∫𝒟ρ𝒟σlog[2 I_β,0(_D,_0,ρ,σ)] + 1/2 s_0(_D,_0,_D,_0) ) . where we recall that s_0 contains the expression of the entropic term Eq. (<ref>). The factor 2 multiplying I_β,0 comes from the change of variables C,C→ a,b. 
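As remarked above, setting ℛ_0 = ℳ_0 = q_0 = 0 in the RS action recovers the annealed action, so the annealed limit provides a convenient baseline against which any RS solver can be checked. Below is a minimal sketch (ours, using scipy's erf and a bracketed root finder whose bracket is a heuristic choice) that solves the annealed equation for ℳ(β) and evaluates f_ann(β) from the expressions of the annealed section.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

def annealed_M(beta):
    """Root of  M/(4b) + exp(-M^2/(16b)) / ( sqrt(pi b) (1 + erf(M/(4 sqrt(b)))) ) = 2."""
    def residual(M):
        t = M / (4.0 * np.sqrt(beta))
        return (M / (4.0 * beta)
                + np.exp(-t * t) / (np.sqrt(np.pi * beta) * (1.0 + erf(t))) - 2.0)
    # the residual is monotonically increasing in M; this bracket works for beta >~ 0.05
    return brentq(residual, -4.0 * np.sqrt(beta), 8.0 * beta + 10.0)

def f_annealed(beta):
    """Annealed free energy density f_ann(beta) evaluated on the solution M(beta)."""
    M = annealed_M(beta)
    t = M / (4.0 * np.sqrt(beta))
    return -(np.log(2.0) - (M + 1.0) / 2.0
             + 0.5 * (1.5 * np.log(np.pi) + M * M / (16.0 * beta) - 0.5 * np.log(beta))
             + 0.5 * np.log(1.0 + erf(t))) / beta

for beta in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"{beta:4.1f}  M = {annealed_M(beta):8.4f}  f_ann = {f_annealed(beta):8.4f}")
```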
§ ONE STEP OF REPLICA SYMMETRY BREAKING The first step of replica symmetry breaking (1RSB), introduced in Ref. <cit.> for the SK model, is based on a more sophisticated choice of matrices than the more intuitive RS ansatz, in order solve the saddle point optimization problem. When replica symmetry is broken, the group S_n of replica permutations is no more a symmetry group for the theory. However, the theory may still be invariant under a subgroup of replica permutations. In the 1RSB case the symmetry group left is (S_m)^⊗n/m⊗ S_n/m, for some integer value of m, where (S_m)^⊗n/m is the direct product of the permutation group of m objects with itself for n/m times <cit.>. This ansatz amounts to consider matrices of the kind [ _D _1 _1 _D _0; _0 _D _1 _1 _D ] which are characterized by n/m diagonal blocks of dimension m × m. With this choice, it is clear that S_m corresponds to permutations of replicas inside a block and S_n/m corresponds to permutations of the blocks. From the physical point of view, the presence of two values _0 and _1 signals the organization of states in clusters with one value of the overlap between configurations belonging to different states (_0) and only one intrastate overlap value (_1). The parameter m, which is known as breaking parameter, is connected to the probability for the overlap to take one of the two allowed values. This replica symmetry breaking scheme can be iterated hierarchically, by taking smaller sub-blocks inside each diagonal blocks and so on ad infinitum. This procedure with infinite steps of breaking has proved to give the correct free energy for the SK model <cit.>. However, in analogy with the REM and with the spherical p-spin model, we expect the 1RSB ansatz to be the right one to capture the low temperature properties of the model <cit.>. An important remark, which will be particularly helpful for the algebra of this kind of matrices, is that 1RSB matrices can be decomposed as follows [ _0 _0; _0 _0 ] + [ _1 - _0 0; 0 _1 - _0 ] + diag(_D - _1), where the first one is a matrix of all elements equal to _0, the second one is a block matrix with n/m diagonal m× m blocks of all elements equal to _1-_0 and the third one is a diagonal matrix with all diagonal elements equal to _D-_1. This decomposition provides a nice visualization of a 1RSB matrix. The first useful result, which can be better understood from the decomposition, is the following: a term in the action containing replicated variables x_a - either discrete or continuous - which are coupled through a 1RSB matrix can be written as ∑_ab^nx_a _abx_b = _0 (∑_a=1^n x_a )^2 +(_1-_0)∑_k=1^n/m(∑_ a ∈ Block(k)^1,m x_a)^2 +(_D-_1)∑_a=1^n x_a^2, where it is clear that each term corresponds to one of the matrices in the expression (<ref>). Other properties of 1RSB matrices will be discussed when necessary. In the following, we put = -. The precise computation of the action (<ref>) in the 1RSB ansatz is reported in Appendix <ref>. Let us here report and discuss the result. In analogy with the RS case, see Eq. (<ref>), we define the probability density for the variables C as 𝒫_β,1(C|x,y,t,z,w)≡ g_β,1(C|x,y,t,z,w)/I_β,1(x,y,t,z,w), where g_β,1(C|x,y,t, z,w) = e^-β|C|^4 + 1/2(x-t)|C|^2 + √(y)(Cz)+√(t - y)(Cw), with x,y,t ∈ℝ and z,w,C ∈ℂ and I_β,1(x,y,t,z,w) = ∫ C C g_β,1(C|x,y,t, z,w) . Notice that, by taking y=t, the definitions of g_β,1 and I_β,1 are equivalent to those of g_β,0 I_β,0, see Eqs. (<ref>) and (<ref>): this is exactly what one expects, since in this case we are taking _0=_1, i.e. 
a RS matrix. Moreover, in this case we also define the following function, which will appear in the expression of the free energy f_τ Ξ(_0, _1, h, u) = √(-_0) h + √(_0-_1) u. Notice, that by taking _0=_1 this function reduces to its RS form, which is simply Ξ = √(-_0) h. With respect to these quantities, the 1RSB action reads as lim_n→02/n A_1RSB = f_τ(_D,_0,_1,m)+ f_C (_D,_0,_1,m) + s_1(_D,_0,_1,m) where the local free energies have the 𝒪(n) expressions f_τ(_D,_0,_1,m) = log4 - (ℛ_D-ℛ_1) + 2/m∫𝒟h log∫𝒟u cosh^m Ξ(_0, _1, h, u) f_C (_D,_0,_1,m) = 1/m∫𝒟[zz] log∫𝒟[ww] I^m_β,1(_D,_0,_1 | z,w) 𝒟[zz] = z z/4π e^-|z|^2/2 and the entropic term s_1(_D,_0,_1,m) = m-1/mlog(_D-_1) +1/mlog[_D+(m-1)_1 - m _0] +_0/_D+(m-1)_1 - m _0. The procedure employed to compute the self-consistency equations for the 1RSB parameters goes along the same line as in the RS case, but the computations are heavier. This is due to the fact that a 1RSB matrix has two more parameters than a RS matrix, leading to an additional level of Gaussian integration in the local free energies. When performing the derivatives, this will lead to nested averages defined over probability measures which have more involved expressions compared to the RS case. In Appendix <ref>, we present in some detail the computation of the derivatives, adopting notations that are compact enough to allow us to write down the main equations, but not so much to obscure their meaning. Here, we just state the result. In analogy to the RS case let us define the following average over the measure 𝒫_β,1, see Eq. (<ref>), ⟨ (⋯) ⟩_1 = ∫ C C 𝒫_β,1(C | _D,_0,_1, z,w ) (⋯) = ∫ C C g_β,1(C | _D,_0,_1, z,w) (⋯) /I_β,1(_D,_0,_1, z,w), which is a function of the complex Gaussian variables z,w and the saddle point parameters _D,0,1. Then, the 1RSB stationary point is given by the solution of the following set of equations q_0 = ∫𝒟 h ( ∫𝒟 u cosh^m ΞtanhΞ/∫𝒟u cosh^m Ξ)^2 q_1 = ∫𝒟 h ∫𝒟u cosh^mΞtanh^2Ξ/∫𝒟u cosh^mΞ 1 = 1/2∫𝒟[zz] ∫𝒟[w w] I_β,1^m ⟨ |C|^2 ⟩_1/∫𝒟[w w] I_β,1^m q_0 = 1/2∫𝒟[zz] | ∫𝒟[w w] I_β,1^m ⟨ C ⟩_1/∫𝒟[w w] I_β,1^m |^2 q_1 = 1/2∫𝒟[zz] ∫𝒟[w w] I_β,1^m |⟨ C ⟩_1|^2 /∫𝒟[w w] I_β,1^m . The system is completed by the relations coming from the 1RSB expression of the algebraic constraint (<ref>), which in the limit n→ 0, reads as 𝒜_D +(m-1)𝒜_1 q_1 - m𝒜_0 q_0 =1 𝒜_1 + 𝒜_D q_1 + (m-2)𝒜_1 q_1 - m𝒜_0 q_0 =0 𝒜_D q_0 + (m-1)𝒜_1 q_0 + 𝒜_0 + (m-1) 𝒜_0 q_1 - 2m 𝒜_0 q_0 = 0. §.§.§ The derivative with respect to m In this subsection we compute the derivative of the action with respect to the last 1RSB parameter, the breaking parameter m. In fact, this parameter was originally an integer number, such that m<n, which denoted the dimension of the diagonal blocks of a 1RSB matrix; when the limit n → 0 is taken, the more intuitive thing to do would be to send m to zero as well, keeping fixed the ratio n/m. However, the prescription of the replica method is that, in order to obtain a well-defined probability distribution function of the overlap, m has to be promoted to a real number in the interval [0,1] and as a result the functions depending on m are analytically continued with respect to m in that interval <cit.>. Therefore, m has to be regarded by all means as a variational parameter with respect to which a saddle point self-consistency equation has to be computed. As for the other parameters, we compute the derivatives of the action (<ref>) with respect to m. 
The derivatives of the free energies (<ref>) and (<ref>) are ∂ f_τ/∂ m = -1/m^2∫𝒟 h log∫𝒟 u cosh^m Ξ + 1/m∫𝒟 h ∫𝒟u cosh^mΞlogcoshΞ/∫𝒟u cosh^mΞ and ∂ f_C/∂ m = -1/m^2∫𝒟[zz] log∫𝒟[w w] I_β,1^m + 1/m∫𝒟[zz] ∫𝒟[w w] I_β,1^m log I_β,1/∫𝒟[w w] I_β,1^m. The derivative of the entropic term reads as ∂ s_1/∂ m = - 𝒜_0(𝒜_1-𝒜_0)/(𝒜_D+(m-1)𝒜_1-m𝒜_0)^2 - 1/m^2log[𝒜_D+(m-1)𝒜_1-m𝒜_0] + 1/m𝒜_1-𝒜_0/𝒜_D+(m-1)𝒜_1-m𝒜_0 + 1/m^2log(𝒜_D-𝒜_1). Then the equation for m, which is too cumbersome to be written here, is given by the sum of the previous three derivatives set equal to zero. §.§.§ Further manipulations With the same procedure followed in the RS case, we pass to the real and imaginary parts of all the complex variables, with the notations: C=a+ib, z=ρ + iσ and w = u + i v. Each of the square moduli in the equations obtained from the derivatives of f_C, gives two contributions, which can be proved to be equal under proper changes of variables. Eventually, we obtain the following set of equations q_0 = ∫𝒟 h ( ∫𝒟 u cosh^m ΞtanhΞ/∫𝒟u cosh^m Ξ)^2 q_1 = ∫𝒟 h ∫𝒟u cosh^mΞtanh^2Ξ/∫𝒟u cosh^mΞ 1 = ∫𝒟σ𝒟ρ∫𝒟u 𝒟v I_β,1^m ⟨ a^2 ⟩_1/∫𝒟u 𝒟v I_β,1^m q_0 = ∫𝒟σ𝒟ρ( ∫𝒟u 𝒟v I_β,1^m ⟨ a ⟩_1/∫𝒟u 𝒟v I_β,1^m )^2 q_1 = ∫𝒟σ𝒟ρ∫𝒟u 𝒟v I_β,1^m ⟨ a ⟩_1^2 /∫𝒟u 𝒟v I_β,1^m , where the average induced by the measure 𝒫_β,1(a,b|_D,_0,_1,ρ,σ,u,v), see Eq. (<ref>), now reads ⟨ (⋯) ⟩_1 = ∫ a  b  g_β,1(a,b|_D,_0,_1,ρ,σ,u,v)/I_β,1(_D,_0,_1,ρ,σ,u,v), with g_β,1(a,b|x,y,t,ρ,σ,u,v)= e^-β(a^2+b^2)^2 + 1/2(x-t)(a^2+b^2) + √(y) (a ρ + b σ)+√(t - y) (a u + b v) I_β,1(x,y,t,ρ,σ,u,v) = ∫ a  b  g_β,1(a,b|x,y,t,ρ,σ,u,v). Notice that, as in the RS case, we have removed a factor 2 from the definition of I_β,1 since it cancels out with an equal factor in the numerator of the average ⟨ (⋯) ⟩_1. The algebraic Eqs. (<ref>), together with the equation obtained with the derivatives in m, complete the set of self-consistency equations for the 1RSB parameters. The 1RSB free energy, which has to be computed on the solutions of the self-consistency equations, reads as f_1RSB (β) = -1/β(log2 - ℛ_D - ℛ_1/2 + 1/m∫𝒟 h log∫𝒟 u cosh^m Ξ +1/2 m∫𝒟ρ𝒟σlog∫𝒟 u 𝒟 v [2 I_β,1]^m + 1/2 s_1(_D,_0,_1,m) ), where all parameters are the solutions to Eqs. (<ref>) and (<ref>). We notice that the RS equations are easily obtained by putting m=1 and taking q_0=q_1 and similarly for the other parameters. §.§ Simplified 1RSB Ansatz A simpler optimization problem, which is worth studying at least at a preliminary level, is the one resulting from a 1RSB ansatz with q_0=_0=_0=0. This assumption works for the p-spin model in zero external field, both with spherical variables, see Refs. <cit.>, and with Ising spin variables, see Ref. <cit.>, so it is reasonable for the present case as well. The advantage of making this simplified ansatz is a drastic simplification of the self-consistency equations for the remaining 1RSB parameters. Besides having three parameters less to optimize, in this case, the local partition function function I_β,1 does not depend anymore on the auxiliary variable z (or equivalently ρ,σ), leading to the disappearance of the outer Gaussian integration from the self-consistency equations obtained by the derivative of f_C. Moreover, in this case the functions g_β,1 and I_β,1 reduce to the RS integral functions g_β,0 and I_β,0 computed in _D, _1 rather than _D, _0. Hence, though we are still in a 1RSB ansatz, the average ⟨ (…) ⟩_0 will appear in the saddle-point equations. 
A similar simplification also occurs for the equation that contains the derivative of f_τ with respect to _1: the function Ξ reduces to its RS form, but computed in _1, rather than _0, i.e. Ξ= √(-_1)u. Let us report the expression of the self-consistency equations in this case q_1 = ∫𝒟 u cosh^m Ξtanh^2 Ξ/∫𝒟u cosh^m Ξ q_1 = ∫𝒟u 𝒟v I_β,0^m(_D,_1 | u,v) ⟨ a ⟩_0^2 /∫𝒟u 𝒟v I_β,0^m(_D,_1 | u,v) 1 = ∫𝒟u 𝒟v I_β,0^m(_D,_1 | u,v) ⟨ a^2 ⟩_0/∫𝒟u 𝒟v I_β,0^m(_D,_1 | u,v) with the following algebraic constraints ℛ_D-ℳ_D +(m-1)(ℛ_1-ℳ_1) q_1 = 1 ℛ_1-ℳ_1 + (ℛ_D-ℳ_D) q_1 + (m-2)(ℛ_1-ℳ_1)q_1 = 0. The equation obtained by the vanishing of the action derivative with respect to m can be computed for the present case by putting _0 = _0 = 0 in the derivatives computed before and reads as -1/mlog∫𝒟 u cosh^mΞ + ∫𝒟u cosh^mΞlogcoshΞ/∫𝒟u cosh^mΞ - 1/mlog∫𝒟u 𝒟v I_β,0^m + ∫𝒟u 𝒟v I_β,0^m log I_β,0/∫𝒟u 𝒟v I_β,0^m - 1/mlog(1 + m (1-q_1)(_1-_1)) + (1-q_1)(_1 - _1)/1+m (1-q_1)(_1-_1) = 0, where q_1 has been introduced through its expression in terms of the other parameters. The free energy of the model in this simplified 1RSB ansatz reads f_1RSB (β) = -1/β(log2 - ℛ_D - ℛ_1/2 + 1/mlog∫𝒟 u cosh^m Ξ +1/2 mlog∫𝒟 u 𝒟 v [2 I_β,0(_D,_1,| u, v)]^m + 1/2 s_1(_D,_1,_D,_1,m) ), where, now, we have s_1 = m-1/mlog(ℛ_D-ℳ_D-ℛ_1+ℳ_1) +1/mlog[ℛ_D-ℳ_D+(m-1)(ℛ_1-ℳ_1)]. By using the algebraic relations (<ref>), the entropic term has the following simpler dependence on q_1: s_1 = m-1/mlog(1/1-q_1) + 1/mlog(1/1+(m-1)q_1). § NUMERICAL INTEGRATION This section is devoted to the description of the technique used for the numerical integration of the saddle-point self consistency equations which have been obtained for the variational parameters of the model in the previous sections. In particular, we have studied in detail the RS set of Eqs. (<ref>) and the simplified 1RSB set of Eqs. (<ref>), both on Mathematica and by writing dedicated codes in Python. The numerical solution of the integrals has been performed by means of the Gaussian-Legendre quadrature rule <cit.>, which we briefly explain in the following. Given a function f:[a,b] ⊂ℝ→ℝ, the integral of f over its domain can be approximated by ∫_a^b y f(y) ≈b-a/2∑_i=1^n w_i f(y_i), with y_i = (b-a/2) x_i + (b+a/2). In the previous expressions the point x_i is the i^th zero of the Legendre polynomial P_n(x) and w_i is the corresponding weight given by w_i = 2/(1-x_i^2)[P'_n(x_i)]^2. Both the values of x_i and w_i are tabulated and can be generated by a specific Python (or Mathematica) routine. The generalization of this technique to two-dimensional integrals is straightforward. The advantage of this technique is that the discretization of the integration domain is very efficient and one obtains a relatively good result already with a small number of points. Our integrals are extended between ±∞, but the integrands are rapidly decreasing functions. We choose symmetric integration domains limited by a parameter L_g in the case of Gaussian integrals and L_q in the case of integrals of the exponential of a 4-degree polynomial, namely the function g_β,0. We have studied the parametrical dependence of integrals on the number of points n and on the quantities L_g,L_q and assessed the values of the parameters such that the results remained stable. The computation of the integrals is particularly demanding in the 1RSB case, due to the presence of nested double integrals, which lead to an increase of the computational complexity of order O(n^2) for each layer. 
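In code, the rule above amounts to a few lines; the following minimal sketch (ours, not the production code) builds the rescaled nodes and weights with numpy's leggauss and uses them on a representative Gaussian-weighted integrand, namely ∫𝒟h tanh^2(√(-ℛ_0)h) from the first RS equation, with the cutoff L_g introduced above.

```python
import numpy as np

def gl_nodes(n, L):
    """Gauss-Legendre nodes and weights rescaled from [-1, 1] to [-L, L]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return L * x, L * w

def gaussian_integral(f, n=80, L_g=8.0):
    """Approximate int dh exp(-h^2/2)/sqrt(2 pi) f(h) on the truncated domain [-L_g, L_g]."""
    h, w = gl_nodes(n, L_g)
    return np.sum(w * np.exp(-h**2 / 2) * f(h)) / np.sqrt(2 * np.pi)

R0 = -0.3   # a trial (negative) value of R_0, chosen here only for illustration
rhs = gaussian_integral(lambda h: np.tanh(np.sqrt(-R0) * h)**2)
print(rhs)  # right-hand side of the first RS equation at this R_0

# two-dimensional integrals are handled by taking the tensor product of such grids
```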
Just to make an example, the discretized version of the integral in the third equation of the simplified 1RSB system (see Eq. (<ref>)) reads as I = ∑_ij^n w_i w_j e^-L_g^2(u_i^2+v_j^2)/2 I^m_β,0(_D,_1|L_g u_i,L_g v_j) ⟨ (L_q a_k)^2 ⟩_0/∑_ij^n w_i w_j e^-L_g^2(u_i^2+v_j^2)/2 I^m_β,0(_D,_1|L_g u_i,L_g v_j), ⟨ (L_q a_k)^2 ⟩_0 = ∑_kl^n w_k w_l g_β,0(_D,_1|L_g u_i,L_g v_j, L_q a_k, L_q b_l) (L_q a_k)^2 /∑_kl^n w_k w_l g_β,0(_D,_1|L_g u_i,L_g v_j, L_q a_k, L_q b_l) where, of course, I_β,0 contains another double integration. During the solution of the saddle point equations at a certain value of the inverse temperature β, this integral, like the others, has to be computed iteratively many times with respect to tentative values of the 1RSB parameters _D,_1,m. Then, the procedure has to be repeated varying the temperature. To speed up this kind of computations the code has been parallelized on GPUs using the Python library PyTorch. The integration technique of the saddle point equations is based on the optimization of a loss function defined as the sum of the action gradient components squared. Let us briefly describe the technique in general before moving to the case of interest. Suppose we want to find the global minimum of a differentiable cost function ℒ(x), depending on P parameters {x_i}_i=1,…,P. The Gradient Descent (GD) algorithm is based on the idea that the most efficient way of reaching the minimum of ℒ(x) is to follow the opposite direction of its gradient. This can be implemented in following iterative way x_n+1 = x_n - γ∇ℒ(x_n), where the quantity γ is the so-called learning rate, which defines the step size of the algorithm and the quantity γ∇ℒ(x_n) is subtracted from x_n since we want to move against the gradient. Clearly, this method may encounter some difficulties for functions with many local minima: when one of this minima is reached the gradient of the function is a vector of zeros and the algorithm gets stuck. To overcome the problem, several GD optimizations can be run with random initial conditions: in non-pathological cases the global minimum can be selected a posteriori. In order to optimize the loss functions which will be defined in a short while, we used the Adam[Adam stands for adaptive moment estimation and it is usually adopted as a Stochastic Gradient Descent (SGD) algorithm in the context of input-output problem in machine learning. Here, however, we just have to minimize a function with respect to its arguments and we have used it as a simple GD.] optimizer as a GD algorithm with momentum <cit.>. The momentum is an additional term to the GD dynamics defined in Eq. (<ref>), which suppresses the oscillations of the gradient, by taking larger steps in the preferred direction of steepest descent. §.§ RS Equations Consider the set of RS Eqs. (<ref>). In principle, we have to determine five parameters q_0,_D,_0,_D and _0; however, the algebraic constraints (<ref>) can be used to express two of them in terms of the others. For example, we can eliminate _D and _0 by writing _D=_D(_D,q_0) and _0=_0(_0,q_0), where _D = _D + 1-2 q_0/(1-q_0)^2 _0 = _0 - q_0/(1-q_0)^2. Incidentally, this expression of the parameters allows us to make an important consistency check on the solution: in order for the RS free energy (<ref>) to be real valued we need the argument of the logarithm function in the entropic term s_0 to be positive definite, i.e. _D -_D -_0 +_0 > 0. 
When substituting the expressions of ℛ_D and ℛ_0 in terms of the other parameters into the previous condition, we find the simple condition 1-q_0 > 0, which is always verified at finite temperature, since q_0∈[0,1]. We are, then, left with three integral equations in the three parameters q_0, ℳ_D, ℳ_0. The loss function that we need to optimize can be defined as ℒ_RS = ∑_i (∂_𝒳_i A_RS)^2, 𝒳_i being the generic replica parameter, and explicitly reads ℒ_RS(q_0, ℳ_D, ℳ_0, β) = (q_0 - ∫𝒟h tanh^2(√(-ℛ_0)h) )^2 + (q_0 - ∫𝒟ρ𝒟σ ⟨ a ⟩_0^2 )^2 + (1-∫𝒟ρ𝒟σ ⟨ a^2 ⟩_0)^2, where the dependence of the parameters on β is implicit and all the integrals are discretized with the procedure described before. With respect to standard spin-glass optimization problems, where the Lagrange multipliers conjugated to the overlap variables are usually integrated away at the level of the saddle-point equations <cit.>, here we have the additional difficulty of dealing with unbounded parameters: while q_0 must be in the interval [0,1] for every value of β, we do not have such strong bounds on ℳ_D and ℳ_0. The only information we have is that, either by looking again at the RS free energy Eq. (<ref>) or by directly inspecting the equations, the theory is well defined only if ℛ_0 < 0 and ℳ_0 > 0. The condition on ℛ_0 can be used together with Eq. (<ref>) to find an upper bound for ℳ_0. Eventually, we have 0 < ℳ_0 < q_0/(1-q_0)^2, which, however, is not so useful in practice apart from the selection of the initial conditions. Actually, what we learn from Eq. (<ref>) is that as long as q_0=0, ℳ_0 has to vanish too: this is expected to happen at least for low values of β. If there is a value of β from which q_0 starts increasing, then, as q_0 → 1 the upper bound on ℳ_0 diverges. Nothing can be said, instead, for the definition interval of ℳ_D. In order to acquire preliminary knowledge of the region of parameters where the global minimum of ℒ_RS might be located at a certain value of β, we have visualized the loss landscape, by producing color maps of its projections onto orthogonal planes. From these plots we could clearly identify the paramagnetic solution, which is always present for any value of the temperature. Moreover, for sufficiently high β many other minima, though not as deep as the paramagnetic one, could be found for non-vanishing values of the parameters ℳ_0 and q_0. However, most of these minima have turned out to be nonphysical, leading to imaginary values of the free energy or other pathological consequences. The absence of tight boundaries for the parameter ℳ_0 prevents us from thickening the grid over which the loss is computed, except for small intervals of ℳ_0 values. If the global minimum of the loss has a small basin of attraction, it is very unlikely to be visualized. However, we have managed to exclude some values of ℳ_0 from the choice of the initial conditions for the GD algorithm. Starting from high temperature, our algorithm falls into the paramagnetic solution with q_0 = ℳ_0 = 0, independently of the initial conditions, leading to results which are consistent with the annealed limit. To speed up the search for the optimal parameters, when increasing β, we always initialize the optimizer for the next step in temperature with the optimized parameters at the previous temperature. However, if one starts with a low value of β, this procedure may cause the algorithm to remain stuck in the paramagnetic solution, even in the presence of a different solution dominating the thermodynamics.
This occurrence is typical of first-order phase transitions, where the high temperature solution does not become unstable at the transition (i.e. a maximum or a saddle in the parameter space), but turns from a global minimum into a local one. With respect to this, we refer to the section dedicated to the numerical integration of the Ising p-spin model, which we have used as a test of the procedure. Therefore, we have tried another strategy: we start from a high value of β, hoping to fall into a state different from the paramagnetic one, and we lower β to follow the solution up to the transition point. Since we have little information on the value of _0 at low temperature, we start the optimization by fixing a high value of q_0 ∈ [0,1] and randomly choosing the initial condition on _0 according to Eq. (<ref>) and to our observations of the loss function. However, no solution can be found at finite T with non-vanishing q_0 and _0 and a free energy higher than the paramagnetic RS value at the same temperature. Hence, no evidence of a phase transition has emerged in terms of a non-vanishing q_0 or _0. §.§ 1RSB Equations We now turn to the more complicated case of the set of 1RSB equations (<ref>), where we have six parameters to determine: q_1,_D,_1,_D,_1 and the breaking parameter m. We eliminate _D and _1 from the problem, by means of the algebraic constraints (<ref>), expressing them as _D= _D(q_1,m,_D) and _1= _1(q_1,m,_1) in the following way _D = _D + 1 + (m-2) q_1/(1-q_1)[1+(m-1)q_1] _1 = _1 - q_1/(1-q_1)[1+(m-1)q_1]. As in the RS case, we can verify the consistency of the theory, by using these equations together with the positivity of the arguments of the two logarithms in the entropic term of the 1RSB free energy (<ref>): we get two conditions, i.e. 1-q_1 > 0 and 1+(m-1)q_1 > 0, which are both satisfied at every finite temperature, since q_1,m∈[0,1]. Moreover, by using the fact that _1 < 0 and _1>0, we find a condition on _1, which is the 1RSB generalization of Eq. (<ref>) and reads 0 < _1 < q_1/(1-q_1)[1+(m-1)q_1]. As in the RS case, this equation tells us that as long as q_1=0, _1=0 as well. Hence in this case we are left with the three integral equations (<ref>) plus Eq. (<ref>), which, by using the expression of _1, further simplifies to -log∫𝒟 u cosh^mΞ + m ∫𝒟u cosh^mΞlogcoshΞ/∫𝒟u cosh^mΞ - log∫𝒟u 𝒟v I_β,0^m + m ∫𝒟u 𝒟v I_β,0^m log I_β,0/∫𝒟u 𝒟v I_β,0^m - log(1 + m/1+(m-1)q_1) + mq_1/1-q_1 = 0, to be solved with respect to q_1,m,_D and _1. We have adopted two alternative strategies to solve the 1RSB optimization problem: one, more direct, which makes explicit use of Eq. (<ref>); the other, more subtle, which is based on a “graphical” maximization of the 1RSB free energy with respect to m. In the first case, the loss function is defined by taking into account all the squared components of the action gradient, expressed in terms of q_1,m,_D and _1. We have ℒ_1RSB(q_1,m,_D,_1,β) = (q_1 - ∫𝒟 u cosh^m Ξtanh^2 Ξ/∫𝒟u cosh^m Ξ)^2 + ( q_1 - ∫𝒟u 𝒟v I_β,0^m ⟨ a ⟩_0^2 /∫𝒟u 𝒟v I_β,0^m )^2 + (1 - ∫𝒟u 𝒟v I_β,0^m ⟨ a^2 ⟩_0/∫𝒟u 𝒟v I_β,0^m )^2 + (∂_m A_1RSB)^2, where the left hand side of Eq. (<ref>) has been denoted as ∂_m A_1RSB for brevity and all the integrals are computed with the Gauss-Legendre quadrature rule implemented in parallel. In the other case, the term ∂_m A_1RSB drops out of the cost function definition, and optimization with respect to m is performed as follows.
At a fixed value of β, the global minimum of the cost function is found in parallel for different values of m ∈ [0,1]: then, the free energy in Eq. (<ref>) is computed as a function of m, i.e. f_1RSB = f_1RSB(m), and its values are sorted. We look for the maximum of f_1RSB(m) and take the values of the parameters corresponding to it as the solution of the optimization problem. In order to reduce the error, this procedure can be iterated several times, using values of m each time closer to the true maximum of f_1RSB(m). Regarding the dynamics in temperature, we have proceeded as in the RS case, by either increasing β starting from the paramagnetic solution or decreasing β starting from many random initializations of the parameters q_1,_D,_1 and m, which are sampled consistently with their bounds from the regions where the low temperature maps of the loss functions revealed the presence of minima. By increasing β, we just remain stuck in the paramagnetic solution already found at the RS level and in the annealed limit. By lowering β, notwithstanding the huge number of attempts with different initial conditions, we have not been able to find any acceptable solution besides the paramagnetic one. §.§ Test: the Ising p-spin model In order to check our numerical integration technique, we have tested the procedure on the 1RSB solution of the Ising p-spin model. Let us briefly report some useful results drawn from Ref. <cit.>. The temperature at which the RS entropy becomes negative, signaling a thermodynamic anomaly, is T=1/(2 √(log 2))=0.60056…. The critical temperature of the transition to the 1RSB phase is given by the analytical expression T_c = 1/(2 √(log 2)) (1+2^-(p+1)√(π/(p (log 2)^3))), which for the case p=3 gives the value T_c = 0.66712… to be compared with our numerical integration. For an arbitrary value of p, the free energy with q_0 already set to zero is given by the following expression Φ = -β/4 [1 + (p-1)(1-m)q_1^p - p q_1^(p-1)] - 1/(mβ)log∫𝒟 z  (2coshΞ )^m, where in this case Ξ= β z √(p q_1^(p-1)/2) and, as usual, 𝒟 z = dz/√(2 π) exp(-z^2/2). For the present case we have just two equations determining the parameters q_1 and m as functions of β, which read as q_1 = ∫𝒟 z (2coshΞ )^m tanh^2Ξ/∫𝒟 z (2coshΞ )^m β/4 (p-1) q_1^p + 1/(β m^2)log∫𝒟 z  (2coshΞ )^m - 1/(β m)∫𝒟 z  (2coshΞ )^m log (2 coshΞ)/∫𝒟 z  (2coshΞ )^m = 0. The second equation is obtained from the derivative of the free energy with respect to m and, in this simple case, the equations can be easily solved directly. Alternatively, as discussed above, one can solve the first equation parametrically in m and then look for the maximum of the free energy in m “graphically”. We have tested both procedures and checked that the optimization of the loss functions defined for the two cases in analogy with the previous section yields the same results. Here, we report the results obtained through the optimization of the full loss function by using the FindRoot routine in Mathematica. The integrals have been computed both with the Gauss-Legendre quadrature rule and with the NIntegrate routine: we have checked the consistency of the values computed in the two ways. In Figs. <ref> and <ref> we display the temperature dependence of the two parameters q_1 and m respectively, by plotting them as functions of β. Data are obtained for the case p=3. The jump in q_1 occurs at β_c= 1.55 ± 0.025, with the uncertainty estimated as half of the β spacing (δβ = 0.05). This result is in good agreement with the expected critical temperature for the present case.
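As a cross-check of the numbers quoted above, the two p-spin equations are simple enough to be solved with a few lines of Python. The sketch below (the quadrature size, the m grid, the damping factor and the chosen β values are illustrative choices, not taken from the text) uses Gauss–Hermite quadrature for the Gaussian measure, a damped fixed-point iteration for q_1 at fixed m, and the “graphical” maximization of Φ over m described above.

```python
import numpy as np

p = 3
# probabilists' Gauss-Hermite rule: integrates f(z) against exp(-z^2/2); dividing the
# weights by sqrt(2*pi) turns the sum into the Gaussian average denoted by Dz in the text
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
weights = weights / np.sqrt(2.0 * np.pi)

def xi(z, beta, q1):
    return beta * z * np.sqrt(p * q1 ** (p - 1) / 2.0)

def solve_q1(beta, m, q_start=0.98, damping=0.5, tol=1e-12, itmax=5000):
    """Damped fixed-point iteration of q1 = <tanh^2(Xi)>_m at fixed (beta, m)."""
    q = q_start
    for _ in range(itmax):
        x = xi(nodes, beta, q)
        w = weights * (2.0 * np.cosh(x)) ** m
        q_new = np.sum(w * np.tanh(x) ** 2) / np.sum(w)
        if abs(q_new - q) < tol:
            break
        q = damping * q_new + (1.0 - damping) * q
    return q

def phi_1rsb(beta, m, q1):
    """1RSB free energy of the Ising p-spin with q0 = 0."""
    log_int = np.log(np.sum(weights * (2.0 * np.cosh(xi(nodes, beta, q1))) ** m))
    return (-beta / 4.0 * (1.0 + (p - 1) * (1.0 - m) * q1 ** p - p * q1 ** (p - 1))
            - log_int / (m * beta))

def phi_rs(beta):
    return -beta / 4.0 - np.log(2.0) / beta

for beta in (1.0, 1.6, 2.0):
    m_grid = np.linspace(0.05, 1.0, 96)
    sols = [(m, solve_q1(beta, m)) for m in m_grid]
    m_star, q_star = max(sols, key=lambda s: phi_1rsb(beta, s[0], s[1]))
    print(f"beta={beta:.2f}  q1={q_star:.4f}  m*={m_star:.3f}  "
          f"Phi_1RSB={phi_1rsb(beta, m_star, q_star):.5f}  Phi_RS={phi_rs(beta):.5f}")
```

For β well below β_c ≈ 1.5 the iteration collapses onto q_1 = 0 and the maximum over m simply reproduces the RS value, while above β_c a large q_1 appears together with m^* < 1, consistently with the behavior described above.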
The value of m oscillates for β < β_c, where q_1=0, signaling that in the high temperature phase the solution is degenerate in m. At β_c the value of m starts decreasing smoothly from 1 towards zero, see Fig. <ref>. Finally, in Fig. <ref> we plot the 1RSB free energy and the RS free energy as functions of β. The RS free energy is given by the analytical expression f(β) = -β/4-log (2)/β. Notice how at β_c there is a bifurcation of the free energy, corresponding to the point where the 1RSB free energy starts to dominate the thermodynamics of the model: the physical free energy is the maximum of the two curves for each value of β. It is important to stress once again that, starting from the high temperature region, i.e. a low value of β, the optimization remains stuck in the RS (or annealed/paramagnetic) solution. This is in line with the fact that the RS solution with q=0 remains stable at all temperatures in the p-spin model, differently from the SK case. The reason is that the transition to the 1RSB phase in the Ising p-spin model is first order from the point of view of the order parameter, while the SK model is characterized by a continuous order parameter. In other terms, in the p-spin case the order parameter jumps at the critical temperature to a value corresponding to an already existing state, whereas in the SK model the order parameter continuously increases from zero, where the new state starts to exist. In order to obtain the previous results, one can proceed in two ways: either by starting from a very high value of β and heating the system, or, when approaching β_c from below, by providing initial conditions for the loss optimization which are close to the right solution. The stability of the RS solution can be clearly visualized in the low temperature color maps of the loss function, which is defined as ℒ(q_1,m, β) = ( q_1 - ∫𝒟 z (2coshΞ )^m tanh^2Ξ/∫𝒟 z (2coshΞ )^m)^2 + (∂_m Φ)^2, where ∂_m Φ denotes the left hand side of Eq. (<ref>). In Fig. <ref> we display three different plots of the loss function, taken at the following values of the inverse temperature: β = 0.5, β = 3 and β=6.5, from left to right. Values of q_1 (vertical axis) and m (horizontal axis) are sampled in their definition interval [0,1], with a spacing of 0.005. Colors correspond to values of the loss function, increasing from blue to yellow. One can clearly identify a valley in the lower region of the plots, corresponding to the paramagnetic state at q_1=0 and degenerate in m, which is also present at β<β_c, and a smaller valley in the upper region, corresponding to the glassy state with a high value of q_1, which appears for β > β_c. As β increases, the glassy state moves to the left of the plot, towards lower values of m, consistently with the plot in Fig. <ref>.
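The color maps just described are also easy to reproduce. The following minimal sketch (the value β = 3 and the logarithmic color scale are illustrative choices; the 0.005 grid spacing matches the one quoted above) evaluates ℒ(q_1,m,β) on a grid with Gauss–Hermite quadrature and plots it with matplotlib.

```python
import numpy as np
import matplotlib.pyplot as plt

p = 3
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
weights = weights / np.sqrt(2.0 * np.pi)

def loss(q1, m, beta):
    """Squared residuals of the q1 equation and of the m equation (d Phi / d m)."""
    x = beta * nodes * np.sqrt(p * q1 ** (p - 1) / 2.0)
    w = weights * (2.0 * np.cosh(x)) ** m
    res_q = q1 - np.sum(w * np.tanh(x) ** 2) / np.sum(w)
    res_m = (beta * (p - 1) * q1 ** p / 4.0
             + np.log(np.sum(w)) / (beta * m ** 2)
             - np.sum(w * np.log(2.0 * np.cosh(x))) / (beta * m * np.sum(w)))
    return res_q ** 2 + res_m ** 2

beta = 3.0
grid = np.arange(0.005, 1.0, 0.005)          # same spacing used for the maps in the text
L = np.array([[loss(q1, m, beta) for m in grid] for q1 in grid])

plt.pcolormesh(grid, grid, np.log10(L), shading="auto")
plt.xlabel("m"); plt.ylabel("q_1"); plt.colorbar(label="log10 of the loss")
plt.title(f"1RSB loss landscape of the Ising p-spin, p={p}, beta={beta}")
plt.show()
```

The paramagnetic valley along q_1 = 0 and, for large enough β, the glassy minimum at large q_1 should be visible by eye, as in the panels described above.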
Once the values of these parameters have been found, the corresponding value of the free energy is the asymptotic value of the 1RSB free energy: if it turns out to be greater than the asymptotic value of the paramagnetic free energy, then we would have evidence of a phase transition at zero temperature. §.§ Test: the Ising p-spin model Let us first report the case of the Ising p-spin, in order to gain familiarity with this kind of computation. We take the limit at the leading order in 1/β, thus q_1 = 1 + O(1/β) m = y/β + O(1/β^2), which means that m(T) approaches zero linearly when T → 0, with a slope y to be determined. By direct substitution in the first term of the free energy one finds -β/4 [1 + (p-1)(1-m)q_1^p - p q_1^(p-1)] = (p-1)/4 y, up to terms of order O(1/β^2). The integral can be evaluated by considering that - 1/(mβ)log∫𝒟 z  [2cosh(β z√(p q_1^(p-1)/2))]^m = - 1/ylog∫𝒟 z exp[y/βlog (e^z β√(p/2) + e^-z β√(p/2))] and that, between the two exponentials inside the logarithm, the first one dominates for z>0, while the second one dominates for z<0 in the β→∞ limit. Thus, we can write - 1/ylog∫𝒟 z exp[y/βlog (e^z β√(p/2) + e^-z β√(p/2))] = - 1/ylog∫𝒟 z e^ y √(p/2) |z|, which can be expressed in terms of an error function as - 1/ylog∫𝒟 z e^ y √(p/2) |z| = - 1/ylog[e^p y^2/4(1+ erf(√(p)y/2))] = -py/4 - 1/ylog( 1+ erf(√(p)y/2)). Thus, at the leading order we get Φ(y) = -y/4 - 1/ylog( 1+ erf(√(p)y/2)), where y has to be determined by the self-consistency equation ∂Φ/∂ y=0, i.e. -1/4 + 1/y^2log( 1+ erf(√(p)y/2)) - 1/y√(p/π)e^-p y^2/4/(1+ erf(√(p)y/2)) = 0. This equation can be easily solved numerically: Fig. <ref> shows the results for p ∈ [3,20]. For p=3, see Fig. <ref>, the equation has the solution y^*=1.38356…, which is a maximum point of the free energy. The value of the free energy at y^* is Φ(y^*) = -0.813535…, in perfect agreement with the results of the numerical integration reported in the previous section. §.§ Case of interest We perform the following scaling ansätze on the 1RSB parameters of our model _1 ≃ - r_1 β^2     r_1 > 0 _1 ≃μ_1 β^2    μ_1 > 0 _D - _1 ≃β r_- _D - _1 ≃βμ_- m ≃y/β. The guiding idea is to take the parameters (or the differences between pairs of parameters) appearing inside the square roots as being of order O(β^2) and the others as O(β), apart from m, which is supposed to behave as in the p-spin case. The first local free energy in Eq. (<ref>) can be simply written by proceeding in analogy with the p-spin case and reads as f_τ(r_1,y) = - r_1 y/2 - 1/ylog[1 + erf( √(r_1)y/√(2)) ]. The other local free energy is more difficult to compute in the large-β limit. After implementing the scaling hypotheses on the parameters we find f_C (μ_-, μ_1, y) = - 1/ylog∫𝒟 u 𝒟 v (∫ da  db  e^β F(a,b|μ_-,μ_1, u,v))^y/β, where the function F corresponds to the argument of the exponent of the function g_β,0, see Eq. (<ref>), computed at x=_D and y=_1, and reads F(a,b | μ_-,μ_1, u,v) = - (a^2+b^2)^2 + μ_-/2 (a^2 + b^2) + √(μ_1)(au +bv). The integrals in a and b can be computed with the saddle point method, since we are in the limit β→∞. Therefore we have f_C (μ_-, μ_1, y) = - 1/ylog∫𝒟 u 𝒟 v  e^ y F( μ_-,μ_1, u,v ), where F( μ_-,μ_1, u,v ) = F(a^*,b^* |μ_-,μ_1, u,v) and a^*,b^* are functions of the other parameters, e.g. a^* = a^*(μ_-,μ_1,u,v), given by the solution of the coupled equations ∂ F/∂ a|_a^*,b^* = 0 ∂ F/∂ b|_a^*,b^* = 0. We do not report here the explicit expression of the function F(u,v | μ_-,μ_1 ), since, after a^*,b^* have been substituted, it becomes too cumbersome.
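Although its closed form is cumbersome, the function F(μ_-,μ_1,u,v)=F(a^*,b^*|μ_-,μ_1,u,v) is easy to evaluate numerically at each quadrature node (u,v), which is all that is needed once the remaining double integral is discretized. A minimal sketch, where the parameter values and the use of scipy with multiple starting points are illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

def F(ab, mu_minus, mu_1, u, v):
    a, b = ab
    r2 = a * a + b * b
    return -r2 ** 2 + 0.5 * mu_minus * r2 + np.sqrt(mu_1) * (a * u + b * v)

def F_saddle(mu_minus, mu_1, u, v):
    """Value of F at the dominant stationary point (a*, b*), i.e. at its global maximum."""
    best = -np.inf
    for start in [(0.1, 0.1), (1.0, 0.0), (0.0, 1.0), (-1.0, -1.0)]:
        res = minimize(lambda ab: -F(ab, mu_minus, mu_1, u, v), x0=start)
        best = max(best, -res.fun)
    return best

# example: one quadrature node (u, v) with illustrative values of the parameters
print(F_saddle(mu_minus=1.0, mu_1=0.5, u=0.3, v=-1.2))
```

Since F is a quartic that decays at large |(a,b)|, its global maximum solves the two coupled stationarity equations above; using a few starting points guards against being trapped on a secondary stationary point. In the full computation this evaluation is repeated, and can be vectorized, over all pairs of quadrature nodes (u_i,v_j).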
Finally, the entropic term of the free energy (<ref>) simply reduces to s_1(r_-,r_1,μ_-,μ_1,y) = - 1/ylog(1+y (r_1 - μ_1)/(r_- - μ_-)). Then, the complete free energy is given by f_1RSB(r_-,r_1,μ_-,μ_1,y) = r_-/2 - r_1 y/2 - 1/ylog[1 + erf( √(r_1)y/√(2)) ] - 1/ylog∫𝒟 u 𝒟 v  e^ y F(μ_-,μ_1,u,v ) - 1/(2 y)log(1+y (r_1 - μ_1)/(r_- - μ_-)). The self-consistency equations for the parameters introduced above can be easily derived from the previous free energy by imposing the vanishing of the derivatives. We get the following system of equations - y - √(2/π)r_1^-1/2 e^-r_1 y^2/2/(1 + erf(√(r_1)y/√(2))) - 1/(r_- - μ_- + y (r_1 - μ_1)) = 0 1/(r_- - μ_- + y (r_1 - μ_1)) = - (r_1 - μ_1)/(r_- - μ_-) ∫𝒟 u 𝒟 v  e^y F∂_μ_1 F/∫𝒟 u 𝒟 v  e^y F + 1/(r_- - μ_- + y (r_1 - μ_1)) = 0 ∫𝒟 u 𝒟 v  e^y F∂_μ_- F/∫𝒟 u 𝒟 v  e^y F + 1/(r_- - μ_- + y (r_1 - μ_1)) = 0, to which we have to add the derivative with respect to y. Resolution of the equations is in progress. CHAPTER: A NEW MEAN-FIELD THEORY FOR THE GLASSY RANDOM LASER In the previous chapter, we have presented a deterministic model with long-range interactions and a topology of the interaction network similar to the mode-locked graph, which can be solved by means of the replica method. It is now time to turn back to our original problem of reaching the analytical solution of the ML 4-phasor model. Inspired by the results of the numerical simulations discussed in Chap. <ref>, we believe that a mean-field solution for this model may exist, even if most likely of a different kind from the solution already obtained on the fully-connected graph <cit.>. In fact, the model is characterized by the combined effect of quenched disorder due to the random couplings and of deterministic dilution induced by the FMC. While in the case of ordered mode-locked graphs a long-range spatial structure can be identified notwithstanding the dilution <cit.>, we do not expect this to happen in the presence of disordered couplings. What we expect, and indeed what happens as soon as the replica method is applied to the model, is that the heterogeneities induced by the disorder do not simply average out as in the fully connected case, leading to the failure of the standard mean-field Replica Symmetry Breaking theory for spin-glass models. On the other hand, the order of the dilution is not such that the model can be defined on a sparse network, e.g. the Bethe lattice, where the cavity method implemented through message passing algorithms like belief propagation works well <cit.>. However, precisely because of the weakness of the dilution, our conjecture, supported by numerical evidence, is that the interaction network is still dense enough to compensate the effect of the heterogeneities and to be compatible with a mean-field approximation, although with a more complicated theory than the standard one. In this chapter, after presenting the ML 4-phasor model in connection with the Merit Factor problem and explaining the solution strategy, we report the various steps of the replica computation, which goes along the same lines as the previous chapter, leading to the saddle point equations for the ML 4-phasor model. § THE MODEL For the purpose of defining a new mean-field theory for the glassy random laser, we have developed a technique based on the formal analogy with the Bernasconi model for the MF problem <cit.>, to which the previous chapter has been entirely devoted. In order to make the discussion more specific, let us recall the model and add some technical details.
The Hamiltonian function in which we are interested reads [a] = - ∑_FMC J_ijkl [a_i a_j a_k a_l + c.c.], where a is a complex vector on the N-sphere and J_ijkl are quenched disordered couplings. The summation is restricted to all those indices which satisfy the FMC in the case of the linear comb |i-j+k-l| = 0. Due to this constraint on the interacting quadruplets, the Hamiltonian can be written in a more convenient way as follows [a] = - ∑_i<j<k^N J_ijk[a_i a_i+k a_j a_j+k + c.c.], where we recognize exactly the same structure of the indices of the Bernasconi model. Besides the complex variables, the other obvious difference with respect to the case of the previous chapter is the randomness of the couplings, which are independently extracted from the following zero-mean Gaussian distribution P(J_ijk)= 1/√(2 πσ_J^2)exp[-J_ijk^2/2 σ_J^2]        σ_J^2 = 3! J^2/2 N^2, where the scaling of the variance with N ensures the extensivity of the thermodynamic potentials. Moreover the sign in front of the summation in the Hamiltonian is different in the two cases. However, this is not a big deal in the present case, since for random couplings extracted from an unbiased symmetric distribution it makes no real difference whether one has a plus or a minus in the Hamiltonian definition: after averaging the partition function over the Gaussian distribution (<ref>), one is left with the Hamiltonian squared. §.§ Strategy of Solution Our strategy is to transform the disordered model of Eq. (<ref>) into a non-disordered one, which is equivalent to the Bernasconi model, though with different variables, and can be solved exactly, by using the associated random unitary model. In order to solve the model we will then need to introduce two averages over the disorder. * First Average: this is the average taken over the quenched randomness, which we denote as (⋯). Due to the dilution of the graph, after the average over the Gaussian couplings is performed, one is forced to introduce local matrices q_i^αβ. With respect to these variables, our problem simplifies to the study of an ordered model with long-range interactions, which has a Hamiltonian of the kind = ∑_k=1^N (1/N∑_i=1^N q_iq_i+k)^2 = ∑_k=1^N 𝒞_k^2, where we have dropped the dependence on the replica indices, just to make more clear the analogy with the Hamiltonian of the MF problem. Namely, we look for the sequences of q_i for which the correlation 𝒞_k^2, summed for all distances k, takes the lowest value. In other words, we have a problem which is analogous to the MF problem, but at the level of the local overlaps q_i, rather than of the spins. Clearly, if the original variables are phasors a_k, their local overlap will be in general a complex quantity, so this is a more general problem with respect to the search for the LABS. * Second Average: after the first average, one is left with a deterministic model in the local overlaps with long-range interactions. What we can do now is simply to apply all the machinery developed in the previous chapter for the solution of the MF problem, by following Ref. <cit.>. We will then associate to the model Eq. (<ref>) the corresponding random unitary model, by replacing the usual Fourier transformation of the variable with generic unitary matrices, over which we will perform a second average. In order to distinguish it from the average over the quenched disorder, we denote this average with (⋯)^U. 
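Before carrying out these averages, it may help to fix the combinatorics of the diluted graph with a minimal numerical sketch. The code below enumerates the FMC quadruplets of the linear comb in the form (i, i+k, j, j+k), draws the couplings from the Gaussian distribution with variance 3!J^2/(2N^2), and evaluates the Hamiltonian; for simplicity it uses real spherical variables, as in the simplified model studied in the next section, and open boundaries in the frequency index (an arbitrary convention of this sketch).

```python
import numpy as np

def fmc_quadruplets(N):
    """Quadruplets (i, i+k, j, j+k) with i < j and j + k <= N - 1: the linear-comb FMC."""
    quads = []
    for k in range(1, N):
        for i in range(N - k):
            for j in range(i + 1, N - k):
                quads.append((i, i + k, j, j + k))
    return quads

def hamiltonian(sigma, quads, J):
    """H = - sum over FMC quadruplets of J_q * s_i s_{i+k} s_j s_{j+k}."""
    return -sum(Jq * sigma[i] * sigma[ik] * sigma[j] * sigma[jk]
                for Jq, (i, ik, j, jk) in zip(J, quads))

rng = np.random.default_rng(0)
N = 32
quads = fmc_quadruplets(N)
J = rng.normal(0.0, np.sqrt(6.0 / (2.0 * N ** 2)), size=len(quads))  # sigma_J^2 = 3! J^2 / (2 N^2), with J = 1

# a random configuration on the N-sphere, sum_i sigma_i^2 = N
sigma = rng.normal(size=N)
sigma *= np.sqrt(N) / np.linalg.norm(sigma)

print("FMC quadruplets:", len(quads), "(of order N^3, against the ~N^4 of the complete 4-body graph)")
print("H[sigma] =", hamiltonian(sigma, quads, J))
```

The count of quadruplets makes the dilution explicit: the FMC retains a number of interactions growing as N^3, which is also the scaling of the computational cost quoted later for the Monte Carlo simulations.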
Following this procedure it turns out that, for our problem, one has to introduce the overlap between local overlaps: _αβ = 1/N∑_i=1^N q_i^α q_i^β. We will refer to this quantity as superoverlap. Our main result is to show that for the ML p-spin a mean-field ansatz for the structure of replica matrices can be done only at the level of _αβ matrices. Numerical simulations shows clear evidence of a glass transition at low temperatures, so that we are led to assume at low temperatures a replica-symmetry breaking ansatz for _αβ. § AVERAGE OVER DISORDER In order to disentangle the difficulties, we consider the case of non-complex variables, leaving the generalization to phasors for the future. We carry out the computations in parallel for both Ising and spherical spins, up to the point where some ansatz for the solution of the saddle point equations has to be performed. Then, the Hamiltonian we consider is _J(σ) = - ∑_i<j<k^N J_ijkσ_iσ_i+kσ_jσ_j+k and the configuration space is given by either Σ_N = {[ {± 1}^N    or 𝕊_N = {σ : ∑_i=1^N σ_i^2 = N }. ]. We notice that the two cases differ in the number of constraints: for Ising spins, we have N local constraints, whereas spherical spins are locally unbounded, but have to satisfy a global constraint. The partition function of the model can be written as = _Σ_N e^-β_J(σ), where the trace is a compact notation for the summation over all possible configurations, which in the case of spherical spins corresponds to an integration over the N-sphere, i.e. _Σ_N = {[ ∏_i=1^N∑_σ_i=± 1 ∫_𝕊_Nσ = ∫∏_i=1^n σ_i δ(∑_i=1^N σ_i^2 - ϵ N ), ]. where ϵ is a constant which tunes the constraint, i.e. the radius of the hypersphere (see Chap. <ref>). The replicated partition function reads ^n = _Σ_N^nexp[- β∑_α=1^n _J(σ^α) ] where Σ_N^n = ⊗_α=1^n Σ_N^α. In the thermodynamic limit the free energy of the model is given by f(β) = lim_n → 0lim_N →∞ - 1/β n Nlog^n, where the order of the two limits have been exchanged, as is usual in the replica method. The average of the replicated partition function over quenched disorder is computed as follows ^n = _Σ_N^nexp[β∑_α=1^n ∑_i<j<k^N J_ijkσ_i^ασ_i+k^ασ_j^ασ_j+k^α] = _Σ_N^n∏_i<j<k^N ∫ J_ijk/√(2 πσ _J^2)exp[ -J_ijk^2/2 σ_J^2 + β J_ijk∑_α=1^n σ_i^ασ_i+k^ασ_j^ασ_j+k^α] = _Σ_N^nexp[ (β J)^2 /43!/N^2∑_i<j<k^N ∑_αβ^n σ_i^ασ_i+k^ασ_j^ασ_j+k^ασ_i^βσ_i+k^βσ_j^βσ_j+k^β] = _Σ_N^nexp[ (β J)^2 /4∑_αβ^n ∑_k=1^N (1/N∑_i=1^N σ_i^ασ_i^βσ_i+k^ασ_i+k^β)^2 ], where the last expression is correct up to order O(1/N), since 3!/N^2∑_i<j<k = 1/N^2∑_ijk + O(1/N), see e.g. Ref. <cit.>. We have already reached the point where the standard mean-field computation breaks down for this model: although, as usual, the average over disorder has led to coupled replicas, we are not able to introduce at this point a global overlap between configurations of different replicas. What one can do instead is to change variables from spins to local overlaps q_i^αβ = σ_i^ασ_i^β. In order to predispose the model for a generic unitary transformation, in analogy with the Merit Factor we define the complex overlaps _i^αβ= σ_2i-1^ασ_2i-1^β+i σ_2i^ασ_2i^β and multiply the whole partition function by 1 = ∏_i=1^N/2∏_<αβ>^n ∫_i^αβ_i^αβ δ( _i^αβ-(σ_2i-1^ασ_2i-1^β+i σ_2i^ασ_2i^β) ) δ( _i^αβ-(σ_2i-1^ασ_2i-1^β-i σ_2i^ασ_2i^β)) = ∫∏_i=1^N/2∏_<αβ>^n [_i^αβ_i^αβλ_i^αβλ_i^αβ]  exp[1/2∑_αβ^n ∑_i=1^N/2λ_i^αβ[_i^αβ-(σ_2i-1^ασ_2i-1^β+i σ_2i^ασ_2i^β)] + c.c.], where the integral over the λ_i^αβ is between ± i ∞ and the argument of the exponent has been symmetrized in the replica indices. 
In order to keep a compact notation, we use the symbol <αβ> in the product to denote how many independent values of the local overlap we have introduced. On one hand, the product has α < β terms in the Ising case, where the diagonal terms are fixed by the local constraints and yield the following constant contribution exp[ (β J)^2 /4∑_α^n ∑_k=1^N (1/N∑_i=1^N (σ_i^α)^2 (σ_i+k^α)^2 )^2 ] = exp[(β J)^2Nn/4]. This term can be dropped from the computation and added eventually to the free energy. On the other hand, the product has α≤β terms in the spherical case, since the diagonal terms are free to vary compatibly with the global constraint. Furthermore, the delta functions implementing the spherical constraint on each replica of the system can be written in terms of the complex overlaps as follows ∏_α=1^n δ(∑_i=1^N (σ_i^α)^2 - ϵ N ) = ∏_α=1^nδ( ∑_i=1^N/2 ([_i^αα] + [_i^αα]) - ϵ N ). However, in order to keep the notation compact, for the moment this contribution will be left inside the definition of the trace operator for the continuous case. At this point, it is convenient to define an action functional in order to write the partition function in the following way ^n = _Σ_N^n∫∏_i=1^N/2∏_<αβ>^n [_i^αβ_i^αβλ_i^αβλ_i^αβ] exp[S(_i^αβ,_i^αβ,λ_i^αβ,λ_i^αβ,σ_i^α)] , where S(_i^αβ,_i^αβ,λ_i^αβ,λ_i^αβ,σ_i^α) = β^2J^2/4∑_αβ^n ∑_k=1^N/2( 1/N∑_i=1^N/2_i^αβ_i+k^αβ)^2 + 1/2∑_αβ^n ∑_i=1^N/2λ_i^αβ[_i^αβ-(σ_2i-1^ασ_2i-1^β+i σ_2i^ασ_2i^β)] + c.c. After averaging over the Gaussian couplings, we have a matrix field theory in the overlap matrices _i^αβ, which do not depend only on the replica indices αβ but also on the site index i. Different indices i and i+k are coupled in the interaction terms. One can define a new Hamiltonian as = β J^2/4∑_αβ^n ∑_k=1^N/2( 1/N∑_i=1^N q_i^αβ q_i+k^αβ)^2. which is the equivalent of the Merit Factor Hamiltonian of the local overlaps, while the other terms in the action (<ref>) are entropic contributions accounting for the fact that the fundamental variables are the spins. In analogy with the previous chapter, we introduce the generic unitary transform of the complex overlaps as ^αβ_k = ∑_r=1^N/2 U_kj ^αβ_j = [ U^αβ]_k, where U represents a generic N/2 × N/2 matrix of the unitary group. As we already know, the interaction term is diagonalized by this transformation and reads as = β J^2/4∑_αβ^n ∑_k=1^N/2 |_k^αβ|^4. We introduce the unitary-tranformed variables in the computation as usual by means of delta functions 1 = ∏_<αβ>^n ∏_k=1^N/2∫^αβ_k ^αβ_k δ(^αβ(k)-[U^αβ]_k) δ(^αβ(k)-[U^αβ]_k) = ∫∏_<αβ>^n ∏_k=1^N/2[ ^αβ_k ^αβ_k ξ^αβ_k ξ^αβ_k ] exp[ 1/2∑_αβ^n ∑_k=1^N/2 i ξ_k^αβ(^αβ(k)-[U^αβ]_k) + c.c.], where, once again, the diagonal terms have been included (excluded) in the continuous (discrete) case and the argument of the exponent has been symmetrized in the replica indexes. In order to lighten the notation, let us introduce the following convention for the symbol of integration over local variables x = ∏_<αβ>^n ∏_k=1^N/2 x_i^αβ. At this stage the partition function can be written as ^n = _Σ_N^n∫ξξλλ exp[ S_U(, , ,,ξ,ξ,λ,λ,σ) ], and the action reads as S_U(, , ,,ξ,ξ,λ,λ,σ) = β^2J^2/4∑_αβ^n ∑_k=1^N/2 |_k^αβ|^4 + 1/2∑_αβ^n ∑_k=1^N/2 i ξ_k^αβ(^αβ(k)-[U^αβ]_k) + c.c. + 1/2∑_αβ^n ∑_i=1^N/2λ_i^αβ[_i^αβ-(σ_2i-1^ασ_2i-1^β+i σ_2i^ασ_2i^β)] + c.c. where the subscript U specifies its dependence on the specific realization of a random unitary matrix. § AVERAGE OVER UNITARY MATRICES According to Eq. 
(<ref>) the replicated partition function depends on a new source of randomness, that is we have a free energy f_U(β) and we aim to compute f_U(β)^U. We can use the fact that logZ^n = log(1 + (Z^n-1) ) and since (Z^n-1) = O(n) we can write the free energy in the equivalent form f_U(β) = lim_n → 0lim_N →∞ - 1/β N^n-1/n. This is all very standard, but it allows us to understand that, at variance with the MF problem, in this case it is sufficient to perform the annealed average over the matrices U, since, thanks to the average over the couplings, we have already dealt with the problem of integrating the logarithm of the partition function and we are interested in the moments of the partition function. Therefore, the free energy averaged over the unitary group is simply f(β) = lim_n → 0lim_N →∞ - 1/β N(^n)^U-1/n. The computation becomes now very similar to the one performed in the previous chapter. We select the U-dependent part of Eq. (<ref>), we introduce auxiliary variables Ω_ik=i∑_αβξ_k^αβ_i^αβ/2 and perform the integration on the unitary group as follows exp[∑_kj^N/2Ω_kl U_kl + c.c.]^U = ∫ U U^†exp[ (Ω^† U + h.c.)] = exp[N/2( Ω^†Ω/N^2) ] = exp[ N/2( Λ/4)], where we have used the results of Refs. <cit.>. In particular, we recall that the function is defined as in Eq. (<ref>). Moreover, we have introduced the overlaps _αβ,γδ = 1/N∑_i=1^N/2_i^αβ _i^γδ     Λ_αβ,γδ = 1/N∑_k=1^N/2ξ_k^αβ ξ_k^γδ, which represent the new global order parameters of the theory: the overlaps between local overlap fields. In order to change variables from the local fields λ_k^αβ and _i^γδ to the global matrices _αβ,γδ and Λ_αβ,γδ we introduce the following terms in the partition function 1 = ∫_-∞^∞ D ∫_-i∞^i∞ D exp[N/4∑_αβ^n∑_γδ^n_αβ,γδ_αβ,γδ- 1/4∑_αβ^n∑_γδ^n∑_i=1^N/2_i^αβ_αβ,γδ_i^γδ] and, similarly, 1 = ∫_-∞^∞ DΛ ∫_-i∞^i∞DΛ̂ exp[ N/4∑_αβ^n∑_γδ^nΛ̂_αβ,γδΛ_αβ,γδ-1/4∑_αβ^n∑_γδ^n∑_k=1^N/2ξ_k^αβΛ̂_αβ,γδξ_k^γβ], where the integration measures for the global order parameters ,Λ and their Lagrange multipliers ,Λ̂ read as DX = ∏_<αβ>^n ∏_<γδ>^n X_αβ,γδ, with the usual meaning of the symbol <αβ>. Moreover, let us momentarily denote by x all the local variables of the theory {, , ,,ξ,ξ,λ,λ,σ} and by X all the global ones {, , Λ, Λ̂}. With these notations the averaged partition function reads as ≡(^n)^U = _Σ_N^n∫ D D DΛ DΛ̂ξξλλ exp[S(X,x)], where S(X,x) = β^2 J^2/4∑_αβ^n ∑_k=1^N/2|^αβ(k)|^4 + 1/2∑_αβ^n ∑_k=1^N/2( i ξ_k^αβ^αβ_k + c.c.) +1/2∑_αβ^n ∑_i=1^N/2λ_i^αβ[_i^αβ-(σ_2i-1^ασ_2i-1^β+i σ_2i^ασ_2i^β)] + c.c. + N/8 Tr G(Λ) + N/4∑_αβ∑_γδΛ̂_αβ,γδ Λ_αβ,γδ-1/4∑_αβ^n∑_γδ^n∑_k=1^N/2ξ_k^αβ Λ̂_αβ,γδ ξ_k^γδ + N/4∑_αβ^n∑_γδ^n_αβ,γδ _αβ,γδ-1/4∑_αβ^n∑_γδ^n∑_i=1^N/2_i^αβ _αβ,γδ _i^γδ. We now develop some further manipulations, which will simplify the expression of the partition function. First, the Gaussian integration over the complex matrices ξ_k^αβ can be easily carried out, yielding up to constant terms ∫ξξexp[ -1/4∑_αβ^n∑_γδ^n∑_k=1^N/2ξ_k^αβ Λ̂_αβ,γδ ξ_k^γδ + 1/2∑_αβ^n ∑_k=1^N/2( i ξ_k^αβ^αβ_k + c.c.) ] = = exp[ -N/2logΛ̂ + ∑_αβ^n∑_γδ^n ∑_k=1^N/2^αβ_k [Λ̂^-1]_αβ,γδ^γδ_k ]. Furthermore, we can store all the dependence on the local variables inside the definition of free-energy functions. Let us first consider the variables depending on the local indices of the real space. In order to simplify the dependence on indices it is better to rename the spin variables in a way which makes explicit the fact that they are independent integration variables: u_i=σ_2i-1 and v_i = σ_2i. 
Then, we define e^F_q() = _Σ_N^n∫λλ exp[ -1/4∑_i=1^N/2∑_αβ^n∑_γδ^n _i^αβ _αβ,γδ _i^γδ +1/2∑_αβ^n ∑_i=1^N/2λ_i^αβ[_i^αβ-(u_i^α u_i^β+i v_i^α v_i^β)] + c.c.], where we will see in a short while that F_q() can be factorized as N/2f_q(). Similarly, the dependence on the local unitary transformed variables can be put in the following free energy, which immediately factorizes in N/2 local identical contributions e^N/2 f_(Λ̂) = {∫∏_<α,β>^n ^αβ^αβexp[ β^2J^2/4∑_αβ^n |^αβ|^4 + ∑_αβ^n∑_γδ^n ^αβ [Λ̂^-1]_αβ,γδ^γδ] }^N/2. Eventually, we can rewrite the partition function in a very compact expression as follows = ∫ D D DΛ DΛ̂exp[ S(, , Λ, Λ̂) ] where S(, , Λ, Λ̂) = N/2{ f_(Λ̂)+ f_q() + 1/2Tr (Λ̂Λ) + 1/2Tr () + Tr G(Λ/4) -log(Λ̂)} §.§ Free Energy of the Local Overlap In the discrete case, it is immediate to see that the free energy F_q() corresponds indeed to the sum of N/2 independent and identical local free-energies, where the expression of the trace operation is simply _Σ_N^n = ∏_α=1^n ∏_i=1^N/2∑_u_i^α=± 1∑_v_i^α=± 1. Since the exponent in the definition of F_q() is diagonal in the local indices, one can define N/2 terms of the kind f_() = log∏_α=1^n∑_u^α=± 1∑_v^α=± 1∫∏_α < β^n [^αβ^αβλ^αβλ^αβ]   ×exp[-1/4∑_αβ^n∑_γδ^n ^αβ _αβ,γδ ^γδ +1/2∑_αβ^n λ^αβ[^αβ-(u^α u^β+i v^α v^β)] + c.c.] In order to show that the relation F_() = N/2 f_() holds also in the continuous case, one has to “open” the Dirac delta of the spherical constraint, which we have hidden inside the trace operator _Σ_N^n = ∫∏_α =1^n ∏_i=1^N/2 u_i^α v_i^α∏_α=1^nδ( ∑_i=1^N/2[(u_i^α)^2 + (v_i^α)^2 ] - ϵ N ). Equivalently, the spherical constraint can be written in terms of the local overlaps as in Eq. (<ref>). The operation of passing to the integral representation of a delta function, which in practice amounts to pass from a microcanonical (hard) version of the constraint to a canonical (soft) one, is harmless only when the interaction network is dense enough (see Chap. <ref>). When the graph of interactions is sparse, which is not the case here, the global constraint induce a condensation phenomenon and the equivalence between ensembles breaks down. The opening of the Dirac delta is not harmless and must be handled with much more care. However, in the present case, due to the results of Chap. <ref>, we do not have to worry, since a proper localization transition does not take place on the mode-locked graph. Then, by considering the expression of the constraint in the local overlap, we can write ∏_α=1^n δ(∑_i=1^N/2 ([_i^αα] + [_i^αα]) - ϵ N ) =∫_-i∞^i∞∏_α=1^n h^αexp[ ∑_α=1^n h^α( ∑_i=1^N/2 ([_i^αα] + [_i^αα]) - ϵ N ) ] = ∫_-i∞^i∞∏_α=1^n h^αexp[ - ϵ N ∑_α=1^n h^α + ∑_i=1^N/2∑_α=1^n h^α ([_i^αα] + [_i^αα]) ], from which we get exp[F_q() ] = ∫_-i∞^i∞∏_α=1^n h^α  e^- ϵ N ∑_α=1^n h^α [h^α], where the partition function [h^α] reads as [h^α] = ∫λλ u v  exp[ -1/4∑_i=1^N/2∑_αβ^n ∑_γδ^n _i^αβ _αβ,γδ _i^γδ + ∑_i=1^N/2∑_α=1^n h^α ([_i^αα] + [_i^αα]) + 1/2∑_αβ^n ∑_i=1^N/2λ_i^αβ[ _i^αβ - (u_i^α u_i^β+i v_i^α v_i^β) ] + c.c.]. It is now clear that this partition function can be factorized in the product of N/2 identical terms, so that we can write F_() = log∫_-i∞^i∞∏_α=1^n h^α expN/2[ f_q(,h) - 2ϵ∑_α=1^n h^α], where f_(,h) = log∫λλ u v  exp[ -1/4∑_αβ^n ∑_γδ^n ^αβ _αβ,γδ ^γδ + ∑_α=1^n h^α([^αα] + [^αα]) + 1/2∑_αβ^n λ^αβ[ ^αβ - (u^α u^β+i v^α v^β) ] + c.c.]. Notice that here we have kept for convenience the same notation for the integration measure as before, even if now it has lost the product over the local indices, i.e. 
𝒟x = ∏_α≤β^n x^αβ and equivalently for the spins. § SADDLE-POINT EQUATIONS In this section we focus on the case of continuous spherical variables, in which we are mostly interested, since they are closer to phasors, compared to discrete spins. However, the solution of the model has been set up also for the discrete case. By including the result at the end of the previous section, the action of the model can be written as S(Λ̂,Λ,,,h) =N/2 [ f_(Λ̂)+ f_(,h) + Tr G(Λ/4) + 1/2Tr(Λ̂Λ) + 1/2Tr () - Trlog(Λ̂) - 2 ϵTr(h) ]. The full set of saddle-point equations for the action reads as: ∂ S/∂ h^α = ∂ f_(,h)/∂ h^α -2ϵ=0 ∂ S/∂_αβ,γδ = ∂ f_(,h)/∂_αβ,γδ + 1/2_αβ,γδ =0 ∂ S/∂_αβ,γδ = _αβ,γδ + 1/2 [ Λ G'(Λ/4)]_αβ,γδ =0 ∂ S/∂Λ_αβ,γδ = Λ̂_αβ,γδ+ 1/2 [  G'(Λ/4)]_αβ,γδ=0 ∂ S/∂Λ̂_αβ,γδ = ∂ f_(Λ̂)/∂Λ̂_αβ,γδ +1/2Λ_αβ,γδ-[Λ̂^-1]_αβ,γδ=0. Now, by exploiting the property of the derivative of Eq. (<ref>) and following the same procedure as the previous chapter (which assumes commuting matrices), we can eliminate the variable Λ and Eqs. (<ref>) and (<ref>) in favor of the algebraic constraint ( -) = 1, where for convenience we have defined = Λ̂^-1/8 and performed the rescaling → 2. Consistently with these redefinitions, the two local free energies can be rewritten as f_() = log∫∏_α≤β^n ^αβ^αβexp[ β^2J^2/4∑_αβ^n |^αβ|^4 + ∑_αβ^n∑_γδ^n ^αβ_αβ,γδ^γδ] and f_(,h) = log∫λλ u v  exp[ -1/2∑_αβ^n ∑_γδ^n ^αβ _αβ,γδ ^γδ + ∑_α=1^n h^α([^αα] + [^αα]) + 1/2∑_αβ^n λ^αβ[ ^αβ - (u^α u^β+i v^α v^β) ] + c.c.], where in the first free energy the local integration variables have been rescaled as →/(2√(2)). It is worth stressing that all the variable redefinitions performed so far do not affect the theory up to irrelevant constants and a rescaling of the temperature. Hence, the set of saddle-point equations reduces to ∂ f_(,h)/∂ h^α - 2ϵ = 0 ∂ f_(,h)/∂_αβ,γδ + 1/2_αβ,γδ = 0 -∂ f_()/∂_αβ,γδ + _αβ,γδ = 0 ( - )= 1 , which can be obtained by extremizing the following reduced action with respect to the matrix elements of and : A(,,h) = N [f_() + 2 f_q(,h) + Trlog(-) -  4ϵ Tr(h) ]. Notice that, when explicitly computing the derivatives of the free energies, the saddle point equations lead to the physically relevant relations ⟨Re[^αα] + Im[^αα] ⟩_ _ = 2ϵ ⟨^αβ^γδ⟩_ _ = ⟨^αβ^γδ⟩_ _ = _αβ,γδ, where the definitions of the averages are intuitively induced by the expression of the local free energies. §.§ Symmetries of the Overlap-Overlap Correlations The structure of the overlap-overlap matrices – and with that the whole formalism – can be simplified and lightened a lot considering the symmetries of the original Hamiltonian under reversal of all spins. Recall that the number of spins in the 4-body interaction term is even. As a consequence the replicated action must be invariant when all spins are flipped in one replica <cit.>, namely we need it to be invariant under the transformation {^1α,^2α,…,^nα} ⟶ { -^1α,-^2α,…,-^nα}. The direct consequence of this is that among generic multipoint correlation functions of the kind ⟨^α_1β_1 ^α_2β_2…^α_kβ_k⟩, only those where each upper index is repeated an even number of times are different from zero. In particular the two point correlations of Eq. (<ref>) are non-zero only when α=γ and β=δ ⟨^αβ^γδ⟩_ _ = ⟨^αβ^γδ⟩_ _ δ_αγ δ_βδ = ⟨^αβ^αβ⟩_ _. This means that the only non-zero terms of the matrix _αβ,γδ are those diagonal with respect to the couple of indices: _αβ,γδ= _αβ δ_αγ δ_βδ. This simple observation greatly simplifies all the mean-field equations and the matrices appearing therein. 
The simplified saddle-point equations are ∂ f_()/∂_αβ = ⟨^αβ^αβ⟩_ _ = _αβ -∂ f_(,h)/∂_αβ = 1/2⟨^αβ^αβ⟩_ _ = 1/2_αβ ∂ f_(,h)/∂ h^α = ⟨Re[^αα] + Im[^αα] ⟩ = 2ϵ ( - ) = 1, where the two local free energies now read: f_() = log∫∏_α≤β^n ^αβ^αβexp[ β^2J^2/4∑_αβ^n |^αβ|^4 + ∑_αβ^n _αβ |^αβ|^2 ] and f_(,h) = log∫λλ u v exp[ -1/2∑_αβ^n _αβ |^αβ|^2 + ∑_α=1^n h^α([^αα] + [^αα]) + 1/2∑_αβ^n λ^αβ[ ^αβ - (u^α u^β+i v^α v^β) ] + c.c.], § RS ANSATZ The first step towards a replica-symmetric solution is to bring back the free energy written in the second line of Eq. (<ref>) to the form where spins appear explicitly. We rewind the steps performed, by first integrating over the variables λ and then proceeding in the following way f_(,h) = log∫ u v exp[ -1/2∑_αβ^n _αβ |^αβ|^2 + ∑_α=1^n h^α([^αα] + [^αα]) ] ×∏_α≤βδ( ^αβ - ((u^α u^β+i v^α v^β) ) δ( ^αβ - ((u^α u^β - i v^α v^β) ) = log∫ u v exp[ -1/2∑_αβ^n _αβ[(u^α u^β)^2+ (v^α v^β)^2] + ∑_α=1^n h^α[(u^α)^2 + (v^α)^2 ] ] =2 log∫∏_α=1^n σ_αexp[ -1/2∑_αβ^n σ_α^2 _αβσ_β^2 + ∑_α=1^n h^ασ_α^2 ] where the integrals in the variables u and v have been factorized in two identical contributions. In the following, we will refer to this expression of the local free energy in real space as f_σ(,h), to remind that now the local integration variables are the spins. The simplest assumption for the elements of the global order parameters _αβ and _αβ is the replica-symmetric one: _αβ = q̂_D δ_αβ + q̂_0 (1-δ_αβ) _αβ = μ_D δ_αβ + μ_0 (1-δ_αβ), together with h_α = h for the field. Therefore, we can write 1/2 f_σ(,h) = log∫∏_α=1^n σ_αexp[ -1/2(q̂_D-q̂_0) ∑_α=1^n σ_α^4 - q̂_0/2( ∑_α=1^n σ_α^2 )^2 + ∑_α=1^n h_ασ_α^2 ] = log∫ z ∏_α=1^n ∫σ_αexp[ -1/2(q̂_D-q̂_0) σ_α^4 + √(-q̂_0) z σ_α^2 + h_ασ_α^2 ] = log∫ z [ _0(q̂_D,q̂_0,h,z) ]^n, where z = e^-z^2/2/√(2π) and we have defined the local partition function _0(q̂_D,q̂_0, h, z) = ∫σ f_0(σ | q̂_D,q̂_0, h, z) f_0(σ | q̂_D,q̂_0, h, z) = exp[ -1/2(q̂_D-q̂_0) σ^4 + (√(-q̂_0) z + h ) σ^2. ] From this finite-n expression, it is easy to find that lim_n → 01/n f_σ(,h) = 2 ∫ z log_0(q̂_D,q̂_0, h, z). The local free energy of the dual space is completely diagonal in the replica indices and can be written as follows f_() = log∫∏_α≤β^n ^αβ^αβexp[ β^2J^2/4∑_αβ^n |^αβ|^4 + ∑_αβ^n _αβ |^αβ|^2 ] = log∫∏_α=1^n ^αα^ααexp[ β^2J^2/4∑_α=1^n |^αα|^4 + μ_D ∑_α=1^n |^αα|^2 ] + log∫∏_α < β^n ^αβ^αβexp[ β^2J^2/2∑_α < β^n |^αβ|^4 + 2 μ_0 ∑_α < β^n |^αβ|^2 ] = n log_β (μ_D) + n(n-1)/2log_β (μ_0). In the previous expression, the site partition function _β,0 is defined as _β(μ) = ∫ x x g_β,0(x | μ ) g_β( x | μ) = exp[ (2 - δ_μ,μ_D) (β^2 J^2/4|x|^4 + μ |x|^2) ], where the Kronecker delta in the second definition accounts for the factor 2 in the off-diagonal case. Eventually, by taking the limit n→ 0, we get lim_n→ 01/n f_() = log_β (μ_D) - 1/2log_β (μ_0). The entropic term in Eq. (<ref>) in the limit n→ 0 reads lim_n→ 01/nlog(-) = log(q̂_D-μ_D - (q̂_0-μ_0)) + q̂_0-μ_0/q̂_D-μ_D - (q̂_0-μ_0), so that, in conclusion, the RS action is given by lim_n→ 01/n A_RS(,,h) = 4∫ z log_0(q̂_D,q̂_0, h , z) + log_β (μ_D) - 1/2log_β (μ_0) + log(q̂_D-μ_D - (q̂_0-μ_0)) + q̂_0-μ_0/q̂_D-μ_D - (q̂_0-μ_0) -4ϵ h. §.§ RS Equations In this section the self-consistency equations for the RS parameters are derived. Let us start from the computation of the derivatives of the action (<ref>), by considering separately its terms. In the following we imply the limit n→ 0, to shorten the notation. 
We have: ∂ f_σ/∂q̂_D = 2 ∫ z 1/_0∂_q̂_D_0 = 2 ∫ z ∫σ∂_q̂_D f_0 /∫σ f_0 = - ∫ z ⟨σ^4 ⟩_0, where we have defined the average ⟨ (⋯) ⟩_0 = ∫σ f_0(σ | q̂_D,q̂_0, h, z ) (⋯)/_0(q̂_D,q̂_0, h, z) . Similarly, we have ∂ f_σ/∂q̂_0 = ∫ z [ ⟨σ^4 ⟩_0 - 1/√(-q̂_0)∂_z ⟨σ^2 ⟩_0] = ∫ z ( ⟨σ^2 ⟩_0)^2 where after integration by parts we have used the fact that ∂_z ⟨σ^2 ⟩_0 = √(-q̂_0)[⟨σ^4 ⟩_0 - (⟨σ^2 ⟩_0)^2 ]. We consider now the free energy of the dual space and define the average ⟨ (⋯) ⟩_μ = ∫ x x  g_β(x | μ) (⋯) /_β(μ), where the subscript μ={μ_D,μ_0} is just a reminder of the g_β function argument. It is easy, then, to see that ∂ f_/∂μ_D = ⟨ |x|^2 ⟩_μ_D ∂ f_/∂μ_0 = - ⟨ |x|^2 ⟩_μ_0. The derivatives of the entropic term read just like in the previous chapter. Therefore, we can write ∂ A_RS/∂q̂_D = 0    →    - 2 ∫ z ⟨σ^4 ⟩_0 + A = 0 ∂ A_RS/∂q̂_0 = 0    →    2 ∫ z ( ⟨σ^2 ⟩_0)^2 + B = 0 ∂ A_RS/∂μ_D = 0    →   ⟨ |x|^2 ⟩_μ_D - A = 0 ∂ A_RS/∂μ_0 = 0    →   ⟨ |x|^2 ⟩_μ_0 + B = 0 ∂ A_RS/∂ h = 0    →   ∫ z ⟨σ^2 ⟩_0 - ϵ = 0, were we have defined A = q̂_D - μ_D - 2 (q̂_0 - μ_0) /[q̂_D - μ_D - (q̂_0 - μ_0)]^2 B = q̂_0 - μ_0/[q̂_D - μ_D - (q̂_0 - μ_0)]^2. The system of equations can be put in a more familiar form, by exploiting the RS expression of the algebraic constraint (<ref>), which in the n → 0 limit is given by the set of equations q_D (q̂_D - μ_D) - q_0 (q̂_0 -μ_0) =1 (q_D - 2q_0)(q̂_0 -μ_0) + q_0 (q̂_D - μ_D) = 0. Thanks to these equations, we can eliminate q̂_D and q̂_0 from the saddle-point equations by replacing them with the expressions q̂_D = μ_D + q_D - 2q_0/(q_0-q_D)^2 q̂_0 = μ_0 - q_0/(q_0-q_D)^2, which are analogous to Eqs. (<ref>), with the only difference that here q_D is not fixed to 1. By substituting into A and B, one finds A=q_D and B=-q_0. A further simplification follows by noting that the average ⟨ (⋯) ⟩_μ can be rewritten as ⟨ (⋯) ⟩_μ = ∫_0^∞ r r (⋯) g_β (r | μ )/∫_0^∞ r r g_β( r | μ ), where we have passed to polar coordinates in the complex integration variables and g_β ( r | μ ) = exp[ (2-δ_μ,μ_D) (β^2 J^2/4r^4 + μ r^2) ]. Then, we have q_D/2 = ∫ z ⟨σ^4 ⟩_0 q_0/2 = ∫ z ( ⟨σ^2 ⟩_0)^2 q_D = ⟨ r^2 ⟩_μ_D q_0 = ⟨ r^2 ⟩_μ_0 ϵ = ∫ z ⟨σ^2 ⟩_0 where ⟨ r^2 ⟩_μ can be written in terms of error functions. This is a system in the independent variables q_D,q_0,μ_D,μ_0 and h and Eqs. (<ref>) must be used instead of q̂_D and q̂_0 inside the definition of the function f_0. The solution of these equations is in progress. § 1RSB ANSATZ In the 1RSB ansatz the overlap matrices have the structure introduced in the previous chapter and Eq. (<ref>) holds. We use the same expression for the free-energy in the direct space as in the previous section, going back to the trace over the spin variables. Then, with simple manipulations we get the following expression lim_n → 01/n f_σ(,h) = 2/m∫ z log∫ y _1^m(q̂_D, q̂_0, q̂_1, h, z, y). where we have defined _1(q̂_D, q̂_0, q̂_1, h, z, y) = ∫σ f_1( σ | q̂_D,q̂_0,q̂_1, h, z, y) f_1( σ | q̂_D,q̂_0,q̂_1, h, z, y) = exp[ -1/2(q̂_D-q̂_1) σ^4 + (√(-q̂_0) z + √(q̂_0-q̂_1)y + h ) σ^2 ] as a clear generalization of the RS functions _0 and f_0. Similarly, the 1RSB expression of the free-energy in the dual space is given by lim_n→ 01/n f_() = log_β (μ_D) - m/2log_β (μ_0) + m-1/2log_β (μ_1) where the local partition function Z_β and the function g_β are those defined in the previous section. The 1RSB expression of the entropic term is the same as Eq. (<ref>), with the matrix =- for the present case. 
The self-consistency equations for the 1RSB parameters read as q_D/2 = ∫ z ∫ y _1^m ⟨σ^4 ⟩_1/∫ y  _1^m q_0/2 = ∫ z ∫ y _1^m (⟨σ^2 ⟩_1)^2 /∫ y  _1^m q_1/2 = ∫ z (∫ y _1^m ⟨σ^2 ⟩_1/∫ y  _1^m )^2 q_D = ⟨ r^2 ⟩_μ_D q_0 = ⟨ r^2 ⟩_μ_0 q_1 = ⟨ r^2 ⟩_μ_1 ϵ = ∫ z ∫ y _1^m ⟨σ^2 ⟩_1/∫ y  _1^m , where ⟨ (⋯) ⟩_1 = ∫σ f_1(σ | q̂_D,q̂_0,q̂_1, h, z ) (⋯)/_1(q̂_D,q̂_0,q̂_1, h, z) . Resolution of these equations is in progress. CHAPTER: CONCLUSIONS AND PERSPECTIVES This work finds its place in the statistical mechanical approach to light amplification in disordered media. In particular, it addresses the problem of going beyond the standard mean-field RSB (Replica Symmetry Breaking) theory employed to find the solution of spin-glass models for random lasers, thus improving the theory towards a more realistic description of these optical systems. The leading spin-glass model for the study of the glassy lasing transition has been introduced, by connecting its key features with the semiclassical theory of random lasers. In particular, it has been shown that: (i) a non-diagonal linear coupling between pairs of cavity modes arises as a result of the interaction with a bath of diffusive modes escaping the system; (ii) a 4-body term of interaction accounting for the light-matter interactions emerges in the context of third-order perturbation theory in the mode amplitudes. When the mode dynamics is considered in the slow amplitude basis, where the modes have a definite frequency, as lasing modes approximately have to, both the linear and the nonlinear couplings turn out to be selected by a Frequency Matching Condition (FMC). Moreover, it has been shown that generalizing the results of the Statistical Light-mode Dynamics approach developed by Fischer, Gordon and coworkers leads to a thermodynamic theory for the stationary regime of RLs. The spin-glass (2+4)-phasor Hamiltonian is obtained by taking disordered couplings, where the randomness in the mode-coupling is induced by the randomness in the spatial extension of the modes and by the spatial heterogeneity of the nonlinear optical response. The standard mean-field theory requires the model to be defined on the complete graph of interactions, where the FMC does not play any role, since it is always satisfied. In this approximation, the model is compatible only with the narrow-bandwidth limit, where the emission spectrum has a width comparable to the broadened linewidth of the single modes. This is the price to pay for the huge simplification that one has in the mean-field fully-connected approximation, which allows to apply in a quite straightforward way the RSB techniques developed for mean-field spin glasses and to derive the phase diagrams described in Chap. <ref>. However, the regime to which this mean-field solution pertains is very special, therefore preventing the theory from being applied to generic experimental situations. For instance, neglecting the coupling dilution induced by the FMC hinders the reproduction of the central narrowing in random laser empirical spectra. Consequently, it is of great interest to investigate the model on the mode-locked diluted graph of interaction. So far the most important result that has been found regarding the mode-locked glassy random laser is the evidence of a mixed-order ergodicity breaking phase transition, as revealed by Monte Carlo numerical simulations (see Chap. <ref>). 
The joint study of the specific-heat divergence at the critical temperature and of the low temperature behavior of the Parisi overlap distribution function reveals both the first and second-order nature of the transition, which is the typical scenario of a Random First Order Transition. This is a feature which was already predicted on the complete graph of interactions and seems quite solidly preserved in the diluted model. However, in numerical simulations of the Mode-Locked (ML) 4-phasor model preceding the present thesis work the transition was found not to be compatible with mean-field theory, according to the estimated value of the scaling exponent of the critical region. This exponent ν_eff appears to be outside the boundaries corresponding to a mean-field universality class. These limits are derived in Chap. <ref> through a simple mean-field argument based on the second-order nature of the glass transition, which consists in the divergence of the thermal response at the critical point. In this work, we have presented new results from numerical simulations of the ML 4-phasor model, showing how the previous results were affected by strong finite-size effects. Finite-size effects are unavoidable when dealing with simulations of a dense model such as the mode-locked random laser: the number of connections in the graph requires a number of operations which scales as the cube of the system size, thus forbidding the simulation of large enough sizes. In order to reduce these effects, we have developed a simulation strategy based on periodic boundary conditions on the frequency space, for which band-edge modes participate in the same number of interacting quadruplets as the modes in the center of the spectrum. Therefore, a given size of the simulated model with periodic boundary conditions on the frequencies can be regarded as the bulk of a larger size with free boundaries. By means of this strategy, and by also performing simulations of the original model, but with a larger number of sizes and of disordered samples, we have assessed that the scaling of the critical region is compatible with mean-field theory up to the precision of our analysis. However, the model seems not to be in the universality class of the Random Energy Model, a feature suggesting that the mode-locked random laser may need a different mean-field solution than its fully connected counterpart. The study of the glass transition has been completed with the analysis of the Parisi overlap probability distribution function, where the use of periodic boundary conditions has resulted in more pronounced side-peaks. Another interesting phenomenon that has been studied in this work is the possibility of a localization – also termed power condensation – transition in the mode-locked glassy random laser. In this context, localization is understood as the phenomenon whereby a finite number of modes carries an extensive amount of light intensity, and not in the sense of disorder induced Anderson or many body localization in quantum systems. The presence of localization, as the spherical constraint is tuned above a given threshold, is only theoretically possible in the presence of dilution of the interaction network: in the fully-connected case, the high connectivity of the model is sufficient to guarantee the equipartition of the constraint among all degrees of freedom. From the careful finite-size study of the localization order parameter reported in Chap.
<ref>, we have been able to assess that, although some evidence of incipient localization can be found, the glassy phase of light is not strictly speaking localized. This means that the results of our analysis are not compatible with single light modes carrying an extensive amount of intensity. However, we have found an anomaly in the finite-size study of the participation ratio, whose size dependence is compatible with the presence of high power modes, carrying an intensity |a_k|^2 ∼ N^1-Ψ/2, with Ψ > 0. We stress that in presence of power condensation those modes would have intensity |a_k|^2 ∼ N, i.e. Ψ = 0. Moreover, the study of the spectral entropy has revealed that the low temperature phase of the model is characterized by the breaking of equipartition. We have termed “pseudo-localization” the transition to this hybrid phase, where the light intensity is not completely localized and at the same time is not equipartitioned among the modes. One of the most relevant aspects of the picture revealed by the numerical results presented in this work is that the critical temperature of the glass and of the pseudo-localization transitions is the same within the statistical uncertainty. This occurrence makes the mode-locked random laser a very interesting problem where ergodicity breaking manifests itself in a twofold way: replica-symmetry breaking, which is typical of quenched disordered systems, and localization, which has mostly been studied in the context of quantum many-body systems. The opportunity given by this model is to study both transitions at the same time, opening the way to more general studies for arbitrary nonlinearities and degrees of coupling dilution. Supported by the numerical evidence that the mode-locked random laser is, indeed, a mean-field model, we have approached its solution with analytical techniques. The similarity of the mode-locking dilution rule with the kind of correlations in the Hamiltonian of the Bernasconi model has led us to perform a preliminary study of the Merit Factor problem. Though the presence of a low temperature glass phenomenology is suggested by numerical studies of the finite-size Hamiltonian, within the accuracy of our analysis, the model does not exhibit any transition at finite temperature. The presence of a transition has been investigated through the replica method applied to the model in the space where the spin variables are mapped by a random unitary matrix. In Chap. <ref>, the model has been solved in the annealed limit, in the replica symmetric ansatz and with one step of replica symmetry breaking. The self-consistency equations for the free parameters have been deeply studied with different integration techniques and computational tools: the only solution revealed at finite temperature is the paramagnetic one. The study of the zero temperature limit of the 1RSB free energy and self-consistency equations is still in progress. Other important information may come from the study of the stability of the RS solution, which helps to distinguish the kind of replica symmetry breaking possibly characterizing the low temperature phase. The solution of the mode-locked random laser has been addressed in Chap. <ref>, where the replica technique developed for the Bernasconi model has been adapted to the case of interest. In order to simplify the computation, we have first considered real spherical variables, eliminating the technicality of dealing with the phases at an initial step of investigation. 
In this case, after the average over disorder, we pass to a generalized Fourier space by transforming the local overlaps with a random unitary matrix. The major difficulty of defining a global order parameter for the model and finding closed equations to determine it as a function of temperature has been successfully addressed, with the introduction of a new order parameter, a superoverlap, which is a measure of the correlations among local overlaps. The analysis has been completed up to the formal derivation of the 1RSB self-consistency equations for the order parameters of the model. Future developments of the work presented in this thesis are being considered. In the remaining part of this chapter, the most relevant directions of research that we aim to follow are discussed. A first natural continuation of the work presented in Chap. <ref> is to investigate the reason why a glassy phase at finite temperature has not been found for the Merit Factor problem. This may be due either to some technical issues of the computation procedure, which we are still carefully inspecting, or to the fact that the transition occurs at zero temperature. While the study of the zero-temperature saddle-point equations is in progress, we also aim to compute the stability of the RS solution. In fact, if the RS solution turns out to be unstable below a certain value of the temperature, this may be taken as an indication that the correct low-temperature solution of the model should be looked for with a FRSB ansatz. However, the more reasonable scenario seems to be that there is no transition at all in the model with random unitary matrices. It may be that the mapping of the deterministic model onto a disordered model with unitary matrices is not correct: the mapping to a model with two random orthogonal transformations of the spin variables seems to be more reliable and will be studied in depth in the future. On top of that, numerical studies of the original deterministic model and of the models with random orthogonal and unitary transformations are in progress. Regarding the Mode-Locked 4-phasor model, though the saddle point equations have not been studied in detail yet, the replica approach based on random unitary matrices may suffer from the same problems as in the Merit Factor problem. Probably, in this case as well a double orthogonal transformation is needed to reach the correct solution. However, the approach developed in this work proved useful at least to deal with the dependence on the site indices due to the deterministic dilution rule. Similar computations will be performed on the model with the introduction of orthogonal matrices. Once the study of the 1RSB saddle-point equations is completed and a phase diagram for the model is obtained, a straightforward extension of the new mean-field theory is to include the phases of the modes and determine the role played by the phase locking in the diluted case. Then, we plan to generalize the computation to the case of a non-zero mean coupling distribution, in order to study the effect of ferromagnetic alignment with respect to the disorder of the couplings. Another analytical approach to the study of the Mode-Locked 4-phasor model, which may yield a useful basis for comparison with the replica computation presented in this work (or variations on the theme), involves performing an expansion in the coupling magnitude, which, given the density of the mode-locked graph, still has to decrease as a power of the size also in the diluted model.
The small-coupling expansion can also be interpreted as a high-temperature expansion, the well-known Plefka/Georges-Yedidia expansion <cit.>. A useful reference for this approach is Ref. <cit.>, where the second-order truncation of the expansion is performed on the p-spin model, precisely to allow a comparison with the replica computation. In the case of the ML 4-phasor model, one has to perform the expansion while keeping the implementation of the Frequency Matching Condition. Eventually, the third-order truncation of the expansion, which includes the Onsager reaction term, yields the TAP (Thouless-Anderson-Palmer) free energy, paving the way to a TAP analysis of the model.

As discussed in the first part of this work, the numerical approach gives the opportunity to gain physical insight into the mode-locked diluted models and to bridge the theory with the experiments. Work is in progress on the numerical simulation of the ML 4-phasor model with a continuous frequency distribution, also considering the relaxation dynamics towards equilibrium <cit.>. Extracting the frequencies uniformly in the interval [0,1] allows for a more realistic description of the frequency profile of a random laser than the frequency comb, which properly applies to the case of standard closed-cavity lasers. Moreover, experimental measurements of RL spectra are always affected by a transient of non-equilibrium dynamics, which the equilibrium data collected for the results presented in this work do not take into account. One important study in progress in this context is the test of the correspondence between the IFO (Intensity Fluctuation Overlap) and Parisi overlap distribution functions in the case of the mode-locked diluted interaction graph. In fact, this correspondence has been proved analytically only in the fully connected mean-field theory, and it would be of great importance to check its validity also in the diluted case, which is closer to experimental RLs. Another direction of investigation regards the addition of the 2-body term in the study of the mode-locked diluted model, which has been discarded throughout this thesis for simplicity. For instance, simulations of the complete (2+4)-phasor model on the mode-locked graph should be able, in principle, to show traces of FRSB, if this feature of the solution on the fully connected graph survives in the diluted model. Of course, in this case the effect of FRSB will have to be carefully disentangled from the finite-size effects on the overlap distribution function (see Chap. <ref>). Simulations in the presence of the linear term can also be used to study the inclusion of a gain profile in the dynamics, which so far has been considered only in the ordered case. Furthermore, once a code is written to simulate a gain profile, it could be fed with random laser gain curves measured in experiments.

CHAPTER: INTEGRATION OVER THE UNITARY GROUP In this Appendix we derive the result contained in Eq. (<ref>) with two different techniques, by considering elementary cases, which were already discussed in Refs. <cit.>. Our aim is simply to provide an explanation for the specific function appearing in Eq. (<ref>). The key idea is to reduce the difficulty of the integration over the Haar measure of the unitary group in Eq. (<ref>) by passing to a scalar problem, where the integral can be performed with the saddle-point method in the large-N limit. In fact, as pointed out in Ref.
<cit.>, the general matrix source problem can not be solved with this technique, since the Lagrange multiplier, which implements the unitary constraint in the integration measure, is itself a matrix. Thus, after carrying out the integration over the unitary group, one is left with N^2 coupled variables and the saddle point method is not applicable. For this reason, in order to deal with the general problem, Brezin and Gross developed in Ref. <cit.> an alternative procedure based on the study of the equations of motion in the large-N limit. §.§.§ Equations of motion Let us start by reviewing the simple example provided in Ref. <cit.>. Consider the case of a unitary N-dimensional vector uu^† = 1 in an external complex vector field a. In this case, the problem simplifies to the computation of the following integral Z = K ∫∏_i=1^N u_i u_i δ( ∑_i=1^N |u_i|^2 - 1 ) e^N [∑_i=1^N (u_i a_i + u_i a_i) ], where K is a constant and the delta function restricts the integration to the space of unitary vectors. First, we notice that the partition function Z satisfies the following equation: ∑_i=1^N ∂^2 Z/∂ a_i ∂a_j = N^2 Z. This can be checked by computing the derivatives of Z with respect to the components of the external field: ∂ Z/∂a_i = K ∫∏_i=1^N u_i u_j δ( ∑_i=1^N |u_i|^2 - 1 ) N u_i e^N [ ∑_i=1^N (u_i a_i + u_i a_i) ] and ∂^2 Z/∂ a_j ∂a_i = K ∫∏_i=1^N u_i u_j δ( ∑_i=1^N |u_i|^2 - 1 ) N^2 u_i u_j e^N (∑_i=1^N u_i a_i + u_i a_i ). Considering the case i=j and summing over i, one immediately finds Eq. (<ref>) due to the constraint. We now make an important remark: in order for the theory to be invariant under unitary transformations, the dependence of Z on the external field can only be mediated by its modulus. This can be seen as follows: after u→ Uu with UU^† = I, the action in Eq. (<ref>) reads N[u(Ua^†) + (U^†a) u^†], and the only way to make U disappear from the theory is that the dependence is on aa^†. Then, if we denote λ=aa^†, it must be Z=Z(λ). Given this remark, Eq. (<ref>) takes the form of an ordinary differential equation: λ Z”(λ) + N Z'(λ) = N^2 Z(λ), where the apex denotes derivatives with respect to λ. This can easily be proved by repeatedly using the chain rule to obtain ∂^2 Z/∂ a_j ∂a_i = Z/λ^2 a_i a_j + Z/λδ_ij and, again, taking i=j and summing over i. By now imposing the fact that log Z is proportional to N, we look for solutions of Eq. (<ref>) of the kind Z=exp[N (λ)]. By plugging this ansatz into Eq. (<ref>) we find an equation for (λ), which in the large-N limit reduces to λ ('(λ))^2 + '(λ) = 1. The solution of this differential equation with initial condition (0)=0 is (λ) = - 1 + √(1 + 4λ) - log[1/2 + 1/2√(1+4λ)], which, apart from irrelevant constant factors, corresponds to the formula in Eq. (<ref>). The general solution of the external field problem posed in Eq. (<ref>) is a generalization of this procedure and leads to a result which is much more complicated. However, the final expression of the partition function involves double trace operators, which for large N are irrelevant in the replica computation carried out in Chap. <ref>: by taking into account only terms with a single trace operation, the general result contained in Ref. <cit.> is a very simple generalization of the scalar case. If we denote by A the external matrix field, we have Z = exp[ N (A A^†) ], which corresponds to Eq. (<ref>), with A=Ω/N. §.§.§ Saddle-point computation The other example we deal with is taken from Ref. <cit.>. 
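Before turning to that example, the closed-form result just derived can be checked numerically in a few lines. The sketch below (Python with NumPy, not part of the original derivation) verifies that the stated (λ) satisfies the large-N equation λ ('(λ))² + '(λ) = 1 with (0) = 0, using a finite-difference derivative; the grid of λ values is arbitrary.

```python
import numpy as np

def G(lam):
    # closed-form solution quoted above: -1 + sqrt(1+4*lam) - log(1/2 + sqrt(1+4*lam)/2)
    s = np.sqrt(1.0 + 4.0 * lam)
    return -1.0 + s - np.log(0.5 + 0.5 * s)

lam = np.linspace(1e-6, 10.0, 7)
eps = 1e-6
Gp = (G(lam + eps) - G(lam - eps)) / (2.0 * eps)   # numerical derivative G'(lambda)
residual = lam * Gp**2 + Gp - 1.0                   # should be ~0 at every lambda

print(G(np.array([0.0])))            # boundary condition G(0) = 0
print(np.max(np.abs(residual)))      # largest violation of the ODE
```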
Consider the case in which Ω has only one element different form zero which is extensive in N, say Ω_11=ω N /2, so that the trace operation reduces to (Ω^† U+h.c.) = ωN/2u_11 + c.c. In this case, we have to compute the integral Z = ∫∏_i,j=1^N/2 u_iju_ij∏_i=1^N/2δ(∑_j=1^N/2|u_ij|^2 -1 ) e^ωN/2u_11 + c.c. where the integration over the Haar measure of the unitary group has been opened with N/2 global constraints, one for each line in order to implement the unitary condition U U^†=I. Here, we are considering the case of N/2 × N/2 matrices only to be coherent with the treatment of Chap. <ref>, but the final result will be the same of the previous example a part from an overlall factor 1/2. The integration over the elements of every line but the first one gives a constant. Hence, one is left with Z = K∫∏_j=1^N/2 u_1ju_1jδ(∑_j=1^N/2|u_1j|^2 -1 ) e^ωN/2u_11 + c.c. = K∫∏_j=1^N/2 x_j x_j δ(∑_j=1^N/2|x_j|^2 -1 ) e^ωN/2x_1 + c.c., where in the second line we have just renamed the integration variables and dropped a redundant index, i.e. u_1j→ x_j. By comparing this integral with the one in Eq. (<ref>), we can see that this is a special case of the previous example. The integration in x_1 can be isolated as follows Z =K ∫ x_1 x_1 e^ωN/2x_1 + c.c.∫∏_j=2^N/2 x_j x_j δ(∑_j=2^N/2|x_j|^2 -(|x_1|^2-1) ) = K ∫ x x e^ωN/2x + c.c.∫∏_j=1^N/2-1 x_j x_j δ(∑_j=1^N/2-1|x_j|^2 -(|x|^2-1) ). The N/2-1-dimensional integral can be performed easily by passing to the real and imaginary parts of the variables x_j=a_j+i b_j and using the property of the delta of a function: ∫∏_j=1^N/2-1 x_j x_j δ(∑_j=1^N/2-1|x_j|^2 -(|x|^2-1) ) ∝∫∏_j=1^N/2-1 a_j b_j δ(∑_j=1^N/2-1(a_j^2+b_j^2) -(|x|^2-1) ) = ∫∏_i=1^2(N/2-1) t_i δ(∑_j=1^2(N/2-1)t_i^2 -(|x|^2-1) ) ∝∫_0^∞ r r^2(N/2-1)-1δ(r^2 -(|x|^2-1) ) = ∫_0^∞ r r^2(N/2-1)-1δ(r-√(1-|x|^2))/2√(1-|x|^2) ≈ (1-|x|^2)^N/2, where the last step holds in the large-N limit. Hence, the integral we aim to compute boils down to Z = ∫ x x  e^ωN/2x + c.c. (1-|x|^2)^N/2 = ∫ x x expN/2[log(1-|x|^2) + ωx + ωx], which is a one-dimensional integral that can be solved with the saddle point method in the large-N limit. By passing to real and imaginary part in x=a+ib and ω=v+iw, we have Z = ∫ a b expN/2[log(1-a^2-b^2)+ 2va+2wb ] ≈expN/2𝒢(a^*,b^*,v,w) . where 𝒢(a,b,v,w)=log(1-a^2-b^2)+ 2va+2wb and (a^*,b^*) is its maximum. The saddle point equations are v a^2 + v b^2 +a -v =0 w a^2 + w a^2 +b -w =0 and the solution that maximizes 𝒢 is (a^*,b^*)=(-v-v√(1+4v^2+4w^2)/2(v^2+w^2),-w-w√(1+4v^2+4w^2)/2(v^2+w^2)). Calculating 𝒢 in (a^*,b^*) one finds after some algebra 𝒢(|ω|)= 𝒢(a^*,b^*,v,w)= -1 + √(1+4|ω|^2) - log[1/2+1/2√(1+4|ω|^2)]. This result corresponds precisely to Eq. (<ref>), taking into account that Ω_11=ωN/2. It is interesting to note that the dependence of in the modulus of the external source ω. This is a consequence of the invariance of the theory under unitary transformations. CHAPTER: RS COMPUTATIONS FOR THE MF PROBLEM In this Appendix, we report all the Replica Symmetric (RS) computations for the Random Unitary model of the MF problem. In the first part, the RS action is derived, by implementing the ansatz (<ref>) in the action of the model. In order to simplify the calculation, the three terms by which the action (<ref>) is comprised are treated separately. Then, self-consistency equations for the RS parameters are obtained, by imposing the stationarity of the RS action. § RS ACTION §.§.§ Local free energy f_τ Consider first the free energy in the spin variables. 
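Before doing so, a quick independent check of the saddle-point result above: one can maximize 𝒢(a,b) = log(1-a²-b²) + 2va + 2wb numerically over the unit disk and compare the maximum with the closed form 𝒢(|ω|). The sketch below uses SciPy's Nelder-Mead optimizer; the specific values of v and w are arbitrary illustrations, and this is only a numerical cross-check, not part of the original argument.

```python
import numpy as np
from scipy.optimize import minimize

def G_closed(om_abs):
    s = np.sqrt(1.0 + 4.0 * om_abs**2)
    return -1.0 + s - np.log(0.5 + 0.5 * s)

def neg_G(p, v, w):
    a, b = p
    r2 = a * a + b * b
    if r2 >= 1.0:                       # outside the domain of log(1 - a^2 - b^2)
        return np.inf
    return -(np.log(1.0 - r2) + 2.0 * v * a + 2.0 * w * b)

v, w = 0.8, -0.3
res = minimize(neg_G, x0=[0.0, 0.0], args=(v, w), method="Nelder-Mead")
print(-res.fun, G_closed(np.hypot(v, w)))   # the two values should agree to good accuracy
```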
It is convenient to use the notation _τ = ∏_a=1^n[∑_{τ^a }], where now τ denotes a vector in the replica space. By plugging the RS ansatz in place of the matrix ℛ, we have f_τ(_D,_0) = log_τ e^-1/2∑_ab^n τ^a[_Dδ_ab+ _0(1-δ_ab)]τ^b =log e^-(_D-_0)n_τ e^-1/2_0|∑_a^n τ^a|^2 = - n(_D-_0) + log_τ∫ h h/4π e^-|h|^2/2 + √(-_0/4)∑_a^n (τ^a h + τ^a h) = - n(_D-_0) + log∫ h h/4π e^-|h|^2/2_τ e^√(-_0)∑_a^n (τ^ah), where in the second step a Hubbard-Stratonovich transformation has been used to decouple the square. In order to take the trace over the spins, let us pass to the real and imaginary parts of the complex variables, by defining τ^a=ρ^a+iσ^a and h=h_R + i h_I. Since (τ^ah)=ρ^a h_R + σ^a h_I, we have _τ e^√(-_0)∑_a^n (τ^ah) = ∏_a=1^n ∑_{ρ^a,σ^a} e^√(-_0)∑_a^n (ρ^a h_R + σ^a h_I) = ( ∑_ρ=± 1 e^√(-_0)ρ h_R∑_σ = ± 1 e^√(-_0)σ h_I)^n = 4^n cosh^n(√(-_0) h_R)cosh^n(√(-_0) h_I). Now, we consider the integration over the complex Gaussian variable h. It is easy to see that the two integrals in the real and imaginary parts of h can be factorized in two identical contributions, leading to f_τ(_D,_0) = nlog4 -n (_D-_0) + 2 log∫ h/√(2π) e^-h^2/2cosh^n(√(-_0) h), which is the expression of the local free energy at finite n. We now have to take the limit lim_n → 0 f_τ/n, i.e. we only have to keep 𝒪(n) terms in the expression of f_τ. The term containing the Gaussian integral in the expression above can be treated as follows log∫ h/√(2π) e^-h^2/2cosh^n(√(-_0) h) = log∫ h/√(2π) e^-h^2/2exp[n cosh (√(-_0) h) ] ≈log∫ h/√(2π) e^-h^2/2[ 1 + n cosh (√(-_0) h) ] = log[1 + n ∫ h/√(2π) e^-h^2/2cosh (√(-_0) h) ] ≈ n ∫ h/√(2π) e^-h^2/2cosh (√(-_0) h). These are all standard manipulations, which are true at first order in the limit n→ 0 and will be repeatedly used in the following. Eventually, by redefining lim_n → 0 f_τ/n → f_τ for convenience, we have f_τ(_D,_0)= log4 - _D + _0 + 2∫ h/√(2π) e^-h^2/2logcosh (√(-_0)h), This expression resembles the RS free energy of standard spin-glass models, such as the SK model, see e.g. Ref. <cit.>, except for the fact that it does not depend explicitly on the temperature: in this model the temperature appears only in the local free energy f_C. §.§.§ Local free energy f_C Let us now turn to the local free energy that is computed in the generalized Fourier space. By using the RS ansatz on the matrix ℳ, we proceed as follows f_C(_D,_0) =log∫∏_a=1^n C^a C^a e^-β∑_a^n |C^a|^4 + 1/2(_D-_0)∑_a^n|C^a|^2 e^1/2_0 |∑_a^nC^a|^2 =log∫ z z/4π e^-|z|^2/2∏_a=1^n ∫ C^a C^a e^-β |C^a|^4 + 1/2(_D-_0)|C^a|^2 + √(_0/4) (C^a z+C^a z) =log∫ z z/4π e^-|z|^2/2[∫ C C e^-β |C|^4 + 1/2(_D-_0)|C|^2 + √(_0)(C z)]^n , where analogously to the previous case the square of the sum in the exponential has been decoupled by introducing a complex Gaussian integration on the auxiliary variable z and the dependence on n has been factorized. Here, at variance with the free energy f_τ it is convenient to keep the complex formalism, which is more compact. By using the definition (<ref>), the finite-n expression of the second free energy can be compactly written as f_C(_D,_0) = log∫ z z/4π e^-|z|^2/2 I_β,0^n(_D,_0,z). By expanding linearly in the limit n→0, using the same manipulations of the previous section, and replacing lim_n → 0 f_C/n → f_C, we finally get f_C(_D, _0)= ∫ z z/4π e^-|z|^2/2log I_β,0(_D,_0,z). 
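As a numerical sanity check on the spin-sector expression above (the C-sector free energy involves the function I_β,0 defined in the main text and is not reproduced here), f_τ(ℛ_D, ℛ_0) can be evaluated by Gauss-Hermite quadrature and compared with its small-|ℛ_0| expansion, f_τ ≈ log 4 - ℛ_D - ℛ_0²/2, which follows from log cosh(x) ≈ x²/2 - x⁴/12. This is an illustrative sketch with arbitrary parameter values.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

x, w = hermgauss(60)
h = np.sqrt(2.0) * x            # nodes for the standard Gaussian measure Dh
wh = w / np.sqrt(np.pi)         # corresponding weights (they sum to 1)

def f_tau_rs(RD, R0):
    # RS spin free energy: log 4 - R_D + R_0 + 2 E_h[ log cosh( sqrt(-R_0) h ) ]
    a = np.sqrt(-R0)
    return np.log(4.0) - RD + R0 + 2.0 * np.sum(wh * np.log(np.cosh(a * h)))

RD = 1.0
for R0 in (-1e-3, -1e-2, -1e-1):
    print(R0, f_tau_rs(RD, R0), np.log(4.0) - RD - R0**2 / 2.0)   # agreement improves as |R0| -> 0
```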
§.§.§ Entropic term In order to rewrite the entropic term in the action, note that for a generic RS matrix 𝒜 the following relation holds = (𝒜_D-𝒜_0)^n-1(𝒜_D+(n-1)𝒜_0) , as a consequence of the fact that A has only two kind of eigenvalues: 𝒜_D-𝒜_0 with degeneracy n-1 and 𝒜_D+(n-1)𝒜_0 with degeneracy 1. Thus, in the present case we have log(ℛ-ℳ) = log (ℛ-ℳ) = (n-1)log(ℛ_D -ℳ_D- ℛ_0 + ℳ_0) + log(ℛ_D -ℳ_D- ℛ_0 + ℳ_0 +n(ℛ_0 - ℳ_0 )), and by carefully taking limit the n → 0 up to order O(n), we find lim_n → 01/nlog(ℛ-ℳ) = log(ℛ_D -ℳ_D- ℛ_0 + ℳ_0 ) + ℛ_0 - ℳ_0/ℛ_D -ℳ_D- ℛ_0 + ℳ_0. For convenience, let us define a function s_0(_D, _0, _D, _0) equal to the right hand side of the previous equation, so that lim_n → 0log(ℛ-ℳ)/n = s_0. By considering together the three expressions Eqs. (<ref>), (<ref>) and (<ref>) the expression of the RS action (<ref>) reads lim_n → 01/n A_RS = log 2 - ℛ_D - ℛ_0/2 + ∫ h/√(2π) e^-h^2/2logcosh (√(-ℛ_0)h) + 1/2∫ z z/4π e^-|z|^2/2log I_β,0(_D, _0, z) + 1/2 s_0(_D, _0, _D, _0), where the overall factor 1/2 in Eq. (<ref>) has been taken into account. § RS EQUATIONS Let us consider separately the various terms, as in the previous section, and compute the derivatives of the action. The derivative of the local free energy f_τ (<ref>) with respect to _D gives ∂ f_τ/∂_D = - 1. The derivative of f_τ with respect to _0 is less immediate. In order to obtain a simplified expression, we proceed as follows ∂ f_τ/∂ℛ_0 = 1 - 2 1/2 √(-ℛ_0)∫ h/√(2π)e^-h^2/2 h tanh(√(-ℛ_0)h) = 1 - 1/√(-ℛ_0)∫ h/√(2π) e^-h^2/2∂/∂ htanh(√(-ℛ_0)h) = 1 - ∫ h/√(2π)e^-h^2/2[ 1-tanh^2(√(-ℛ_0)h) ], where an integration by parts has been performed and the resulting boundary term neglected[It may be a superfluous remark, but notice that, even if apparently the sign has not changed after the integration by parts, in fact, it has changed, since ∂_h(e^-h^2/2) = - e^-h^2/2 h. This kind of integration by parts will be used several times in the following.]. Since the Gaussian integral is normalized to 1, we finally get ∂ f_τ/∂ℛ_0 =∫ h/√(2π)e^-h^2/2tanh^2(√(-ℛ_0)h). Consider now the local free energy f_C in Eq. (<ref>). For the following computations we will go on working with complex numbers, instead of passing to their real and imaginary parts. This is done only to keep our notation more compact, and, in fact, some step will be purely formal. The derivative of f_C with respect to _D gives ∂ f_C/∂ℳ_D = ∫ z z/4π e^-|z|^2/2∂__Dlog I_β,0(_D,_0, z) = ∫ z z/4π e^-|z|^2/2∫ CC  g_β,0(C | _D,_0, z) |C|^2 /2 /I_β,0(_D,_0, z) and, by using the average defined in Eq. (<ref>), we get ∂ f_C /∂ℳ_D = 1/2∫ z z/4π e^-|z|^2/2⟨ |C^2| ⟩_0. As for the other free energy, the derivative of f_C with respect to the off-diagonal element _0 requires some non-trivial step. We have ∂ f_C/∂ℳ_0 = ∫ z z/4π e^-|z|^2/2∂__0log I_β,0(_D,_0, z) = ∫ z z/4π e^-|z|^2/2∫ C C  g_β,0(C | _D,_0, z) (-|C|^2/2 + (ℳ_0)^-1/2(C z)/2)/I_β,0(_D,_0, z) = - 1/2∫ z z/4π e^-|z|^2/2⟨ |C|^2 ⟩_0 + 1/2√(ℳ_0)∫ z z/4π e^-|z|^2/2z/2⟨ C ⟩_0 +1/2√(ℳ_0)∫ z z/4π e^-|z|^2/2z/2⟨C⟩_0 = - 1/2∫ z z/4π e^-|z|^2/2⟨ |C|^2 ⟩_0+ 1/√(ℳ_0)∫ z z/4π e^-|z|^2/2z/2⟨ C ⟩_0, where the definition of the average ⟨ (⋯) ⟩_0 has been used, together with the fact that in the next-to-last step the third integral can be cast into the second one, by simultaneously changing variables to C ↔C and z ↔z. This gives twice the contribution of the second integral. 
We now formally integrate by parts in z, and, therefore, we can write ∂ f_C/∂ℳ_0 = - 1/2∫ z z/4π e^-|z|^2/2⟨ |C|^2 ⟩_0 + 1/√(ℳ_0)∫ z z/4π e^-|z|^2/2∂/∂ z⟨ C ⟩_0, where we used the fact that ∂_z(e^-zz/2) = -e^-zz/2z /2. The derivative of the expectation value of C is computed as follows 1/√(ℳ_0)∂/∂ z⟨ C ⟩_0 = 1/√(ℳ_0)∫ CC  g_β,0(C |_D,_0, z) C √(_0)C/2/I_β,0(_D,_0, z) - 1/√(ℳ_0)∫ CC  g_β,0(C | _D,_0, z) C ∫ CC  g_β,0(C | _D,_0, z) √(_0)C/2/(I_β,0(_D,_0, z) )^2 = 1/2⟨ |C|^2 ⟩_0 - 1/2 |⟨ C ⟩_0|^2, where in the last step the linearity of complex conjugation has been used to pass from the integral of the complex conjugate to the complex conjugate of the integral. Once integrated over the Gaussian variable z, the first of the two terms in the last step is exactly equal to the first term of Eq. (<ref>), so that the final result is ∂/∂ℳ_0 f_C = - 1/2∫ z z/4π e^-|z|^2/2 |⟨ C ⟩_0|^2. Eventually, we are able to write the self-consistency equations for the RS parameters, by considering together Eqs. (<ref>), (<ref>), (<ref>) and (<ref>). We have ∂ A_RS/∂_D=0     →     -1 + ∂ s_0/∂_D = 0 ∂ A_RS/∂_D=0     →    1/2∫ z z/4π e^-|z|^2/2⟨ |C^2| ⟩_0 + ∂ s_0/∂_D = 0 ∂ A_RS/∂_0=0     →    ∫ h/√(2π)e^-h^2/2tanh^2(√(-ℛ_0)h) + ∂ s_0/∂_0 = 0 ∂ A_RS/∂_0 =0     →     -1/2∫ z z/4π e^-|z|^2/2 |⟨ C ⟩_0|^2 + ∂ s_0/∂_0 = 0, where the derivatives of the entropic term defined in Eq. (<ref>) are easy to compute and read as ∂ s_0/∂_D = - ∂ s_0/∂_D = ℛ_D-ℳ_D-2( ℛ_0 - ℳ_0) /(ℛ_D -ℳ_D- ℛ_0 + ℳ_0)^2 ∂ s_0/∂_0 = - ∂ s_0/∂_0 = ℛ_0-ℳ_0/(ℛ_D -ℳ_D- ℛ_0 + ℳ_0)^2. The previous set of equations can be simplified as follows. The first equation in the set, which contains the derivative with respect to the diagonal element _D, fixes the condition (ℛ_D -ℳ_D - ℛ_0 + ℳ_0)^2=ℛ_D -ℳ_D- 2 (ℛ_0 - ℳ_0), which can be eliminated after substitution in all the other equations of the set, leading to 1/2∫ z z/4π e^-|z|^2/2⟨ |C^2| ⟩_0 - 1 = 0 ∫ h/√(2π)e^-h^2/2tanh^2(√(-ℛ_0)h) + ℛ_0-ℳ_0/ℛ_D -ℳ_D- 2 (ℛ_0 - ℳ_0) = 0 - 1/2∫ z z/4π e^-|z|^2/2 |⟨ C ⟩_0|^2 - ℛ_0-ℳ_0/ℛ_D -ℳ_D- 2 (ℛ_0 - ℳ_0) = 0. A further simplification comes from the RS expression of the algebraic relation (<ref>), which defines the overlap in terms of the other variables. By considering the product of two RS matrices, we get a system of only two independent equations ℛ_D-ℳ_D + (n-1)(ℛ_0-ℳ_0)q_0=1 (ℛ_0-ℳ_0)(1+(n-2)q_0)+ (ℛ_D-ℳ_D)q_0=0 , which, always at order O(n), simplifies to ℛ_D-ℳ_D -(ℛ_0-ℳ_0)q_0=1 (ℛ_0-ℳ_0)(1-2q_0)+ (ℛ_D-ℳ_D)q_0=0. If one isolates q_0 from the second equation, one finds q_0= ℛ_0-ℳ_0/ℛ_D-ℳ_D-2 (ℛ_0 - ℳ_0) . By substituting this expression of q_0 in the self-consistency equations, we find q_0 = ∫ h/√(2π)e^-h^2/2tanh^2(√(-ℛ_0)h) q_0 = 1/2∫ z z/4π e^-|z|^2/2 |⟨ C ⟩_0|^2 1 = 1/2∫ z z/4π e^-|z|^2/2⟨ |C^2| ⟩_0 This set of equations is completed by the two algebraic relations among the RS parameters, Eqs. (<ref>). CHAPTER: 1RSB COMPUTATIONS FOR THE MF PROBLEM In this Appendix, the 1RSB computations for the Random Unitary model of the MF problem are reported by following the same scheme of Appendix <ref>. First the 1RSB action is derived, by implementing the ansatz (<ref>) on all the order parameters of the theory; then self-consistency equations are obtained by imposing the stationarity of the action. § 1RSB ACTION §.§.§ 1RSB local free energy f_τ Let us consider the free energy f_τ in Eq. (<ref>). We perform the 1RSB ansatz on the matrix , which leads to the introduction of three parameters _D, _0 and _1. 
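Before carrying out the 1RSB computation, the finite-n algebraic relations used above can be confirmed directly: for an RS overlap matrix Q with ones on the diagonal, the matrix 𝒜 = ℛ - ℳ = Q⁻¹ is itself RS and satisfies the two quoted equations exactly at any integer n. The short numerical check below uses illustrative values of n and q₀.

```python
import numpy as np

n, q0 = 8, 0.35
Q = (1.0 - q0) * np.eye(n) + q0 * np.ones((n, n))   # RS overlap matrix, ones on the diagonal
A = np.linalg.inv(Q)                                # plays the role of R - M in the constraint A Q = I

A_D, A_0 = A[0, 0], A[0, 1]
print(np.allclose(A, (A_D - A_0) * np.eye(n) + A_0 * np.ones((n, n))))   # A is again RS

# the two finite-n relations quoted above
print(np.isclose(A_D + (n - 1) * A_0 * q0, 1.0))
print(np.isclose(A_0 * (1.0 + (n - 2) * q0) + A_D * q0, 0.0))
```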
In the following, we avoid writing the dependence of f_τ on these parameters explicitly. By using the same notation of the previous Appendix for the trace over the replicated spin variables τ, we have f_τ = _τ e^-1/2ℛ_0∑_ab^nτ^aτ^b -1/2(ℛ_1-ℛ_0)∑_k^n/m∑_ab^m τ^aτ^b - 1/2(ℛ_D-ℛ_1) ∑_a^n |τ_a|^2 =-n(ℛ_D-ℛ_1) + log_τe^-1/2ℛ_0 |∑_a^nτ^a|^2e^1/2(ℛ_0-ℛ_1)∑_k^n/m|∑_a^m τ^a|^2. The first exponential in the trace operator is of the same kind of the one already encountered in the RS computation and can be decoupled by introducing only one auxiliary Gaussian variable with a Hubbard-Stratonovich transformation e^-ℛ_0 |∑_a^nτ^a|^2= ∫ h h/4π e^-|h|^2/2e^√(-ℛ_0)∑_a^n (τ^a h). On the other hand, the second exponential requires the introduction of n/m auxiliary Gaussian variables, one for each diagonal block: e^1/2(ℛ_0-ℛ_1)∑_k^n/m|∑_a^m τ^a|^2 = ∏_k^n/m∫ u_k u_k/4π e^-|u_k|^2/2 e^√((ℛ_0 - ℛ_1))∑_a^m (τ^a u_k) . In order to take the trace over the complex spin variables, as in the previous Appendix, we pass to the real and imaginary parts both of the spins τ^a=ρ^a+iσ^a and of the auxiliary Gaussian fields h=h_R+ih_I and u_k=u_k^R+iu_k^I. Moreover, we use the fact that ∑_a^n = ∑_k^n/m∑_a^m and ∏_a^n = ∏_k^n/m∏_a^m. Consider the two exponential functions depending on the the parameters _0 and _1 in the previous expressions. We can write their product as follows: ∏_k^n/m∏_a^m ∑_{τ^a} e^√(-ℛ_0)(τ^a h)e^√(ℛ_0 - ℛ_1)(τ^a u_k) = ∏_k^n/m∏_a^m ∑_{ρ^a,σ^a} e^√(-ℛ_0)(ρ^a h_R + σ^a h_I )× × e^√(ℛ_0 - ℛ_1) (ρ^a u_k^R + σ^a u_k^I) = ∏_k^n/m∏_a^m ∑_ρ^a =± 1 e^ρ^a(√(-ℛ_0) h_R + √(ℛ_0 - ℛ_1) u_k^R )∑_σ^a =± 1 e^σ^a(√(-ℛ_0) h_I + √(ℛ_0 - ℛ_1) u_k^I ) = 4^n ∏_k^n/mcosh^m(√(-ℛ_0) h_R + √(ℛ_0 - ℛ_1) u_k^R ) × ×cosh^m(√(-ℛ_0) h_I + √(ℛ_0 - ℛ_1) u_k^I ) = 4^n ∏_k^n/mcosh^mΞ(_0,_1, h_R,u_k^R) cosh^mΞ(_0,_1, h_I,u_k^I), where the function Ξ is defined as the argument of the cosh function (see Eq. (<ref>)). When the final expression of the previous sequence is integrated over the Gaussian variables it factorizes in two equivalent contributions, one containing an integral in h_R and n/m identical integrals in u_k^R, the other one containing an integral in h_I and n/m identical integrals in u_k^I. By changing integration variables the two contributions give a square, which taken out of the log, leads to f_τ = n log 4 -n(ℛ_D-ℛ_1) + 2 log∫𝒟 h (∫𝒟 u cosh^m Ξ)^n/m, where the compact notation for the Gaussian integration measure has been adopted. This is the expression of f_τ in the 1RSB ansatz at finite n. After carefully taking the limit n → 0 and replacing as usual lim_n → 0 f_τ /n → f_τ, we find the O(n) expression f_τ(_D,_0,_1,m) = log 4 - (ℛ_D-ℛ_1) + 2/m∫𝒟 h log∫𝒟 u cosh^mΞ(_0,_1, h,u) Once again, this expression resembles the 1RSB free energy of the SK model, even if here the dependence on temperature is only implicit in the parameters. §.§.§ 1RSB local free energy f_C We perform the 1RSB ansatz on the matrix and plug it into the free energy f_C in Eq. (<ref>), so that we get f_C =log∫∏_a^n C^a C^a e^-β∑_a^n |C^a|^4 +1/2ℳ_0 ∑_ab^n C^a C^b + 1/2 (ℳ_D-ℳ_1)∑_a^n|C^a|^2 × e^1/2 (ℳ_1-ℳ_0)∑_k^n/m∑_ab^m C^a C^b =log∫∏_a^n C^a C^a e^-β∑_a^n |C^a|^4 +1/2 (ℳ_D-ℳ_1)∑_a^n|C^a|^2 e^ℳ_0/2 |∑_a^n C^a|^2 × e^1/2(ℳ_1-ℳ_0)∑_k^n/m|∑_a^m C^a|^2. As for the other local free energy, we decouple the squares introducing auxiliary Gaussian variables through the following relations e^ℳ_0/2 |∑_a^n C^a|^2=∫ z z/4π e^-|z|^2/2 e^√(ℳ_0)∑_a^n (C^az) and e^1/2(ℳ_1-ℳ_0)∑_k^n/m|∑_a^m C^a|^2 = ∫∏_k^n/m[ w_k w_k/4π e^-|w_k|^2/2] e^√(ℳ_1-ℳ_0)∑_k^n/m∑_a^m (C^aw_k). 
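Before assembling the f_C contribution, the 1RSB spin free energy obtained above can be checked numerically: at m = 1, or when ℛ_1 = ℛ_0, it must reduce to the RS expression of the previous Appendix. The sketch below evaluates both expressions by nested Gauss-Hermite quadrature; the parameter values are arbitrary illustrations.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

x, w = hermgauss(80)
g, gw = np.sqrt(2.0) * x, w / np.sqrt(np.pi)        # standard Gaussian nodes/weights

def f_tau_rs(RD, R0):
    a = np.sqrt(-R0)
    return np.log(4.0) - RD + R0 + 2.0 * np.sum(gw * np.log(np.cosh(a * g)))

def f_tau_1rsb(RD, R0, R1, m):
    # log 4 - (R_D - R_1) + (2/m) E_h[ log E_u[ cosh^m Xi ] ],  Xi = sqrt(-R_0) h + sqrt(R_0 - R_1) u
    a, c = np.sqrt(-R0), np.sqrt(R0 - R1)
    Xi = a * g[:, None] + c * g[None, :]
    inner = np.sum(gw[None, :] * np.cosh(Xi) ** m, axis=1)
    return np.log(4.0) - (RD - R1) + (2.0 / m) * np.sum(gw * np.log(inner))

RD, R0 = 1.3, -0.4
print(f_tau_1rsb(RD, R0, R1=-0.9, m=1.0), f_tau_rs(RD, R0))   # m = 1 reduces to RS
print(f_tau_1rsb(RD, R0, R1=R0, m=0.7), f_tau_rs(RD, R0))     # R_1 = R_0 reduces to RS
```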
After reorganizing the terms in the free energy and factorizing identical contributions, one finds f_C = log∫ z z/4π e^-|z|^2/2[ ∫ w w/4π e^-|w|^2/2 I^m_β,1(_D,_0,_1, z,w) ]^n/m, where Eq. (<ref>) has been used. Eq. (<ref>) is the finite-n expression of the local free-energy f_C. By expanding linearly in n and replacing lim_n → 0 f_C / n → f_C, we find f_C(_D,_0, _1,m) = 1/m∫𝒟[zz] log∫𝒟[ww] I^m_β,1(_D,_0,_1 , z,w) 𝒟[zz] = z z/4π e^-|z|^2/2, where to shorten the notation, we have introduced the symbol above for the complex Gaussian integration measure. Incidentally, we notice that, if z=σ + iρ, then 𝒟[zz]=𝒟σ𝒟ρ, taking into account the usual factor 2 coming from the Jacobian of the transformation. §.§.§ 1RSB entropic term A useful property of 1RSB matrices, which is of great advantage in writing entropic contributions to the action, is that a given matrix of this kind has only three kinds of eigenvalues a_1=_D-_1 d_1=n-n/m a_2=_D+(m-1)_1-m_0 d_2=n/m-1 a_3=_D+(m-1)_1+(n-m)_0 d_3=1, where d_1,d_2 and d_3 are the degeneracies, see e.g. <cit.>. Hence, the determinant of a generic 1RSB matrix is given by =(_D-_1)^n-n/m(_D+(m-1)_1-m_0)^n/m-1(_D+(m-1)_1+(n-m)_0). Let us put =-. Then, the entropic term in the action (<ref>) is given by log, which at order O(n) reads as lim_n → 0log()/n =m-1/mlog(_D-_1) +1/mlog[_D+(m-1)_1 - m _0] +_0/_D+(m-1)_1 - m _0. Analogously to the RS case, we define a function s_1(_D,_0,_1,m), in which we store the expression of the 1RSB entropic term reported in the right hand side of the previous equation. The 1RSB expression of the action (<ref>), can be written collecting the results of Eqs. (<ref>), (<ref>) and (<ref>). We have lim_n→01/n A_1RSB = log 2 - ℛ_D-ℛ_1/2 + 1/m∫𝒟h log∫𝒟u cosh^m Ξ +1/2 m∫𝒟[zz] log∫𝒟[ww] I^m_β,1(_D,_0,_1 , z,w) + 1/2 s_1(_D,_0,_1,m), where we recall that the functions Ξ and I_β,1 have been defined respectively in Eqs. (<ref>) and (<ref>), and the overall factor 1/2 in Eq. (<ref>) has been taken into account. § 1RSB EQUATIONS Let us consider the local free energy f_τ in Eq. (<ref>), which is the simplest one and resembles to the paradigmatic case of the SK model. The derivative in _D is immediate and leads to the same equation as in the RS case, see Eq. (<ref>). The derivatives with respect to _0 and _1 need some preliminary remark to be computed more easily. Notice that, given the definition of the function Ξ in Eq. (<ref>), the following relations hold ∂__0Ξ = -1/2√(-_0) h + 1/2√(_0-_1) u, ∂__1Ξ = - 1/2√(_0-_1) u, ∂_h Ξ = √(-_0),     ∂_u Ξ = √(_0-_1). As a consequence of the last line of relations, the derivative of any function of Ξ in h or u is equal up to the coefficient c_h,u=∂_h,uΞ = {√(-_0), √(_0-_1)}. In particular, this applies to the following derivatives compactly written for both the derivative in h and in u ∂_h,ucosh^m Ξ = m c_h,ucosh^mΞtanhΞ ∂_h,u (cosh^m ΞtanhΞ) = c_h,ucosh^mΞ[ 1 + (m-1) tanh^2Ξ], which will appear in the computations. In the previous expressions, the power m is restored after the derivative, by multiplying and dividing by coshΞ, which also leads to the presence of tanhΞ. 
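As an aside, the eigenvalue structure of a generic 1RSB matrix quoted above (and hence the determinant entering the entropic term) is easy to confirm numerically; the sketch below does so for the illustrative choice n = 6, m = 3 and arbitrary matrix entries.

```python
import numpy as np

n, m = 6, 3
A_D, A_1, A_0 = 2.0, 0.7, 0.3

# 1RSB matrix: diagonal A_D, A_1 inside the n/m diagonal blocks of size m, A_0 outside
blocks = np.kron(np.eye(n // m), np.ones((m, m)))
A = A_0 * np.ones((n, n)) + (A_1 - A_0) * blocks + (A_D - A_1) * np.eye(n)

eig = np.sort(np.linalg.eigvalsh(A))
a1 = A_D - A_1                                   # expected degeneracy n - n/m
a2 = A_D + (m - 1) * A_1 - m * A_0               # expected degeneracy n/m - 1
a3 = A_D + (m - 1) * A_1 + (n - m) * A_0         # expected degeneracy 1
print(eig)
print(sorted([a1] * (n - n // m) + [a2] * (n // m - 1) + [a3]))
print(np.isclose(np.linalg.det(A), a1 ** (n - n // m) * a2 ** (n // m - 1) * a3))
```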
With the help of the previous relations, the derivative of f_τ with respect to _0 is computed as follows: ∂ f_τ/∂ℛ_0 = 2/m∫𝒟 h ∫𝒟 u  ∂__0cosh^mΞ/∫𝒟 u cosh^mΞ = - 1/√(-_0)∫𝒟 h h ∫𝒟 u cosh^mΞtanhΞ/∫𝒟u cosh^m Ξ + 1/√(_0-_1)∫𝒟 h ∫𝒟 u u cosh^mΞtanhΞ/∫𝒟u cosh^m Ξ = -1/√(-_0)[ ∫𝒟 h ∫𝒟 u∂_h (cosh^m ΞtanhΞ) /∫𝒟u cosh^m Ξ - ∫𝒟 h ∫𝒟 ucosh^m ΞtanhΞ∫𝒟 u ∂_h cosh^mΞ/(∫𝒟u cosh^m Ξ)^2] + 1/√(_0-_1)∫𝒟h ∫𝒟 u ∂_u (cosh^m ΞtanhΞ) /∫𝒟u cosh^m Ξ where the first term has been integrated by parts in h (and the derivative distributed on the numerator and denominator of the integrand) and the second one in u. It is then clear, that the first and third term yield an equal and opposite contributions, as a consequence of Eq. (<ref>). We are therefore left only with the second term in the previous expression, which after using Eq. (<ref>), leads to ∂ f_τ/∂ℛ_0 = m ∫𝒟 h ( ∫𝒟 u cosh^m ΞtanhΞ/∫𝒟u cosh^m Ξ)^2. The derivative of f_τ with respect to _1 is slightly easier: we have ∂ f_τ/∂ℛ_1 = 1 - 2/m∫𝒟 h ∫𝒟 u  ∂__1cosh^mΞ/∫𝒟 u cosh^mΞ = 1 - 1/√(_0 -_1)∫𝒟 h ∫𝒟u u cosh^mΞtanhΞ/∫𝒟u cosh^mΞ =1 - 1/√(_0 -_1)∫𝒟 h ∫𝒟u ∂_u(cosh^mΞtanhΞ) /∫𝒟u cosh^mΞ = 1 - ∫𝒟 h ∫𝒟u cosh^mΞ [1 + (m-1) tanh^2Ξ]/∫𝒟u cosh^mΞ. Since the first term in the numerator of the integrand is equal to the denominator, it simplifies and cancels out the additive 1. Eventually, we find ∂ f_τ/∂ℛ_1 = - (m-1) ∫𝒟 h ∫𝒟u cosh^mΞtanh^2Ξ/∫𝒟u cosh^mΞ. Let us now compute the derivatives of the local free energy f_C. To simplify the procedure we have developed a similar technology with respect to the other free energy. In fact, as mentioned before, there is a certain symmetry between the two free energies, and one could already guess the final result for the derivatives of f_C. The symmetry relays on the fact that the role of the function cosh is played by the integral function I_β,1, and, the argument of the exponential function g_1 defined in Eq. (<ref>) is the counterpart of Ξ. We split the computation, by first considering the derivatives of I_β,1^m in the 1RSB parameters, in order to have some results ready for use when computing the derivatives of f_C. For the derivative in _D we have ∂__D I_β,1^m = m I_β,1^m-1∫ C C ∂__D g_β,1 = m I_β,1^m ∫ C C g_β,1|C|^2/2/ I_β,1 = m/2 I_β,1^m ⟨ |C|^2 ⟩_1, for the derivative in _0 ∂__0 I_β,1^m = m I_β,1^m ∫ C C g_β,1[1/2√(_0)(Cz) - 1/2√(_1-_0)(Cw)]/I_β,1 = m/2√(_0) I_β,1^m ⟨(Cz)⟩_1 - m/2√(_1-_0)⟨(Cw)⟩_1, and similarly for the derivative in _1: ∂__1 I_β,1^m = m/2 I_β,1^m ⟨ |C|^2 ⟩_1 + m/2√(_1-_0) I_β,1^m ⟨(Cw)⟩_1 Given these preliminary results, we can compute the derivatives of the local free energy. The case of the diagonal element _D is the simplest one: we immediately find ∂ f_C/∂ℳ_D = 1/2 m∫𝒟[zz] ∫𝒟[w w] I_β,1^m ⟨ |C|^2 ⟩_1/∫𝒟[w w] I_β,1^m . 
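Before moving on to the remaining f_C derivatives, the two f_τ derivatives just obtained can be validated against a direct finite-difference differentiation of the 1RSB expression for f_τ. The sketch below performs this check with nested Gauss-Hermite quadrature and arbitrary illustrative parameter values; it is a numerical cross-check only.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

x, w = hermgauss(60)
g, gw = np.sqrt(2.0) * x, w / np.sqrt(np.pi)        # standard Gaussian nodes/weights

def f_tau(RD, R0, R1, m):
    a, c = np.sqrt(-R0), np.sqrt(R0 - R1)
    Xi = a * g[:, None] + c * g[None, :]
    inner = (gw[None, :] * np.cosh(Xi) ** m).sum(axis=1)
    return np.log(4.0) - (RD - R1) + (2.0 / m) * (gw * np.log(inner)).sum()

def analytic_derivatives(R0, R1, m):
    a, c = np.sqrt(-R0), np.sqrt(R0 - R1)
    Xi = a * g[:, None] + c * g[None, :]
    Wm = gw[None, :] * np.cosh(Xi) ** m
    t1 = (Wm * np.tanh(Xi)).sum(axis=1) / Wm.sum(axis=1)        # <tanh Xi> at fixed h
    t2 = (Wm * np.tanh(Xi) ** 2).sum(axis=1) / Wm.sum(axis=1)   # <tanh^2 Xi> at fixed h
    dR0 = m * (gw * t1 ** 2).sum()                              # formula for df/dR_0
    dR1 = -(m - 1.0) * (gw * t2).sum()                          # formula for df/dR_1
    return dR0, dR1

RD, R0, R1, m, eps = 1.0, -0.3, -0.8, 0.6, 1e-5
num_dR0 = (f_tau(RD, R0 + eps, R1, m) - f_tau(RD, R0 - eps, R1, m)) / (2 * eps)
num_dR1 = (f_tau(RD, R0, R1 + eps, m) - f_tau(RD, R0, R1 - eps, m)) / (2 * eps)
print(num_dR0, num_dR1)
print(analytic_derivatives(R0, R1, m))
```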
For the derivative with respect to _0, we proceed as follows ∂ f_C/∂ℳ_0 = 1/m∫𝒟[zz] ∫𝒟[w w] ∂__0 I_β,1^m /∫𝒟[w w] I_β,1^m = 1/2√(_0)[ ∫𝒟[zz] z/2∫𝒟[w w] I_β,1^m ⟨ C ⟩_1/∫𝒟[w w] I_β,1^m + ∫𝒟[zz] z/2∫𝒟[w w] I_β,1^m ⟨C⟩_1/∫𝒟[w w] I_β,1^m ] - 1/2√(_1 - _0)[ ∫𝒟[zz] ∫𝒟[w w] w/2 I_β,1^m ⟨ C ⟩_1/∫𝒟[w w] I_β,1^m + ∫𝒟[zz] ∫𝒟[w w] w/2 I_β,1^m ⟨C⟩_1/∫𝒟[w w] I_β,1^m ] Similarly to the RS case, the two couples of terms inside square brackets are equal, after changing variables o C ↔C, z ↔z and w ↔w, hence we have ∂ f_C /∂ℳ_0 = 1/√(_0)∫𝒟[zz] z/2∫𝒟[w w] I_β,1^m ⟨ C ⟩_1/∫𝒟[w w] I_β,1^m - 1/√(_1 - _0)∫𝒟[zz] ∫𝒟[w w] w/2 I_β,1^m ⟨ C ⟩_1/∫𝒟[w w] I_β,1^m = 1/√(_0)[ ∫𝒟[zz] ∫𝒟[w w] ∂_z (I_β,1^m ⟨ C ⟩_1)/∫𝒟[w w] I_β,1^m - ∫𝒟[zz] ∫𝒟[w w] I_β,1^m ⟨ C ⟩_1∫𝒟[w w] ∂_z I_β,1^m /(∫𝒟[w w] I_β,1^m )^2 ] - 1/√(_1 - _0)∫𝒟[zz] ∫𝒟[w w] ∂_w (I_β,1^m ⟨ C ⟩_1) /∫𝒟[w w] I_β,1^m where formal integration by parts has been performed in z for the first term (the derivative has been already distributed) and in w for the second one. Notice the perfect symmetry with the corresponding computation performed on the other free energy. We now have to compute the derivatives in z and w, which correspond to the derivatives in h,u in the case of f_τ. Since, in analogy with the case of the function Ξ, the dependence of the function g_β,1 on z and w is of the same kind, both the derivatives in z and w of more complex objects which have g_β,1 as an argument yield the same result a part from a coefficient, coming from the derivative of the exponent argument in the expression of g_β,1, i.e. c_z,w = {√(_0), √(_1-_0)}. Therefore, we can compactly write ∂_z,w I_β,1^m = m c_z,w/2⟨C⟩_1 ∂_z,w(I_β,1^m ⟨ C ⟩_g_1) = c_z,w/2 I_β,1^m (⟨ |C|^2 ⟩_1 + |⟨ C ⟩_1|^2 ). By considering this result, the only term that survives in the derivative of f_C with respect to _0 is second one in the last step of our computation. This leads to the following result ∂ f_C/∂ℳ_0 = -m/2∫𝒟[zz] | ∫𝒟[w w] I_β,1^m ⟨ C ⟩_1/∫𝒟[w w] I_β,1^m |^2 With an analogous computation, the derivative of f_C with respect to _1 yields the result ∂ f_C/∂ℳ_1 = m-1/2∫𝒟[zz] ∫𝒟[w w] I_β,1^m |⟨ C ⟩_1|^2 /∫𝒟[w w] I_β,1^m A nice remark, which catches the eye by looking at Eqs. (<ref>), (<ref>) and (<ref>), is that there is a correspondence between the position of the parameter in a 1RSB (or more in general RSB) matrix and the level of integration at which the square modulus appears in the derivatives of the free energy with respect to that parameter: the more internal is the parameter position, the more internal is the level where the square modulus appears. This hierarchical correspondence can be seen also in the derivatives of the free energy f_τ in Eqs. (<ref>) and (<ref>); in this case the derivative with respect to diagonal term _D is one, coherently with the fact that the square modulus of Ising spins is one. 
Now that we have computed all the derivatives of the free energies in the action (<ref>), we can write the self-consistency equations, by assembling all the partial results: ∂ A_1RSB/∂_D=0     →     -1 + ∂ s_1/∂_D = 0 ∂ A_1RSB/∂_0=0     →     m ∫𝒟 h ( ∫𝒟 u cosh^m ΞtanhΞ/∫𝒟u cosh^m Ξ)^2 + ∂ s_1/∂_0 = 0 ∂ A_1RSB/∂_1=0     →     - (m-1) ∫𝒟 h ∫𝒟u cosh^mΞtanh^2Ξ/∫𝒟u cosh^mΞ + ∂ s_1/∂_1 = 0 ∂ A_1RSB/∂_D =0     →    1/2 m∫𝒟[zz] ∫𝒟[w w] I_β,1^m ⟨ |C|^2 ⟩_1/∫𝒟[w w] I_β,1^m + ∂ s_1/∂_D = 0 ∂ A_1RSB/∂_0=0     →     -m/2∫𝒟[zz] | ∫𝒟[w w] I_β,1^m ⟨ C ⟩_1/∫𝒟[w w] I_β,1^m |^2 + ∂ s_1/∂_0 = 0 ∂ A_1RSB/∂_1 =0     →    m-1/2∫𝒟[zz] ∫𝒟[w w] I_β,1^m |⟨ C ⟩_1|^2 /∫𝒟[w w] I_β,1^m + ∂ s_1/∂_1 = 0 where, by using the shorthand notation =-, the derivatives of the entropic term read ∂ s_1/∂ℛ_D = -∂ s_1/∂ℳ_D = 𝒜_0/(𝒜_D+(m-1)𝒜_1-m𝒜_0)^2 -1/m1/𝒜_D+(m-1)𝒜_1-m𝒜_0 -m-1/m1/𝒜_D-𝒜_1 ∂ s_1/∂ℛ_0 = - ∂ s_1/∂ℳ_0 = -m𝒜_0/(𝒜_D+(m-1)𝒜_1-m𝒜_0)^2 ∂ s_1/∂ℛ_1= - ∂ s_1/∂ℳ_1 = (m-1)[ 𝒜_0/(𝒜_D+(m-1)𝒜_1-m𝒜_0)^2 -1/m1/𝒜_D+(m-1)𝒜_1-m𝒜_0 -1/m1/𝒜_D-𝒜_1]. In order write a more compact set of equations, it is convenient to use the algebric constraint Eq. (<ref>), which has to be written in the 1RSB ansatz for this purpose. To visualize the constraint in the 1RSB case, let us write it in matrix form, for the simple case of n=4 and m=2: [ 𝒜_D 𝒜_1 𝒜_0 𝒜_0; 𝒜_1 𝒜_D 𝒜_0 𝒜_0; 𝒜_0 𝒜_0 𝒜_D 𝒜_1; 𝒜_0 𝒜_0 𝒜_1 𝒜_D ][ 1 q_1 q_0 q_0; q_1 1 q_0 q_0; q_0 q_0 1 q_1; q_0 q_0 q_1 1 ] = [ 1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1; ] We find the following three independent equations, by considering the product of the first line of A respectively with the first, the second and the (m+1)-th column of 𝒜_D +(m-1)𝒜_1 q_1 + (n-m)𝒜_0 q_0 =1 𝒜_1 + 𝒜_D q_1 + (m-2)𝒜_1 q_1 + (n-m)𝒜_0 q_0 =0 𝒜_D q_0 + (m-1)𝒜_1 q_0 + 𝒜_0 + (m-1) 𝒜_0 q_1 + (n-2m)𝒜_0 q_0 = 0, which in the limit n → 0 reduce to 𝒜_D +(m-1)𝒜_1 q_1 - m𝒜_0 q_0 =1 𝒜_1 + 𝒜_D q_1 + (m-2)𝒜_1 q_1 - m𝒜_0 q_0 =0 𝒜_D q_0 + (m-1)𝒜_1 q_0 + 𝒜_0 + (m-1) 𝒜_0 q_1 - 2m 𝒜_0 q_0 = 0. Note that from the first two equations one finds q_1= 1-1/𝒜_D-𝒜_1. Moreover, by subtracting the first equation to the third one and using the expression of q_1, one finds the following expression for q_0: q_0= 1 - 1/m1/𝒜_D+(m-1)𝒜_1-m𝒜_0 -m-1/m1/𝒜_D-𝒜_1. Now, we can proceed as in the RS case: the first equation of the set, which contains the derivatives in the diagonal element _D, can be eliminated after using it in all the other equations. Then, by using the definitions of q_0 and q_1, after some algebra, we finally get to the nicer and more familiar set of equations q_0 = ∫𝒟 h ( ∫𝒟 u cosh^m ΞtanhΞ/∫𝒟u cosh^m Ξ)^2 q_1 = ∫𝒟 h ∫𝒟u cosh^mΞtanh^2Ξ/∫𝒟u cosh^mΞ 1 = 1/2∫𝒟[zz] ∫𝒟[w w] I_β,1^m ⟨ |C|^2 ⟩_1/∫𝒟[w w] I_β,1^m q_0 = 1/2∫𝒟[zz] | ∫𝒟[w w] I_β,1^m ⟨ C ⟩_1/∫𝒟[w w] I_β,1^m |^2 q_1 = 1/2∫𝒟[zz] ∫𝒟[w w] I_β,1^m |⟨ C ⟩_1|^2 /∫𝒟[w w] I_β,1^m , which has to be completed by the three algebraic relations (<ref>) among the 1RSB parameters. tocchapterBibliography
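The finite-n constraint used above can also be verified numerically: building the 1RSB overlap matrix Q for n = 4, m = 2 and inverting it, the resulting 𝒜 = ℛ - ℳ is 1RSB-structured and satisfies the three quoted equations. The short check below uses arbitrary illustrative values of q₀ and q₁.

```python
import numpy as np

n, m = 4, 2
q0, q1 = 0.2, 0.6
blocks = np.kron(np.eye(n // m), np.ones((m, m)))
Q = q0 * np.ones((n, n)) + (q1 - q0) * blocks + (1.0 - q1) * np.eye(n)   # 1RSB overlap matrix

A = np.linalg.inv(Q)                       # plays the role of R - M in the constraint A Q = I
A_D, A_1, A_0 = A[0, 0], A[0, 1], A[0, 2]

# the three independent finite-n relations quoted above
print(np.isclose(A_D + (m - 1) * A_1 * q1 + (n - m) * A_0 * q0, 1.0))
print(np.isclose(A_1 + A_D * q1 + (m - 2) * A_1 * q1 + (n - m) * A_0 * q0, 0.0))
print(np.isclose(A_D * q0 + (m - 1) * A_1 * q0 + A_0 + (m - 1) * A_0 * q1
                 + (n - 2 * m) * A_0 * q0, 0.0))
```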
http://arxiv.org/abs/2306.02853v1
20230605130423
Selection Combining over Log-Logistic Fading Channels with Applications to Underwater Optical Wireless Communications
[ "Yazan H. Al-Badarneh", "Mustafa K. Alshawaqfeh", "Osamah S. Badarneh" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Selection Combining over Log-Logistic Fading Channels with Applications to Underwater Optical Wireless Communications Yazan H. Al-Badarneh, Member, IEEE, Mustafa K. Alshawaqfeh, Member, IEEE, Osamah S. Badarneh, Member, IEEE Y. H. Al-Badarneh is with the Department of Electrical Engineering, The University of Jordan, Amman, 11942 (email: [email protected]) M. K. Alshawaqfeh and O. S. Badarneh are with the Electrical Engineering Department, School of Electrical Engineering and Information Technology, German Jordanian University, Amman 11180, Jordan (e-mail: mustafa.shawaqfeh, Osamah.Badarneh,@gju.edu.jo). July 31, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= We study the performance of a selection combining (SC) receiver operating over independent but non-identically distributed log-logistic (ℒℒ) fading channels. We first characterize the statistics of the output instantaneous signal-to-noise ratio (SNR) of the SC receiver. Based on the SNR statistics, we derive exact analytical expressions, in terms of multivariate Fox H-functions, for the outage probability, the average bit error rate, and the ergodic capacity. We also derive exact expressions for such performance measures when all channels are independent and identically distributed, as a special case. Furthermore, we deduce simplified asymptotic expressions for these performance metrics assuming high values of average transmit SNR. To demonstrate the applicability of our theoretical analysis, we study the performance of an SC receiver in underwater optical wireless communication systems. Finally, we confirm the correctness of the derived analytical results using Monte Carlo Simulations. Selection combining, log-logistic fading, performance analysis, underwater optical wireless communications § INTRODUCTION Log-logistic (ℒℒ) distribution has recently garnered much attention due to its effectiveness in fitting experimental data related to channel fluctuations in various communication systems. Several studies utilized the two-parameter ℒℒ distribution to accurately model different types of channel fluctuations, including turbulence fading in underwater optical wireless communication (UWOC) systems <cit.>, misaligned gain in millimeter-wave cellular networks <cit.>, and air-ground channels in unmanned aerial vehicle (UAV) communications <cit.>. The performance of wireless communication systems in ℒℒ fading environment has been investigated very recently in <cit.>. The authors in <cit.> analyzed the outage probability and the ergodic capacity of a single-branch receiver. However, a single-branch receiver is highly prone to fading, which can significantly degrade the performance, especially in deep fading scenarios. Therefore, a multiple-branch receiver can be employed to mitigate severe fading. The performance of UWOC systems under different turbulence fading conditions was investigated in <cit.>, <cit.>. Recent advances in the UWOC systems have focused on the design and performance perspectives of Internet of underwater things (IoUT) systems <cit.>, <cit.>. 
However, analyzing the performance of UWOC systems with diversity reception is of practical importance. In this regard, employing a multiple-branch selection combining (SC) receiver in UWOC systems can combat turbulence fading and improve the performance. The SC receiver is characterized by its low complexity, as it selects the branch with the highest signal-to-noise ratio (SNR) among the available L branches. In realistic communication settings, these L branches may experience non-identical or identical fading statistics <cit.>. This letter mainly focuses on analyzing the performance of an L-branch SC receiver in ℒℒ fading environment with applications to UWOC systems. The key contributions of this work can be outlined as follows: * We characterize the statistics of the output SNR of the L-branch SC receiver considering L independent but non-identically distributed (i.n.i.d.) branches, as well as the special case of L independent and identically distributed (i.i.d.) branches. * We derive novel exact analytical expressions, in terms of multivariate Fox H-functions, for the outage probability, the average bit error rate (BER), and the ergodic capacity. In addition, we obtain simple asymptotic expressions for such performance metrics assuming high values of average transmit SNR. * We utilize the obtained expressions to study the performance of an UWOC system affected by temperature-induced turbulence. § STATISTICS OF THE SC RECEIVER OUTPUT SNR We consider an L-branch SC receiver operating in ℒℒ fading environment. The output instantaneous SNR of the SC receiver is characterized by γ_SC=max{γ_1, γ_2, . . . . , γ_L}, where γ_l is the instantaneous SNR of the l-th branch, l=1,2,...., L. Assuming that the power of transmitted signal is P and the noise at the l-th branch is the additive white Gaussian noise (AWGN) with zero mean and variance σ^2 (assumed to be identical for all branches), then the random variable (RV) γ_l is given by γ_l=ρ |h_l|^2, where ρ=P/σ^2 is the average transmit SNR, |h_l| is a RV that represents the fading amplitude of the l-th branch and is distributed according to a two-parameter ℒℒ distribution (i.e., |h_l| ∼ℒℒ(α^'_l, β^'_l). Hence, the cumulative distribution function (CDF) of |h_l| is given by <cit.> F_|h_l|(γ)=1/1+ ( γ/α^'_l)^-β^'_l γ≥ 0, where α^'_l>0 and β^'_l> 0 are the scale and shape parameters of the ℒℒ distribution, respectively. Using the transformation of random variables, one can show that γ_l∼ℒℒ(ρα_l, β_l), where α_l= (α^'_l)^2 and β_l= β^'_l/2 [ The parameters α_l and β_l capture the turbulence fading characteristics of the l-th branch <cit.>. The impact of these parameters on the performance of a single-branch receiver has been studied in <cit.>.]. Therefore, the CDF of γ_l can be obtained as F_γ_l(γ)=1/1+ ( γ/ρα_l)^-β_l. Utilizing <cit.>, F_γ_l(γ) can be expressed as F_γ_l(γ)= *111111γ^β_l/(ρα_l ) ^β_l, where G_p,q ^m,n(· ) is the Meijer G-function <cit.>. §.§ L i.n.i.d. branches We first consider an SC receiver with L i.n.i.d. branches, where γ_l are i.n.i.d. RVs, l=1,2,...., L. In view of γ_SC=max{γ_1, γ_2, . . . . , γ_L}, the CDF of γ_SC, denoted by F_γ_SC(γ), is given by F_γ_SC(γ) = ∏_l=1^L F_γ_l(γ) = ∏_l=1^L *111111γ^β_l/(ρα_l ) ^β_l. F_γ_SC(γ) above can be compactly expressed in terms of the multivariate H-function <cit.> as given in (<ref>). Capitalizing on F_γ_SC(γ) in (<ref>), we derive the probability density function (PDF) of γ_SC in Lemma 1 below. Let f_γ_SC(γ) denote the PDF of γ_SC=max{γ_1, γ_2, . . . . 
, γ_L}, then f_γ_SC(γ) is given by (<ref>). Proof: See Appendix <ref>. §.§ L i.i.d. branches We now consider an SC receiver with L i..i.d branches as a special case, where γ_l are i.i.d. RVs, l=1,2,...., L. This implies that α_l=α and β_l=β, l=1,2,...., L. Making use of (<ref>) in (<ref>), F_γ_SC(γ) reduces to F_γ_SC(γ)=1/[ 1+ ( γ/ρα)^-β]^L , γ≥ 0. F_γ_SC(γ) can be expressed in terms of a Meijer G-function as F_γ_SC(γ) = 1/Γ(L)*11111Lγ^β/(ρα)^β, where Γ(·) is the gamma function <cit.>, and the identity above is obtained with the help of <cit.>. To this end, f_γ_SC(γ) can be obtained by differentiating F_γ_SC(γ) with respect to (w.r.t.) γ as <cit.> f_γ_SC(γ) = βγ^-1/Γ(L)*11110Lγ^β/(ρα)^β. § EXACT PERFORMANCE ANALYSIS §.§ Outage Probability A communication outage occurs if γ_SC falls below predefined threshold value γ_th. Accordingly, the outage probability, denoted by , is defined as =(γ_SC≤γ_th) =F_γ_SC(γ_th). can be evaluated from (<ref>) and (<ref>) for the cases of L i.n.i.d. branches and L i.i.d. branches, respectively. §.§ Average BER Considering the output SNR of the SC receiver, the average BER for different modulation formats, denoted by , is given by <cit.> = ∫_0^∞δ (√(ζγ)) f_γ_SC(γ) d γ, where δ and ζ are modulation-dependent parameters <cit.> and (·) is complementary error function <cit.>. For the SC receiver with L i.n.i.d. branches, the average BER is given in (<ref>). Proof: See Appendix <ref>. For the SC receiver with L i.i.d. branches, the average BER in (<ref>) can be expressed as = δ/√(π)Γ(L) H_4,3^2,3[ . 1/ζαρ| [ (0,1/ β), (1,1), (0.5,1) (1,1); ; (L,1/ β), (1,1), (0,1) ]], where H_p,q ^m,n[· ] is the univariate H-function <cit.>. Proof: We express the (√(ζγ)) in (<ref>) in terms of Meijer G-function as (√(ζγ))= 1/√(π)*11110, 10, 1/2, 0ζγ, according to <cit.>. We now replace the Meijer G-function of (√(ζγ)) and f_γ_SC(γ) in (<ref>) by their equivalent H-functions using <cit.>. Applying <cit.> to solve the integral in (<ref>) yields in (<ref>). §.§ Ergodic Capacity In view of γ_SC being the output SNR of the SC receiver, the ergodic capacity, denoted by , is given as = 1/ln(2)∫_0^∞ln(1+γ)f_γ_SC(γ) dγ. For the SC receiver with L i.n.i.d. branches, the ergodic capacity is given in (<ref>). Proof: See Appendix <ref>. For the SC receiver with L i.i.d. branches, the ergodic capacity in (<ref>) can be expressed as = 1/ln(2) Γ(L) H_3,3^3,2[ . 1/αρ| [ (0,1/ β), (0,1), (1,1); ; (L,1/ β), (0,1), (0,1) ]], Proof: We first express the ln(1+γ) in (<ref>) in terms of Meijer G-function as ln(1+γ)= *12221,11,0γ, according to <cit.>. To this end, we replace the Meijer G-function of ln(1+γ) and f_γ_SC(γ) in (<ref>) by their equivalent H-functions using <cit.>. Applying <cit.> to solve the integral in (<ref>) yields in (<ref>). § ASYMPTOTIC PERFORMANCE ANALYSIS §.§ Asymptotic Outage probability To get further insights, we derive next an asymptotic closed-form expression for F_γ_l(γ) in (<ref>) at high values of average transmit SNR (i.e., ρ→∞). It can be shown that F_γ_l(γ) ≈1/(ρα_l )^β_lγ^β_l, as ρ→∞. Accordingly, the asymptotic CDF of γ_SC for the case of L i.n.i.d. branches can be given as F_γ_SC(γ) = ∏_l=1^L F_γ_l(γ) ≈φρ^-γ^, where = ∑_l=1^L β_l and φ = ∏_l=1^L (1/α_l)^β_l. Consequently, the asymptotic PDF of γ_SC is f_γ_SC(γ) ≈φ ρ^-γ^ -1. Capitalizing on (<ref>), the asymptotic outage probability, denoted by , can be obtained as ≈φρ^-γ_th^. 
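The SC statistics above are straightforward to validate by Monte Carlo simulation, which also serves as an independent check on the outage probability: sample L i.n.i.d. log-logistic branch SNRs γ_l ∼ ℒℒ(ρα_l, β_l), take their maximum, and compare the empirical CDF at γ_th with the product-form CDF. The sketch below uses the (α_l, β_l) values of Scenario 1 from the application example; the transmit SNR and threshold are illustrative choices, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ll(alpha, beta, size):
    # log-logistic(alpha, beta) via inverse CDF of F(x) = 1 / (1 + (x/alpha)^(-beta))
    u = rng.uniform(size=size)
    return alpha * (u / (1.0 - u)) ** (1.0 / beta)

rho = 10.0 ** (10.0 / 10.0)                       # 10 dB average transmit SNR (illustrative)
pars = [(1.00, 2.2), (0.98, 2.3), (1.10, 2.4)]    # (alpha_l, beta_l), Scenario 1
N = 200_000

gammas = np.stack([sample_ll(rho * a, b, N) for a, b in pars])   # gamma_l ~ LL(rho*alpha_l, beta_l)
gamma_sc = gammas.max(axis=0)                                     # SC output SNR

gamma_th = 10.0 ** (10.0 / 10.0)                  # 10 dB outage threshold
emp = np.mean(gamma_sc <= gamma_th)               # empirical outage probability
ana = np.prod([1.0 / (1.0 + (gamma_th / (rho * a)) ** (-b)) for a, b in pars])
print(emp, ana)                                   # should agree within Monte Carlo error
```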
§.§ Asymptotic Average BER The asymptotic average BER, denoted by , can be obtained by plugging f_γ_SC in (<ref>) and the identity (√(ζγ))= 1/√(π)*11110, 10, 1/2, 0ζγ in the integral representation of (<ref>). To this end, the integral in (<ref>) can be solved with the help of <cit.> as ≈δζ^-/√(π)φΓ(1/2+) ρ^-. Based on (<ref>) and (<ref>), we can draw conclusions about the diversity order G_d ( see, e.g., <cit.> and references therein). It can be shown that for the i.n.i.d. case G_d=S_β=∑_l=1^Lβ_l, while for the i.i.d. case G_d=S_β= β L. §.§ Asymptotic Capacity The asymptotic ergodic capacity, denoted by , can be evaluated according to <cit.> as ≈log _2(ρ)+.log _2(e) ∂/∂ n𝔼[γ^n_SC]/ρ^n|_n=0, where the 𝔼[γ^n_SC] is the n-th moment of the RV γ_SC. To this end, we utilize F_γ_SC(γ) in (<ref>) and <cit.> to evaluate the n-th moment of γ_SC with L i.i.d. branches as 𝔼[γ_SC^n] = ρ^nα^n/Γ( L ) Γ( β-n/β) Γ( β L+n/β) . Plugging (<ref>) in (<ref>), after some basic algebraic manipulations, yields ≈log _2(ρ)+ βln(α)+E_0+Ψ(L)/βln(2), where E_0 denotes the Euler-Mascheroni constant and Ψ(· ) is the digamma function <cit.>. § APPLICATION EXAMPLE In this section, we consider an UWOC system affected by temperature-induced turbulence <cit.>. We utilize the derived expressions for the outage probability, the average BER, and the ergodic capacity to investigate numerically the performance of such a system. To verify our analytical results for different system configurations, we consider the following scenarios: * Scenario 1) L i.n.i.d. branches SC receiver is employed when L=3. The channel power gain of each branch is distributed according to a ℒℒ distribution with arbitrarily chosen parameters (α_l, β_l) ∈{ (1, 2.2), (0.98, 2.3), (1.1, 2.4) }. * Scenario 2) L i.i.d. branches SC receiver is employed when L=2 and L=4. The channel power gain of each branch is distributed according to a ℒℒ distribution with measurement-based parameters α=0.9724, β=2.3311, as given in Case 2 of Table II <cit.>. In Fig. 1, we plot the outage probability against the average transmit SNR ρ for different values of L when γ_th=10 dB. Fig. 2 shows the average BER versus ρ, where the intensity modulation and direct detection (IM-DD) with on-off keying (OOK) modulation is employed. Fig. 3 depicts the ergodic capacity versus ρ for different values of L. In all figures, we observe that the analytical results perfectly match with the simulation results and the asymptotic results tend to converge to the exact ones as ρ grows large. In addition, increasing L improves the performance of the system, as expected. § CONCLUSION The performance of an SC receiver operating over i.n.i.d. log-logistic fading channels is investigated. We developed exact analytical expressions for the outage probability, the average BER, and the ergodic capacity. We also obtained exact expressions for these measures for i.i.d. channels, as a special case. Furthermore, we obtained simplified asymptotic expressions for such performance measures in the high SNR regime. We utilized the analytical results to analyze the performance of an SC receiver in UWOC systems. Monte Carlo Simulations have confirmed the correctness of our analytical framework. § DERIVATION OF F_Γ_SC(Γ) Re-expressing the H-function of F_γ_SC(γ) in (<ref>) by its original contour integral according to <cit.>, yields F_γ_SC(γ) = ( 1/2π j)^L ∮_c_1∮_c_L[ Π_l=1^L Γ(1-s_i) Γ(s_i) . .(ρα_l )^-β_l] γ^∑_l=1^L β_l ds_1 ds_L. Then, f_γ_SC(γ) is obtained by differentiating F_γ_SC(γ) w.r.t. 
γ as f_γ_SC(γ) = ∂ F_γ_SC(γ) /∂γ = ( 1/2π j)^L ∮_c_1∮_c_L[ Π_l=1^L Γ(1-s_i) . . Γ(s_i) ( ρα_l)^-β_l s_l] (∂/∂γγ^∑_l=1^L β_l s_l) ds_1 ds_L. To this end, ∂/∂γγ^∑_l=1^L β_l s_l = (∑_l=1^L β_l s_l ) γ^(∑_l=1^L β_l s_l ) - 1 = Γ(1+∑_l=1^L β_l s_l)/Γ(∑_l=1^L β_l s_l)γ^(∑_l=1^L β_l s_l) - 1, where the second equality in (<ref>) is obtained with the help of Γ(1+x) = x Γ(x) <cit.> with x = ∑_l=1^L β_l s_l. Plugging (<ref>) in (<ref>) yields (<ref>). Utilizing (<ref>) and <cit.>, f_γ_SC(γ) is as given in (<ref>). § DERIVATION OF THE BER Plugging (<ref>) in (<ref>) yields (<ref>). If we Let z = √(ζγ), then the inner integral I_1 in (<ref>) can be written as I_1 = 2δ/ζ^∑_l=1^L β_l s_l∫_0^∞ z^(2∑_l=1^L β_l s_l ) - 1 (z) dz. Utilizing <cit.>, I_1 above can be expressed as I_1 = δ Γ( 1/2 + ∑_l=1^L β_l s_l ) /√(π)ζ^∑_l=1^L β_l s_l(∑_l=1^L β_l s_l ) =δ/√(π)ζ^∑_l=1^L β_l s_lΓ( 1/2 + ∑_l=1^L β_l s_l ) Γ( ∑_l=1^L β_l s_l )/Γ( 1 + ∑_l=1^L β_l s_l ) , where the second equality is obtained with the help of Γ(1+x) = x Γ(x) with x = ∑_l=1^L β_l s_l. Plugging (<ref>) in (<ref>) yields = δ/√(π)( 1/2π j)^L ∮_c_1∮_c_LΓ( 1/2 + ∑_l=1^L β_l s_l ) ×[ ∏_l=1^L Γ(1-s_l) Γ(s_l) (ζρα_l )^-β_l s_l] ds_1 ds_L. Utilizing (<ref>) and <cit.>, is as in (<ref>). § DERIVATION OF THE ERGODIC CAPACITY Plugging (<ref>) in (<ref>) yields the contour integral representation of the ergodic capacity in (<ref>). With the help of <cit.>, the inner integral I_2 in (<ref>) can be written as I_2 = ∫_0^∞γ^(∑_l=1^L β_l s_l ) - 1*12221,11,0γ dγ. Utilizing <cit.>, I_2 can be expressed as I_2 = Γ(1 + ∑_l=1^L β_l s_l ) Γ( -∑_l=1^L β_l s_l ) Γ( -∑_l=1^L β_l s_l )/Γ( 1 - ∑_l=1^L β_l s_l ) . Substituting (<ref>) in (<ref>) leads to (<ref>). Utilizing (<ref>) and <cit.>, the ergodic capacity is as in (<ref>). IEEEtran
http://arxiv.org/abs/2307.10179v1
20230616200527
Towards fully integrated photonic backpropagation training and inference using on-chip nonlinear activation and gradient functions
[ "Farshid Ashtiani", "Mohamad Hossein Idjadi" ]
cs.ET
[ "cs.ET", "physics.optics" ]
Towards fully integrated photonic backpropagation training and inference using on-chip nonlinear activation and gradient functions
Farshid Ashtiani ([email protected]) and Mohamad Hossein Idjadi, Nokia Bell Labs, 600 Mountain Ave, Murray Hill, NJ 07974, USA
July 31, 2023

Gradient descent-based backpropagation training is widely used in many neural network systems. However, photonic implementation of such a method is not straightforward, mainly because realizing both the nonlinear activation function and its gradient with standard integrated photonic components is challenging. Here, we demonstrate the realization of two commonly used neural nonlinear activation functions and their gradients on a silicon photonic platform. Our method leverages the nonlinear electro-optic response of a micro-disk modulator. As a proof of concept, the experimental results are incorporated into a neural network simulation platform to classify the MNIST handwritten digits dataset, where classification accuracies of more than 97% are achieved, on par with those of ideal nonlinearities and gradients.

§ INTRODUCTION As artificial neural networks (ANNs) are being utilized in a wider variety of applications, from natural language models to medical diagnosis <cit.>, the need for higher processing speed and lower energy requirements is increasing. In supervised learning, ANNs are first trained on a specific dataset to "learn" a certain task (training), and then perform the same task on an unseen dataset (inference). To learn the optimized network parameters (i.e., weights and biases), a training algorithm such as gradient descent-based backpropagation (BP) <cit.> is used. It is an iterative process where both linear (weight and sum) and nonlinear (activation) computations are performed many times to reach an optimized solution. Therefore, improving the speed and energy efficiency of such a computation-intensive process can significantly benefit the overall performance of a neural network. Benefiting from low-loss interconnects and a wide available bandwidth at optical frequencies, photonic neural networks (PNNs) have become a promising candidate to augment the performance of conventional digital clock-based processors. Such photonic analog processors can process optical signals as they propagate through the system, which enables processing at the speed of light. Several PNNs have been demonstrated for different applications such as optical fiber nonlinearity compensation <cit.> and image classification <cit.>. Despite promising results, many of these systems rely on digital computers for offline BP training and demonstrate only inference using photonic chips. One main reason is that for the BP algorithm to work, the nonlinear activation function (in the forward path) and its gradient (in the backward path) are required to calculate the proper adjustments of the weights and biases that minimize the cost/error function. Since photonic implementation of such asymmetric nonlinearity is not straightforward <cit.>, alternative photonic-friendly training algorithms such as the finite-difference method <cit.>, direct feedback alignment <cit.>, and the simultaneous perturbation method <cit.> have been proposed and demonstrated. Nevertheless, given the wide applicability and the large number of existing neural network models based on BP training, together with the great potential of photonic computing, a photonic implementation of BP training can be very beneficial.
Recently, a hybrid digital-photonic demonstration of BP has been reported <cit.>. Despite impressive results, only linear computation is realized photonically, and the nonlinear activation and its gradient are implemented digitally. Here, we propose a novel photonic-friendly solution to realize two commonly used nonlinear activation functions, namely sigmoid and rectified linear unit (ReLU), and their corresponding gradients on a standard silicon photonic platform to enable fully integrated photonic BP training and inference. We leverage the nonlinear electro-optic response of an add-drop micro-disk modulator (MDM) to closely approximate those functions. We then use the measured data in a neural network simulation platform with customized activation and gradient functions to demonstrate the applicability of the proposed method to classify MNIST handwritten digits dataset. Accuracies of 97% and 98% are achieved using the approximated sigmoid and ReLU function and gradient, respectively, that are on par with those of ideal activation functions. The chip was fabricated in AIM Photonics 180 nm silicon photonic process and a microphotograph is shown in Fig. <ref>a. § ACTIVATION FUNCTION AND GRADIENT APPROXIMATION USING A MICRO-DISK MODULATOR Optical-electrical-optical (OEO) conversion enables high-speed and scalable neural nonlinearity <cit.>. Figure <ref>a shows a typical application of OEO nonlinearity, f(.), in a photonic neuron. The relative weights between the optical inputs (x_i) are set using intensity modulators (w_i), and the weighted-sum of the inputs is generated using parallel photodiodes. The photocurrent drives the PN-junction of a MDM to realize the nonlinear activation function by modulating an independent supply light to generate the optical output of the neuron. §.§ Sigmoid function Sigmoid is a commonly used function due to having a smooth gradient and a normalized output (i.e., between 0 and 1). Figure <ref>b shows sigmoid and its gradient. We propose the schemes in Figs. <ref>c and <ref>d to approximate sigmoid and its gradient. For the sigmoid function, the MDM resonance is thermally tuned to align to the supply light wavelength (λ_0=1539.15 nm) and the through port is monitored. Once V_in exceeds the turn-on voltage of the PN-junction (about 0.9V), the MDM resonance shifts and the output power increases. The output power flattens once the supply light wavelength is well beyond the shifted MDM resonance, resulting in a sigmoid-like function. The measured data and the corresponding sigmoid fit are shown in Fig. <ref>c. To approximate the gradient, the drop port is used. The MDM resonance (in BP mode) is biased at λ_0, and then thermally tuned until the drop port output falls below a certain threshold (Fig. <ref>d). Once biased, as V_in increases a sigmoid gradient-like response is achieved (Fig. <ref>d). Note that the output offset for V_IN < 0.9 V is function of the threshold voltage in the alignment process and can be adjusted. §.§ ReLU function ReLU is another commonly used neural nonlinearity that due to the simplicity of the function and its gradient offers computational efficiency and also features non-vanishing gradient, resulting in fast convergence (Fig. <ref>b). Figure <ref>e and <ref>f show the architecture to approximate ReLU and its gradient. 
The ReLU approximation is similar to the sigmoid case, except in this case the maximum of V_in is intentionally limited by controlling the electronic gain and a bias voltage to avoid flattening at higher values of V_in (low-gain mode). To approximate its gradient, which is a step function, the electronic gain is increased (high-gain mode), which makes the transition to the flat region very sharp (i.e., approximating a step function). Note that the threshold voltage of 0.9 V is kept the same via the bias voltage. The measured ReLU, its gradient, and the corresponding fitted curves are shown in Figs. 1e and 1f. Next, the measurement results are incorporated into a neural network simulation to validate the accuracy of the proposed method. § MNIST CLASSIFICATION RESULTS The measured results (fitted curves) for both sigmoid and ReLU functions and gradients are used in a neural network model to classify the MNIST handwritten digits dataset. The architecture of the network is shown in Fig. <ref>a. The network consists of a 2D convolution layer, a 2D max-pooling layer, and two fully-connected layers with 100 and 10 neurons, respectively, to generate the classification result. Note that custom activation and gradient functions are incorporated into the model in order to use the actual measured data in the convolution and first fully-connected neurons. The output layer uses softmax activation. For training, stochastic gradient descent BP with a learning rate of 0.01 and momentum of 0.9 is used. He-uniform and categorical cross-entropy are used for kernel initialization and the loss function, respectively. Figures <ref>b and <ref>c compare the training and classification accuracy for ideal and measured sigmoid and ReLU functions and gradients, respectively. Classification accuracies of 97% and 98% are achieved when measured data for sigmoid and ReLU are used in the model, respectively. We achieve very similar results when the measured data replace the ideal functions, which shows that our proposed method, especially for gradient calculation, is accurate enough and does not affect the performance of the neural network. § CONCLUSION We demonstrated the first implementation of the gradient of an OEO neural nonlinearity to train photonic neural networks using the nonlinear electro-optic response of an MDM. The computation speed (i.e., response time) is mainly limited by the optical modulation and detection bandwidths, which can reach several gigahertz in commercially available silicon photonic processes. Note that the required electronics for OEO conversion can be co/hybrid integrated with the photonic chip. Our method can be scaled to a large number of neurons and layers and is an important step towards full integration of backpropagation training and inference on the same photonic chip, enabling faster and more efficient photonic computing systems.
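To mimic the kind of simulation described above, one could register a fitted electro-optic response and a separately fitted gradient curve as a custom activation in an automatic-differentiation framework. The PyTorch-style sketch below is only illustrative: the logistic fit parameters (threshold 0.9 V, slope K) are placeholders, not the measured device curves, and the network is reduced to a single linear layer for brevity.

```python
import torch

class MDMSigmoid(torch.autograd.Function):
    """Custom activation whose forward and backward passes come from
    separately fitted curves (placeholder logistic fits, not measured data)."""
    V0, K = 0.9, 10.0   # assumed fit parameters for illustration only

    @staticmethod
    def forward(ctx, v_in):
        ctx.save_for_backward(v_in)
        # stand-in for the fitted through-port (sigmoid-like) response
        return torch.sigmoid(MDMSigmoid.K * (v_in - MDMSigmoid.V0))

    @staticmethod
    def backward(ctx, grad_out):
        (v_in,) = ctx.saved_tensors
        # stand-in for the fitted drop-port (gradient-like) response
        s = torch.sigmoid(MDMSigmoid.K * (v_in - MDMSigmoid.V0))
        return grad_out * MDMSigmoid.K * s * (1 - s)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 100))
x = torch.randn(8, 1, 28, 28)        # stand-in for an MNIST batch
h = MDMSigmoid.apply(model(x))       # forward pass uses the fitted activation curve
h.sum().backward()                   # backward pass uses the fitted gradient curve
```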
http://arxiv.org/abs/2306.10029v1
20230606084256
Pseudo session-based recommendation with hierarchical embedding and session attributes
[ "Yuta Sumiya", "Ryusei Numata", "Satoshi Takahashi" ]
cs.IR
[ "cs.IR", "cs.LG" ]
Pseudo SBR with hierarchical embedding and session attributes Sumiya et al. The University of Electro-Communications {sumiya,stakahashi}@uec.ac.jp The Japan Research Institute Limited [email protected] Pseudo session-based recommendation with hierarchical embedding and session attributes Yuta Sumiya1 Ryusei Numata2 Satoshi Takahashi10000-0001-5573-7841 ====================================================================================== Recently, electronic commerce (EC) websites have been unable to provide an identification number (user ID) for each transaction data entry because of privacy issues. Because most recommendation methods assume that all data are assigned a user ID, they cannot be applied to the data without user IDs. Recently, session-based recommendation (SBR) based on session information, which is short-term behavioral information of users, has been studied. A general SBR uses only information about the item of interest to make a recommendation (e.g., item ID for an EC site). Particularly in the case of EC sites, the data recorded include the name of the item being purchased, the price of the item, the category hierarchy, and the gender and region of the user. In this study, we define a pseudo–session for the purchase history data of an EC site without user IDs and session IDs. Finally, we propose an SBR with a co-guided heterogeneous hypergraph and globalgraph network plus, called CoHHGN+. The results show that our CoHHGN+ can recommend items with higher performance than other methods. § INTRODUCTION In electronic commerce (EC) markets, the effective recommendation of items and services based on individual user preferences and interests is an important factor in improving customer satisfaction and sales, and several previous studies have focused on recommendation systems. A recommendation system is a technology that suggests items based on a user's past actions and online behavior. General recommendation methods, such as collaborative filtering<cit.>, focus on similarities among users and items to make recommendations; however, these methods assume that a user ID has been assigned to each transaction. In recent years, user IDs are not assigned to users to protect their privacy. Under such circumstances, it is difficult to identify users; therefore, conventional effective recommendation systems cannot be used. Session-based recommendation (SBR), which makes recommendations without focusing on user IDs, is currently attracting attention. SBR is a method of providing recommendations based on session IDs assigned to short-term user actions. They are assigned when a user logs into an EC site, and are advantageous in that users cannot be uniquely identified as they are assigned different IDs depending on the time of day. However, even if session ID management is inadequate, there is a risk that the session ID of a logged-in user may be illegally obtained to gain access. To prevent this, we propose a new method for recommending items without using either user or session IDs. Specifically, for purchase history data to which user and session IDs are not assigned, records with consecutive user attributes, such as gender and place of residence, are defined as pseudo sessions, and the next item to be purchased in the pseudo session is predicted. In this manner, items that anonymous users place in their carts in chronological order can be recommended for their next purchase without using session IDs. 
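As an illustration of the pseudo-session construction just described, the following pandas sketch groups consecutive purchase records that share the same gender and residence into one pseudo session; the column names and values are hypothetical, not those of the actual dataset.

```python
import pandas as pd

# Hypothetical purchase-history table (no user ID, no session ID), ordered by time.
df = pd.DataFrame({
    "gender":    ["F", "F", "F", "M", "M", "F"],
    "residence": ["Tokyo", "Tokyo", "Tokyo", "Osaka", "Osaka", "Tokyo"],
    "item_id":   [101, 205, 33, 101, 56, 78],
})

# A new pseudo session starts whenever gender or residence changes between consecutive rows.
changed = (df[["gender", "residence"]] != df[["gender", "residence"]].shift()).any(axis=1)
df["pseudo_session_id"] = changed.cumsum()

# Within each pseudo session, the last item is the prediction target and the rest are inputs.
for sid, grp in df.groupby("pseudo_session_id"):
    items = grp["item_id"].tolist()
    if len(items) >= 2:
        print(sid, "input:", items[:-1], "target:", items[-1])
```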
Generally, existing SBRs are often graph neural network (GNN)-based <cit.> methods that consider only item transactions within a session. However, in the case of purchase history, other features such as item price and category tend to be observed as well. The existing method CoHHN <cit.> shows that price information and categories are effective in recommending items. In this study, we propose a new GNN model called the co-guided heterogeneous hypergraph and globalgraph network plus (CoHHGN+), which considers not only the purchase transition and price of items, but also the category hierarchy of items and auxiliary information of sessions; we also discuss the effectiveness of this model. § RELATED WORK Rendle et al. proposed a Markov chain–based SBR model, called factorizing personalized Markov chains (FPMC) <cit.>. FPMC is a hybrid method that combines Markov chains and matrix factorization to capture sequential patterns and long-term user preferences. The method is based on a Markov chain that focuses on two adjacent states between items and is adaptable to anonymous SBRs. However, a major problem with Markov chain-based models is that they combine past components independently, which restricts their predictive accuracy. Hidasi et al. proposed a recurrent neural network (RNN)–based SBR model called GRU4Rec <cit.>. GRU4Rec models transitions between items within a session using gated recurrent units (GRUs). The purchase transitions of an EC site can be represented by a graph structure, which is a homogeneous or heterogeneous graph depending on whether the attributes of the nodes are singular or plural. Homogeneous graphs are graphs that represent relationships by only one type of node and edge and are used, for example, to represent relationships in social networks. In contrast, heterogeneous graphs are graphs that contain multiple and diverse nodes and edges and are used to represent relationships between stores and customers. Wu et al. proposed SR-GNN <cit.>, which uses a GNN to predict the next item to be purchased in a session based on a homogeneous graph of items constructed across sessions. Using GNNs, item embeddings that are useful for prediction can be obtained by introducing an attention mechanism over the sequentially observed item information. Currently, SBRs based on such GNNs have shown more effective results than other methods, and several extended methods based on SR-GNN have been proposed. Wang et al. proposed GCE-GNN <cit.>, which embeds not only the current session but also item transitions of other sessions in the graph. Existing methods such as SR-GNN and GCE-GNN are models that learn item-only transitions; however, sessions may also include item prices and genre features. To construct a model that takes these into account, it is necessary to use heterogeneous graphs. However, when using graphs to represent the relationship between auxiliary information such as price and items, the graph becomes more complex as the number of items in a particular price range increases. Therefore, we apply an extended heterogeneous hypergraph to allow the edges to be connected to multiple nodes. This makes it possible to understand complex higher-order dependencies between nodes, especially in recommendation tasks <cit.>. Zhang et al. proposed CoHHN <cit.>, which embeds not only item transitions, but also item prices and categories.
While CoHHN can consider price and item dependencies, it does not consider the hierarchical features of categories or the sales information and user attributes observed during the sessions. It also does not embed the global information that represents item purchase transitions in other sessions. Therefore, we propose a new GNN model that embeds global information as in GCE-GNN, and considers item category hierarchy, user attributes, and sale information. § PRELIMINARIES Let τ be a feature type that changes within a given session. Let 𝒱^τ={v_1^τ, v_2^τ,⋯, v_n^τ^τ} be the unique set of feature τ and n^τ be its size. We consider four feature types: item ID, price, and the hierarchical category of an item (large and middle); we subsequently denote their feature sets as 𝒱^id, 𝒱^pri, 𝒱^lrg, and 𝒱^mid, respectively. Note that the prices are discretized into several price ranges according to a logistic distribution <cit.>, taking into account the market price of each item. Let S_a^τ=[v_1^a, τ, v_2^a, τ, ⋯, v_s^a, τ] be a sequence of the feature τ for a pseudo session and s be its length. Note that each element v_i^a, τ of S_a^τ belongs to 𝒱^τ. The objective of SBR is to recommend the top k items from 𝒱^id that are most likely to be purchased or clicked next by the user in the current session a. §.§ Heterogeneous hypergraph and global graph To learn the transitions of items in a pseudo session, two different graphs are constructed from all available sessions. We construct heterogeneous hypergraphs 𝒢^τ_1, τ_2=(𝒱^τ_1, ℰ_h^τ_2) to consider the relationships between different features. Let ℰ_h^τ_2 be a set of hyperedges for feature τ_2. Each hyperedge e_h^τ_2∈ℰ_h^τ_2 can be connected to multiple nodes v_i^τ_1∈𝒱^τ_1 in the graph. This means that a node v_i^τ_1 is connected to a hyperedge e^τ_2 when the features τ_1 and τ_2 are observed in the same record. If several nodes are contained in the same hyperedge, they are considered to be adjacent. Heterogeneous hypergraphs are a method of constructing graphs with reference to different features; however, transitions within features of the same type are not considered. Additionally, item purchase transitions may include items that are not relevant to prediction. Thus, we construct the global graph shown below. The global graph captures the relationship between items of the same type that co-occur with an item for all sessions. According to <cit.>, the global graph is constructed based on the ε-neighborhood set of an item for all sessions. Assuming that a and b are arbitrary distinct sessions, we define the ε-neighborhood set as follows. 𝒩_ε(v_i^a, τ) = {v_j^b, τ | v_i^a, τ = v_i^'^b, τ∈ S_a^τ∩ S_b^τ ; v_j^b, τ∈ S_b^τ ; j ∈ [i^' - ε, i^' + ε] ; a ≠ b}, where i^' is the index of v_i^a, τ in S_b^τ and ε is a parameter that controls how close items are considered from the position of i^' in session b. Consider that 𝒢_g = (𝒱^τ, ℰ_g^τ) is a global graph where ℰ_g^τ is an edge set and e_g^τ∈ℰ_g^τ connects two vertices v_i^τ∈𝒱^τ and v_j^τ∈𝒩_ε(v_i^τ). Notably, the global graph only shows the relationship between identical features, and the adjacency conditions between nodes are not affected by other features. § PROPOSAL METHOD We propose a pseudo session-based recommendation method using a heterogeneous hypergraph constructed from a set of features including a categorical hierarchy, a global graph for item and price features, and additional session attribute information. Fig. <ref> shows an overview of our proposed method.
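Before describing the model components, here is a minimal sketch of the ε-neighborhood construction defined above: for each occurrence of an item in another session, the items within ε positions of that occurrence are collected as global-graph neighbors. The session data are toy examples, not from the real dataset.

```python
from collections import defaultdict

def build_global_neighbors(sessions, eps=2):
    """sessions: dict session_id -> list of item IDs (a feature sequence S_a^tau).
    Returns item -> set of eps-neighborhood items taken from other sessions."""
    neighbors = defaultdict(set)
    for a, seq_a in sessions.items():
        for v in set(seq_a):
            for b, seq_b in sessions.items():
                if b == a:
                    continue                        # only co-occurrences in other sessions
                for i_prime, u in enumerate(seq_b):
                    if u != v:
                        continue
                    lo, hi = max(0, i_prime - eps), min(len(seq_b), i_prime + eps + 1)
                    neighbors[v].update(seq_b[lo:hi])   # items within eps of the occurrence
    return neighbors

sessions = {"s1": [1, 2, 3, 4], "s2": [2, 5, 3, 6], "s3": [7, 3, 2]}
print(build_global_neighbors(sessions, eps=1)[3])   # global-graph neighbors of item 3
```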
In the first step of aggregation, the intermediate embedding of each feature is learned from a heterogeneous hypergraph. In the second step, the final feature embedding vector obtained by aggregating the intermediate embedding is learned. To address the problem of the heterogeneous hypergraph not being able to learn purchase transitions within the same feature, a global graph is used to incorporate co-occurrence relationships within the same feature into learning. Finally, in addition to existing methods, we propose to learn purchase transitions within a session by considering the attributes of the session itself. §.§ Two-step embedding with category hierarchy We obtain the item ID, price, large category, and middle category embedding vectors using the following two-step learning method. In the first step of embedding, the embedding of a feature is learned from a heterogeneous hypergraph in which the feature is a node and others are hyperedges. For example, if the item ID is a node, price, large category, and middle category correspond to the hyperedges. In this case, multiple intermediate embeddings are obtained depending on the type of feature, i.e., the hyperedge. In the second step, these embeddings are used to learn the final node embeddings by aggregating them based on their importance. Each learning step is repeated for all L layers. §.§.§ First step Following CoHHN's intra-type aggregating method <cit.>, we learn a first-step embedding for a feature t from a heterogeneous hypergraph, where the target feature t is a node and another feature τ is a hyperedge. First, we define the embedding of a node v_i^t ∈𝒱^t as h_l, i^hyper, t∈ℝ^d. Here, l denotes the index of the training layer. In the initial state l=0, the parameters are initialized using He's method <cit.>. Let 𝒩_τ^t(v_i^t) be the adjacent node set of v_i^t. Then, the intermediate embedding of v_i^t in the l-layer is given by m_τ, i^t = ∑_v_j^t∈𝒩_τ^t(v_i^t)α_jh_l-1, j^hyper, t, α_j = Softmax_j([u_t^⊤h_l-1, k^hyper, t | v_k^t∈𝒩_τ^t(v_i^t)]), where u_t^⊤ is an attention vector that determines the importance of h_l-1, j^hyper, t. The function Softmax_i is defined as Softmax_i([a_1, ⋯, a_s]) = exp(a_i)/∑_j=1^sexp(a_j). Here, m_τ, i^t∈ℝ^d represents an intermediate embedding of the feature t when τ is a type of hyperedge. In the first step of embedding, we learn the features to focus on when embedding t. §.§.§ Second step Let us assume that m_τ_1, i^t, m_τ_2, i^t, and m_τ_3, i^t are intermediate embeddings for a feature t when τ_1, τ_2, τ_3 are types of hyperedge, respectively. By aggregating the embeddings of the first step, we obtain the embedding of v_i^t shown in the following equation. h_l, i^hyper, t = β_1 * h_l-1, i^hyper, t + ∑_j=2^4β_j * m_τ_j-1, i^t, β_j = Softmax_j([W^th_l-1, i^hyper, t, W_τ_1^tm_τ_1, i^t, W_τ_2^tm_τ_2, i^t, W_τ_3^tm_τ_3, i^t]), where W^t, W_τ_1^t, W_τ_2^t, W_τ_3^t∈ℝ^d× d are learnable parameters, and * denotes the element-wise product of vectors. Further, β_j is a weight that captures the relative importance of the embedding vectors and is used to aggregate the previous-layer and intermediate embeddings. §.§ Embedding of global graph We follow the global graph embedding procedure of GCE-GNN <cit.>, which consists of two stages: information propagation and information aggregation. §.§.§ Information propagation The ε-neighborhood of each node in the global graph for feature t is embedded.
Because the number of features of interest within a neighborhood is considered to be different for each user, based on the attention score shown in the following equation, the neighborhood embedding h_𝒩_ε(v_i^t) is first learned. h_𝒩_ε(v_i^t) = ∑_v_j^t∈𝒩_ε(v_i^t)π(v_i^t, v_j^t)h_l-1, j^global, t, π(v_i^t, v_j^t) = Softmax_j([a(v_i^t, v_k^t) | v_k^t∈𝒩_ε(v_i^t)]), a(v_i^t, v_j^t) = q^⊤LeakyRelu(W_1[s * h_l-1, j^global, t];w_ij), where h_l-1^global, t is an embedding of the global graph for the feature j on the l-1–th learning layer, and π(v_i^t, v_j^t) is an attention weight that considers the importance of neighborhood node embedding. The attention score a(v_i^t, v_j^t) employs LeakyRelu. In LeakyRelu, w_ij∈ℝ is the weight of an edge (v_i^t, v_j^t) in the global graph that represents the number of co-occurrences with features v_j^t, and ; is a concatenation operator. Further, W_1 ∈ℝ^(d+1)×(d+1) and q∈ℝ^d+1 are learnable parameters, and s is the average embedding of the session to which v_i^t belongs, defined as s = 1/s∑_v_i^t∈ S_a^th_l-1, i^global, t. §.§.§ Information aggregation For a feature v^t to be learned, the l-layer embedding h_l^global, t is obtained by aggregating the (l-1)-layer embedding and the neighborhood embeddings using the following formula: h_l, i^global, t = ReLU(W_2[h_l-1, i^global, t ; h_𝒩_ε(v_i^t)]), where W_2∈ℝ^d× 2d denotes a learnable parameter. In global graph embedding, highly relevant item information can be incorporated throughout the session by aggregating the reference features and their ε-neighborhoods. §.§ Embedding feature nodes For the feature node v_i^t, the final embedding is obtained from the embedding of heterogeneous hypergraphs considering the category hierarchy and the embedding of global graphs by the following gate mechanism: g_i^t = σ(W_3h_L, i^hyper, t + W_4h_L, i^global, t), h_i^t = g_i^t * h_L, i^hyper, t + (1 - g_i^t) * h_L, i^global, t, where σ is a sigmoid function, W_3∈ℝ^d× d and W_4∈ℝ^d× d are learnable parameters, and L is the final layer of graph embedding. g_i^t is learned to consider the importance of embedding heterogeneous hypergraphs and embedding global graphs. The final feature node embedding is required only for the item ID and price based on the training of the next item. §.§ Feature extraction considering session attributes Based on the learned node embeddings, we extract features related to the user's items and prices in each session. §.§.§ Feature extraction of items The embedding of an item node in session a is given by the sequence [h_1^a, id, ⋯, h_s^a, id]. In addition to items, user attribute information, time–series information, and EC site sale information, among others, may be observed in each session. Therefore, we considered this information and learned to capture the session-by-session characteristics associated with the items. Let d_sale be the number of types of sale information and x_sale^a∈{0, 1}^d_sale items be given per session. Each dimension of this vector represents the type of sale, with a value of 1 if it is during a particular sale period and a value of 0 if it is outside that period. Similarly, if the number of types of attribute information is d_type, then x_type^a∈{0, 1}^d_type is a vector representing user attributes. For items and sales, we also consider time–series location information. The item location information defines a location encoding pos_item_i ∈ℝ^d as in <cit.>. 
Furthermore, for the location information of the sale, the week information to which the current session belongs is encoded by the following formula: pos_time_2k-1^a = sin(2mπ/52k), pos_time_2k^a = cos(2mπ/52k), where pos_time^a∈ℝ^c is the location encoding associated with the week information of the session a, m∈ℤ represents the week, and k is the embedding dimension. Because a year comprises 52 weeks, the trigonometric function argument is divided by 52. Based on the above, item embedding in a session is defined as follows: v_i^a, id = tanh(W_5[h_i^a, id ; pos_item_i] + W_6[x_sale^a ; pos_time^a] + W_7x_type^a + b_1), where W_5 ∈ℝ^d×2d, W_6 ∈ℝ^d× (d_sale + c), W_7 ∈ℝ^d× d_type, b_1 ∈ℝ^d are trainable parameters, v_i^a, id is the i–th item embedding in session a. The item preferences I^a of a user in a session are determined according to <cit.> as follows: I^a = ∑_i=1^sβ_ih_i^a, id, β_i = u^⊤σ(W_8v_i^a, id + W_9v̅^a, id + b_2), where W_8, W_9∈ℝ^d× d, b_2 ∈ℝ^d are learnable parameters, u^⊤∈ℝ^d is the attention vector. Additionally, v̅^a, id = 1/s∑_i=1^sv_i^a, id. §.§.§ Feature extraction of prices The price hyperedge in session a is given by [h_1^a, p, ⋯, h_s^a, p]. To estimate price preferences with respect to users, we follow <cit.> and learn the features of the price series using multi-head attention as shown in the following equation: E^a, p = [h_1^a, p ; ⋯ ; h_s^a, p], M_i^a, p = [head_1^a ; ⋯ ; head_h^a], head_i^a = Attention(W_i^QE^a, p, W_i^KE^a, p, W_i^VE^a, p), where h is the number of blocks of self-attention, W_i^Q, W_i^K, W_i^V∈ℝ^d/h× d are parameters that map item i in session a to the query, key, and value, and head_i^a∈ℝ^d/h is the embedding vector of each block of multi-head-attention for item i. Further, E^a, p∈ℝ^dm, M_i^a, p∈ℝ^d and the embedded price series is [M_1^a, p, ⋯, M_s^a, p]. Because the last price embedding is considered to be the most relevant to the next item price in the price series, we determine the user's price preference P^a = M_s^a, p in the session. §.§ Predicting and learning about the next item The user's item preferences I^a and price preferences P^a are transformed into Ĩ^a and P̃^a respectively by co-guided learning <cit.>, considering mutual dependency relations. When an item v_i^a, id∈𝒱^id and a price range v_i^a, p∈𝒱^p are observed in session a, the next item to view and purchase is given by the score of the following Softmax function: ŷ_i = Softmax_i([q_1, ⋯, q_n^id]), q_i = P̃^a^⊤h_i^a, p + Ĩ^a^⊤h_i^a, id. At training time, this score is used to compute the cross-entropy loss. ℒ(ŷ, y) = -∑_j=1^n^id(y_jlog(ŷ_j) + (1-y_j)log(1-ŷ_j)), where y∈{0, 1}^n^id is the objective variable that indicates whether the user has viewed and purchased item v_i^id, and ŷ∈ℝ^n^id is the score vector for all items. § EXPERIMENTS We evaluate our proposed method using purchasing history data of an EC market. The dataset comprises the purchasing history of 100,000 people randomly selected by age group, obtained from users registered in 2019-20 in the Rakuten <cit.> market, which is a portal site for multiple EC sites. We consider four age groups: 21–35, 36–50, 51–65, and 66–80. Each purchasing history comprises the category name of the purchased item (large, middle, small), week (week 1–105), gender (male or female), residence (nine provinces in Japan), and price segment (separated by thousands of JPY). The user ID and session information are not recorded. §.§ Preprocessing Our method treats the small category name as the item ID to be recommended.
Additionally, the proposed model also considers session attributes, such as purchaser gender, region of residence, and EC site sales. As specific sale information, we include two types of sales that are regularly held at the Rakuten market. Sale 1 is held once every three months for one week, during which many item prices are reduced by up to half or less. Sale 2 is held for a period of one week each month, and more points are awarded for shopping for items on the EC site. Each session attribute is represented by a discrete label. When learning, we treat each gender, region, and sale as a vector with the observed value as 1 and all other values as 0. The price intervals are converted to price range labels by applying a logistic distribution <cit.>. In each transformed dataset, consecutive purchase intervals with the same gender and residential area are labeled as pseudo–sessions. Based on the assigned pseudo session ID, records with a session length of less than 2 or frequency of occurrence of less than 10 are deleted, according to <cit.>. Within each session, the last observed item ID is used as the prediction target, and the other series are used for training. In dividing the data, weeks 1 through 101 are used as training data, and the remaining weeks 102 through 105 are used as test data. Additionally, 10% of the training data re used as validation data for hyperparameter tuning of the model. The statistical details of the four datasets are listed in Table <ref>. §.§ Evaluation criteria We employ the following criteria to evaluate the recommendation accuracy: * P@k (Precision) : The percentage of the top k recommended items that are actually purchased. * M@k (Mean Reciprocal Rank) : The mean value for the inverse of the rank of the items actually recommended for purchase. If the rank exceeds k, it is 0. The precision does not consider the ranking of recommended items; however, the mean reciprocal rank is a criterion that considers ranking, implying that the higher the value, the higher the item actually purchased in the ranking. In our experiment, we set k=10, 20. §.§ Comparative model To verify the effectiveness of the proposed method, we compare it with the following five models. * FPMC <cit.> : By combining matrix factorization and Markov chains, this method can capture both time–series effects and user preferences. As the dataset is not assigned an ID to identify the user, the observations for each session are estimated as if they were separate users. * GRU4Rec <cit.> : An SBR based on RNN with GRU when recommending items for each session. * SR-GNN <cit.> : An SBR that constructs a session graph and captures transitions between items using a GNN. * GCE-GNN <cit.> : An SBR that builds a session graph and global graph, and captures transitions between items by a GNN while considering their importance. * CoHHN <cit.> : An SBR that constructs a heterogeneous hypergraph regarding sessions that considers information other than items and captures transitions between items with a GNN. §.§ Parameter setting To fairly evaluate the performance of the model, we use many of the same parameters for each model. For all models, the size of the embedding vector is set to 128, the number of epochs to 10, and the batch size to 100. For the optimization method, GRU4Rec uses Adagrad (learning rate 0.01) based on the results of previous studies, while the GNN method uses Adam (learning rate 0.001) with a weight decay of 0.1 applied every three epochs. 
The coefficients of the L2-norm regularity are set to 10^-5. Additionally, in GCE-GNN and our model CoHHGN+, the size of the neighborhood item-set ε in the global graph is set to 12. Furthermore, in CoHHN and our model, the number of self-attention heads is set to 4 (h=4), and the number of price ranges to 10. Finally, the number of GNN layers and percentage of dropouts used in the architecture are determined by grid search for each model using the validation data. § RESULTS AND DISCUSSION §.§ Performance comparison Tables <ref> and <ref> show the results of evaluating the five existing methods and the proposed method CoHHGN+ on the four selected datasets. CoHHGN+ obtains the most accurate results for all datasets with precision for k=10, 20. The mean reciprocal rank is also the most accurate, except for the data for the 36–50 age group. For the 36–50 year age group dataset, the precision is higher than that for the other models, while the mean reciprocal rank shows the highest accuracy for GCE-GNN. However, there is no statistically significant difference in the prediction accuracy between CoHHGN+ and GCE-GNN in this dataset. Thus, it can be inferred that there is no clear difference in prediction accuracy. This confirms the effectiveness of the proposed method for all the data. In the comparison method, a large discrepancy in accuracy between the GNN-based method, which introduces an attention mechanism in the purchase series, and the other methods is noted. Overall, the GRU4Rec without attention mechanism results in the lowest accuracy, suggesting that the results were not sufficiently accurate for data with a small number of sessions. This is because the model focuses only on purchase transitions between adjacent items. Similarly, for FPMC, although the accuracy is improved compared to GRU4Rec, modeling with Markov chains and matrix factorization is not effective for purchase data with pseudo–sessions. Moreover, SR-GNN, GCE-GNN, CoHHN, and CoHHGN+ using graphs of purchase transitions between sessions show a significant improvement in accuracy and are able to learn the purchase trends of non-adjacent items as well. Among the compared methods, CoHHN, which considered price and large category information in addition to item ID information, tends to have a higher prediction accuracy overall. The number of series per session is generally small for purchase history data, and it can be said that higher accuracy can be obtained by learning data involving multiple features, including items. By extracting features from the nonlinear relationship between price embedding and item embedding using co-guided learning, the complex purchase transitions of anonymous users may be learned. GCE-GNN, which also considers the features of other sessions, shows the second highest prediction accuracy after CoHHN. When using purchase history data with short session lengths, it is more accurate to learn embedding vectors by considering items that have co-occurrence relationships with other sessions, in addition to series within sessions. The SR-GNN that has learned only from item ID transitions is inferior to the GCE-GNN in terms of overall accuracy among GNN-based systems, although it is more accurate than the GCE-GNN for some datasets. Therefore, it can be considered that adopting features other than the item ID and other session information will lead to improved recommendation accuracy. 
We confirm that the proposed method improves accuracy not only by considering auxiliary information in the purchase transition of items, but also by its method of learning the embedding vectors and by including additional features that change from session to session. Furthermore, the embedding vector obtained from the global graph of the item of interest works well for series with short session lengths. §.§ Impact of each model extension Next, we conduct additional experiments on four datasets to evaluate the effectiveness of embedding item category hierarchies and accounting for session attributes, as well as global-level features. Particularly, we design the following two comparative models: * CoHHGN(G): A model that incorporates hierarchical embedding of three or more features that vary within a session. * CoHHGN(GS): A model that considers the hierarchical embedding of three or more features and session attributes in the proposed method. To compare the performance with existing methods, we use the most accurate values of the existing methods shown in Tables <ref> and <ref> as the baselines. Tables <ref> and <ref> show the prediction results of the comparison model. For both precision and mean reciprocal rank, CoHHGN+, which incorporates all the proposed methods, performs better overall than the other two models. For precision, the accuracy of CoHHGN(GS) is higher for P@10 in the 21–35 year age group dataset. However, because the accuracy of CoHHGN+ is higher than that of other methods in P@20, we believe that considering the embedding of global graph features will improve the accuracy in a stable manner. For CoHHGN(G), although the accuracy is improved over the baseline in several datasets, no statistically significant differences are identified. However, extending the model to CoHHGN(GS), which also considers session attributes, results in a significant difference in precision in all datasets, except in the dataset for the age group 51–65. Further, considering the mean reciprocal rank, although the recommendation accuracy tends to improve as the model is extended to CoHHGN(G) and CoHHGN(GS), the only dataset in which statistically significant differences can be confirmed is that for the 66–80 age group. However, when extended to CoHHGN+, which incorporates all the proposed methods, the overall prediction accuracy is higher and significant differences are confirmed. This confirms that the recommendation accuracy of the item ID can be improved by simultaneously considering features that vary between sessions and attributes of other sessions, in addition to features that vary within sessions. § CONCLUSION In this study, we developed CoHHGN+ based on CoHHN, which is an SBR considering various features, and GCE-GNN considering global graphs, for purchase history data of EC sites. Moreover, we considered global time–series information, sale information, and user information. The application of the proposed model to pseudo-session data with no user IDs shows that the GNN-based method exhibits significantly higher accuracy than the other methods, and that our proposed CoHHGN+ is the most accurate method on the dataset. Although incorporating several types of data improves the prediction accuracy, there are still issues from the viewpoint of feature selection for data with more types of information recorded. If there are n types of heterogeneous information, the number of heterogeneous hypergraphs used to embed heterogeneous information is 2^n.
Therefore, selecting and integrating heterogeneous information remains an issue. Future work on issues related to more efficient feature selection and methods for integrating heterogeneous information will lead to the development of models with even higher accuracy. We would also like to expand the scope of application of CoHHGN+ proposed in this study and attempt to provide useful recommendations in other domains as well. §.§.§ ACKNOWLEDGMENTS We would like to thank the sponsor of the Data Analysis Competition, Joint Association Study Group of Management Science (JASMAC), and Rakuten Group, Inc. for providing us with the data. This work was also supported by JSPS KAKENHI Grant Number JP20H04146.
http://arxiv.org/abs/2306.07566v2
20230613063444
Learning under Selective Labels with Data from Heterogeneous Decision-makers: An Instrumental Variable Approach
[ "Jian Chen", "Zhehao Li", "Xiaojie Mao" ]
stat.ML
[ "stat.ML", "cs.LG" ]
Learning under Selective Labels with Data from Heterogeneous Decision-makers: An Instrumental Variable Approach Jian Chen, Zhehao Li, Xiaojie Mao ======================================================================================================================== We study the problem of learning with selectively labeled data, which arises when outcomes are only partially labeled due to historical decision-making. The labeled data distribution may substantially differ from the full population, especially when the historical decisions and the target outcome can be simultaneously affected by some unobserved factors. Consequently, learning with only the labeled data may lead to severely biased results when deployed to the full population. Our paper tackles this challenge by exploiting the fact that in many applications the historical decisions were made by a set of heterogeneous decision-makers. In particular, we analyze this setup in a principled instrumental variable (IV) framework. We establish conditions for the full-population risk of any given prediction rule to be point-identified from the observed data and provide sharp risk bounds when the point identification fails. We further propose a weighted learning approach that learns prediction rules robust to the label selection bias in both identification settings. Finally, we apply our proposed approach to a semi-synthetic financial dataset and demonstrate its superior performance in the presence of selection bias. § INTRODUCTION The problem of selective labels is common in many decision-making applications involving human subjects. In these applications, each individual receives a certain decision that in turn determines whether the individual's outcome label is observed. For example, in judicial bail decision-making, the outcome of interest is whether a defendant returns to the court without committing another crime were the defendant released. But this outcome cannot be observed if the bail is denied. In lending, the default status of a loan applicant cannot be observed if the loan application is not approved. In hiring, a candidate's job performance cannot be observed if the candidate is not hired. The selective label problem poses serious challenges to developing effective machine learning algorithms to aid decision-making <cit.>. Indeed, the labeled data may not be representative of the full population that will receive the decisions, which is also known as a selection bias problem. As a result, good performance on the labeled subjects may not translate into good performance for the full population, and machine learning models trained on the selectively labeled data may perform poorly when deployed in the wild.
This is particularly concerning when historical decisions depended on unobservable variables not recorded in the data, as is often the case when historical decisions were made by humans. Then the labeled data and the full population can differ substantially in terms of factors unknown to the machine learning modelers. In this paper, we tackle the selective label problem by exploiting the fact that in many applications the historical decisions were made by multiple decision-makers. In particular, the decision-makers are heterogeneous in that they have different decision rules and may make different decisions for the same unit. Moreover, the decision-makers face similar pools of subjects so that each subject can be virtually considered as randomly assigned to one decision-maker. This structure provides opportunities to overcome the selective label problem, since an unlabeled subject could have otherwise been assigned to a different decision-maker and been labeled. For example, judicial decisions are often made by multiple judges. They may have different degrees of leniency in releasing the same defendant. This heterogeneous decision-maker structure has also been used in the existing literature to handle selection bias <cit.>. To harness the decision-maker heterogeneity, we view the decision-maker assignment as an instrumental variable (IV) for the historical decision subject to selection bias. We thoroughly study the evaluation of machine learning prediction algorithms from an identification perspective (see <Ref>). We provide a sufficient condition for the prediction risk of an algorithm over the full population to be uniquely identified (i.e., point-identified) from the observed data. We show that the sufficient condition is very strong and the full-population prediction risk may often be unidentifiable. Thus we alternatively provide tight bounds on the full-population prediction risk under a mild condition. This lack of identification highlights intrinsic ambiguity in algorithm evaluation with selective labels, explaining why the heuristic single-valued point evaluation estimators in <cit.> tend to be biased (see <Ref>). We further develop an algorithm to learn a prediction rule from data (see <Ref>). This complements the existing selective label literature that mostly focuses on the robust evaluation of machine learning models <cit.>, but the models are often directly trained on biased labeled data to begin with. In this paper, we seek robustness not only in evaluation but also in the model training. Our proposed learning algorithm builds on our identification results and applies both when the full-population prediction risk is point-identified and when it is only partially identified. It is based on a unified weighted classification procedure that can be efficiently implemented via standard machine learning packages. We demonstrate the performance of our proposed algorithm in <Ref> and theoretically bound its generalization error in <Ref>. §.§ Related Work Our work is directly related to the growing literature on the selective label problem. <cit.> formalize the problem in the machine learning context and leverage the decision-maker heterogeneity via a heuristic “contraction” technique. <cit.> provide formal identification analysis and derive partial identification bounds on multiple model accuracy measures in a variety of settings, including the setting with decision-maker assignment IV. See <Ref> for a more detailed comparison of our work with these papers.
Some recent papers also tackle the selective label problem by leveraging alternative information. For example, <cit.> propose to impute missing labels when human decision-makers reach consensus decisions. <cit.> propose a Bayesian approach under simplified parametric models. <cit.> uses a geographic proximity instrumental variable in an immigration law enforcement application. Notably, all of these papers only focus on model evaluation, while our paper also studies model learning. Finally, we note that a few recent papers study the selective label problem in an online learning setting <cit.>, while we focus on an offline setting. Our work builds on the IV literature in statistics and econometrics. IV is a common and powerful tool to handle unmeasured confounding in causal inference <cit.>. Actually, some empirical studies have already considered heterogeneous decision-makers as an IV, such as the judge IV in <cit.>. However, these analyses are based on linear parametric models, while our identification analysis is free of any parametric restrictions. Our point identification analysis builds on <cit.> and our partial identification analysis builds on <cit.>. Our paper directly extends these results to the evaluation of prediction risks under selective labels, and develops novel reformulations that enable efficient learning of prediction rules via a unified procedure. Our work is also tied to the literature on learning individualized treatment rules (ITR) <cit.>. Particularly relevant to our work are <cit.>, who study ITR learning with instrumental variables in the point identification setting and partial identification setting respectively. They consider a causal problem with a binary treatment variable and a binary IV, while we consider a selective label problem with a multi-valued IV. Moreover, our paper features a unified solution to learning in both identification settings. § PROBLEM FORMULATION We now formalize the problem of selective labels. Consider a sample of i = 1, …, n units. For each unit i, we let Y_i^*∈{0, 1} denote the true outcome of interest, and X_i ∈𝒳 and U_i ∈𝒰 denote certain observable and unobservable features that may be strongly dependent with Y_i^*. However, the true outcome Y_i^* is not fully observed. Instead, the observability of Y_i^* depends on a binary decision D_i ∈{0, 1} made by a human decision-maker Z_i ∈ [m] ≜ {1, …, m} according to the features (X_i, U_i). Specifically, the observed outcome Y_i ∈{1, 0, NA} (where NA stands for a missing value) is given as follows: Y_i = { Y_i^*  if D_i = 1;  NA  if D_i = 0 }. We assume that (Y_i, Y^*_i, D_i, Z_i, X_i, U_i) for i = 1, …, n are independent and identically distributed draws from a common population (Y, Y^*, D, Z, X, U). The observed sample is given by S_n = {(Y_i, D_i, X_i, Z_i), i = 1, …, n }. Figure <ref> shows an example causal graph for the variables. The selective label problem is prevalent in various scenarios, including the judicial bail problem where judges must determine defendant release. For each defendant i, the assigned judge is denoted as Z_i, and the binary variable D_i represents the release decision. The true outcome of interest, denoted as Y_i^*, indicates whether the defendant would return to court without committing a crime if granted bail. This outcome is only observable when bail is granted (D_i = 1), as denial of bail prevents the defendant from committing a crime or appearing in court.
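To make the setup concrete, here is a small simulation sketch (entirely synthetic, not the paper's data or semi-synthetic experiment): the true outcome Y* depends on observed X and unobserved U, each case is randomly assigned one of m judges with different leniencies, and Y is observed only when D = 1. The coefficients are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 10_000, 5                                    # cases and judges
X = rng.normal(size=n)                              # observed feature
U = rng.normal(size=n)                              # unobserved feature (seen only by judges)
Y_star = (rng.random(n) < 1 / (1 + np.exp(-(X + U)))).astype(int)   # true outcome Y*

Z = rng.integers(0, m, size=n)                      # random judge assignment (the IV)
leniency = np.linspace(-1.0, 1.0, m)                # judges differ in leniency
# Decision depends on X, U and the assigned judge's leniency.
D = (rng.random(n) < 1 / (1 + np.exp(-(0.5 * X + 0.8 * U + leniency[Z])))).astype(int)

Y = np.where(D == 1, Y_star, -1)                    # -1 plays the role of "NA"

# Labeled sub-population vs full population: the labeled-only estimate of P(Y*=1) is
# biased upward here, because released cases tend to have larger U, which also raises Y*.
print("P(Y*=1) full population :", Y_star.mean())
print("P(Y =1 | D=1) labeled   :", Y[D == 1].mean())
```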
The judges' decision D_i may rely on multiple features, such as demographic information (X_i) that can be observed in the data. However, other features, like the defendant's behavior or the presence of their family in the courtroom, may be private to the judges (denoted as U_i). Considering unobserved features U_i is crucial since we often lack knowledge of all the features accessible to decision-makers, as emphasized by <cit.>. Our goal is to learn a classifier f: 𝒳↦{0, 1} to accurately predict the true outcome Y^* from the observed features X. This classifier is useful for aiding future decision-making. In particular, we are interested in the performance of the prediction function when it is deployed to the full population, which is formalized by the following oracle expected risk with respect to a zero-one loss: min_f: 𝒳→{0, 1} ℛ_oracle(f) := 𝔼[𝕀{Y^*≠ f(X) } ], where 𝕀{·} is the indicator function such that 𝕀{A} = 1 if and only if the event A happens. Here we focus on the canonical zero-one loss for simplicity but our results can easily extend to loss functions that assign different losses to false positive and false negative misclassification errors. Unfortunately, with selective labels, we cannot directly train classifiers based on <ref>, since the true outcome Y^* is not entirely observed. Instead, we can only observe Y^* for the selective sub-population with positive historical decision (i.e., D = 1). In this case, a natural idea is to consider restricting to the labeled sub-population and minimizing ℛ_select(f) = 𝔼[𝕀{Y^*≠ f(X) }| D = 1]. However, this risk may differ substantially from the oracle risk ℛ_oracle(·), because of the difference between the labeled sub-population and the full population (a.k.a., selection bias). As a result, classifiers trained according to labeled data may not perform well when applied to the whole population. Indeed, the historical decision D may be affected by both observable features X and unobservable features U that are in turn dependent with the true outcome Y^*. So selection bias can occur due to both the observable features X and unobservable features U, and we need to adjust for both to remove the selection bias, as formalized by the following assumption. [Selection on Unobservables] Y^* ⊥ D | X, U. There exist methods that correct for the selection bias due to observable features X under a stronger selection on observables assumption Y^* ⊥ D | X <cit.>. However, the weaker and more realistic <Ref> requires also correcting for the selection bias due to unobservable features U. This challenging setting is also known as the missing-not-at-random problem in the missing data literature, which cannot be solved without further assumptions or additional information. To tackle this challenge, we follow the recent literature <cit.> and exploit the fact that in many applications the historical decisions that cause the selective label problem were made by multiple decision-makers. In particular, the multiple decision-makers have two characteristics: (1) the decision-makers are heterogeneous in that they have different propensities to make a positive decision for the same instance; (2) the instances can be viewed as randomly assigned to the decision-makers so different decision-makers have a similar pool of cases. For example, <cit.> discuss the plausibility of these two characteristics in a range of applications such as judicial bail, medical treatment, and insurance approvals.
In the judicial bail example, characteristic (1) holds because different judges often have different degrees of leniency in making the bail decisions, and characteristic (2) holds when cases are randomly assigned to different judges, as is argued in <cit.>. In this paper, we propose to address the selection bias by using the decision-maker assignment Z as an instrumental variable for the historical decision D. Specifically, we note that Z satisfies the following assumptions that are standard in the IV literature <cit.>. [IV conditions] The assignment Z satisfies the following three conditions: * IV relevance: Z ⊥̸D | X. * IV independence: Z ⊥ (U, Y^*) | X. * Exclusion restriction: Z ⊥ Y | D, Y^*. The IV relevance condition requires that the decision-maker assignment Z is dependent with the decision D even after controlling for the observed features X. This condition holds when the decision-makers are heterogeneous (i.e., the characteristic (1) above) so that the identity of the decision-maker has a direct effect on the decision. The IV independence condition states that the decision-maker assignment is independent of the unobserved features U and true outcome Y^* given the observed features X. This condition trivially holds when the decision-maker is randomly assigned (i.e., characteristic (2) above). The exclusion restriction condition holds trivially since the observed outcome Y is entirely determined by the decision D and true outcome Y^* so the decision-maker assignment Z has no direct effect on Y. It then remains to study how to leverage the decision-maker IV Z in the learning problem given in <ref>. In particular, one fundamental question regards the identification aspect of the problem, that is, whether the full-population oracle risk ℛ_oracle(·) in <ref> and any Bayes optimal classifier can be recovered from the distribution of the selectively labeled data. We thoroughly investigate this problem in the next section. § IDENTIFICATION ANALYSIS In this section, we analyze the identification of the learning problem described in <ref> with the decision-maker IV available. Before the formal analysis, we first note that the oracle risk objective in <ref> has an equivalent formulation in terms of the full-population conditional expectation μ^*(X) ≜ 𝔼[Y^*| X] of the true outcome given the observed features X. The oracle risk ℛ_oracle satisfies that for any f: 𝒳↦{0, 1}, ℛ_oracle(f) = 𝔼[μ^*(X)] + 𝔼[(1 - 2 μ^*(X)) · f(X)]. <Ref> suggests that the identification of the oracle risk hinges on the identification of the conditional expectation function μ^*. If μ^* is uniquely determined from the distribution of the selectively labeled data, then so is the oracle risk ℛ_oracle. Moreover, optimizing the oracle risk ℛ_oracle is equivalent to optimizing the risk ℛ. Hence, we will focus on the risk function ℛ(·) in the sequel. §.§ Point Identification In this subsection, we first study when the full-population conditional expectation μ^* and risk function ℛ(·) can be uniquely identified from the distribution of the observed data. It is well known that under the strong selection-on-observables condition Y^* ⊥ D | X, the conditional expectation can be identified via μ^*(X) = 𝔼[Y | D = 1, X]. However, this identification is invalid in the presence of unobserved features U. Instead, we consider identification based on an instrumental variable Z. Below we provide a sufficient identification condition adapted from <cit.> on causal effect estimation with instrumental variables.
Assume <Ref> and the following condition: Cov( Cov(D, Z | X, U), 𝔼[Y^*| X, U] | X ) = 0,   almost surely. Then both μ^* and ℛ_oracle are identifiable: μ^*(X) = r(X) ≜ Cov(DY, Z | X) / Cov(D, Z | X),  ℛ_oracle(f) = 𝔼[r(X) + (1 - 2r(X) ) · f(X) ]. <Ref> states that under the condition in <ref>, the function μ^* can be identified by the ratio r(X) of two conditional covariances involving only observed data, and the risk function ℛ(·) can be identified by ℛ_point(·) accordingly. In <Ref>, we will use this formulation to develop a learning algorithm. The condition <ref> is a generalization of the no unmeasured common effect modifier assumption in <cit.> from binary IV to multi-valued IV. It holds when conditionally on observed features X, all unobserved variables in U that modify the D-Z correlations are uncorrelated with those in U that affect the true outcome Y^*. To understand <ref>, we further provide a proposition below. Under <Ref>, <ref> holds if the following holds almost surely: Cov( 𝔼[D | Z = j, X, U] - 𝔼[D | Z = k, X, U], 𝔼[Y^* | X, U ] | X ) = 0, ∀ j, k ∈ [m]. When m = 2, <ref> is sufficient and necessary for <ref>. <Ref> provides a sufficient condition for the key identification condition in <ref>. It allows the unobserved features to impact both the true outcome Y^* and the decision D, but limits the impact in a specific way. For example, the condition in <ref> holds when 𝔼[D | Z = j, X, U] = g_j(X) + q(U) for some functions g_j and q, namely, when the impact of the unobserved features is additive and uniform over all decision-makers. This means that although different decision-makers have different decision rules, on expectation they use the unobserved feature information in the same way. This is arguably a strong assumption that may often be violated in practice, but it formally shows additional restrictions that may be needed to achieve exact identification, highlighting the challenge of learning in the selective label setup even with the decision-maker IV. §.§ Partial Identification Bounds As we show in <Ref>, achieving point identification may require imposing strong restrictions that can be easily violated in practice (see <ref>). In this subsection, we avoid strong point identification conditions and instead provide tight partial identification bounds on the conditional expectation function μ^* and the risk function ℛ under only a mild condition below. There exist two known functions a(X) ∈ [0, 1] and b(X) ∈ [0, 1] such that a(X) ≤𝔼[Y^*| X, U] ≤ b(X),   almost surely. <Ref> requires known bounds on the conditional expectation 𝔼[Y^* | X, U] but otherwise allows it to depend on the unobserved features U arbitrarily. This assumption is very mild because it trivially holds for a(X) = 0, b(X) = 1 given that the true outcome Y^* is binary. Under this condition, we obtain the following tight bound on μ^* following <cit.>. Assume <Ref> hold. Then μ^*(X) ∈ [l(X), u(X)] almost surely, where l(X) = max_z ∈ [m] {𝔼[DY | X, Z = z] + a(X) ·(1 - 𝔼[D | X, Z = z] ) }, u(X) = min_z ∈ [m] {𝔼[DY | X, Z = z] + b(X) ·(1 - 𝔼[D | X, Z = z] ) }. Moreover, l(X) and u(X) are the tightest bounds on μ^*(X) under <Ref>. We note that for a binary instrument, the bounds in <ref> coincide with the linear programming bounds in <cit.>. We discuss their connections in <Ref>. Based on <Ref>, we can immediately derive tight bounds on the risk function ℛ.
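As an aside, a hedged plug-in sketch of how the bounds l(X) and u(X) above could be computed in practice is given below: the conditional expectations 𝔼[DY | X, Z=z] and 𝔼[D | X, Z=z] are estimated by simple per-decision-maker logistic fits (an illustrative choice, not the paper's estimator), and with a(X)=0, b(X)=1 the bounds reduce to a max/min over decision-makers. It assumes Y is coded as -1 when unlabeled, as in the earlier synthetic sketch, and that both classes appear for each decision-maker.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def partial_id_bounds(X, D, Y, Z, m, a=0.0, b=1.0):
    """Plug-in estimates of l(X) and u(X) with constant bounds a(X)=a, b(X)=b."""
    Xc = np.asarray(X).reshape(-1, 1)
    l = np.empty((len(Xc), m))
    u = np.empty((len(Xc), m))
    dy = D * np.maximum(Y, 0)                     # DY, treating Y=-1 ("NA") as 0 since D=0 there
    for z in range(m):
        idx = Z == z
        e_dy = LogisticRegression().fit(Xc[idx], dy[idx])    # estimate of E[DY | X, Z=z]
        e_d = LogisticRegression().fit(Xc[idx], D[idx])      # estimate of E[D  | X, Z=z]
        p_dy, p_d = e_dy.predict_proba(Xc)[:, 1], e_d.predict_proba(Xc)[:, 1]
        l[:, z] = p_dy + a * (1 - p_d)
        u[:, z] = p_dy + b * (1 - p_d)
    # tightest bounds: maximize the lower bound and minimize the upper bound over z
    return l.max(axis=1), u.min(axis=1)
```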
Under assumptions in <Ref>, ℛ̲(f) ≤ ℛ_oracle(f) ≤ ℛ̅(f) for any f, where ℛ̲(f) = 𝔼[ min_μ(X) ∈ [l(X), u(X)]{μ(X) + (1 - 2μ(X)) f(X) }], ℛ̅(f) = 𝔼[ max_μ(X) ∈ [l(X), u(X)]{μ(X) + (1 - 2μ(X)) f(X) }]. When we only assume the weak (but credible) <Ref>, the prediction risk objective is intrinsically ambiguous in that it cannot be uniquely identified from data. Instead, the risk can be only partially identified according to <Ref>. However, the partial identification risk bounds are still useful in learning an effective classification rule. One natural idea is to take a robust optimization approach and minimize the worst-case risk upper bound <cit.>. See <Ref> for details. §.§ Comparisons with Existing Works <cit.> propose a "contraction" technique to evaluate the performance of a given classifier in the judicial bail problem. Specifically, they provide an estimator for the "failure rate" of a classifier, i.e., the probability of falsely classifying a crime case into the no-crime class. They note that this estimator can be biased and provide a bound on its bias. Our results offer a new perspective to understand the bias. Indeed, <cit.> do not impose strong restrictions like <ref>, so the "failure rate" is actually not identified in their setting. In this case, any point estimator, including that in <cit.>, is likely to be biased <cit.>. In this paper, we explicitly acknowledge the lack of identification and provide tight bounds on the prediction risk, in order to avoid point estimates with spurious bias. <cit.> studies the evaluation of a prediction rule with selective labels in the partial identification setting. They consider bounding a variety of different prediction risk measures and one of their IV bounds can recover our bounds in <Ref>. Their paper focuses on risk evaluation and their risk bounds are not easy to optimize. In contrast, we thoroughly study the learning problem in <Ref> and propose a unified weighted classification formulation for both point and partial identification settings, thereby enabling efficient learning via standard classification packages. § A WEIGHTED LEARNING ALGORITHM WITH CONVEX SURROGATE LOSSES In this section, we propose a weighted learning algorithm to learn a classifier from data. Our algorithm is generic and can handle both the point-identified risk in <ref> and the worst-case risk for the partially identified setting. §.§ A Weighted Formulation of Risk Objectives We first note that a binary-valued classifier is typically obtained according to the sign of a real-valued score function, i.e., we can write f(x) = (sgn(h(x)) + 1)/2 ∈{0, 1} for a function h: 𝒳↦ℝ. With slight abuse of notation, we use ℛ_oracle(h) to denote the oracle risk of a classifier f given by a score function h. We aim to learn a good score function within a certain hypothesis class ℋ (e.g., linear function class, decision trees, neural networks). Let η^*(X) = μ^*(X) - 1/2. For any h: 𝒳↦ℝ, ℛ_oracle(h) = ℛ(h) + C_0 where ℛ(h) ≜ -𝔼[|η^*(X)| · sgn(η^*(X)) · sgn(h(X))] and C_0 is a constant that does not depend on h. <Ref> shows that for the learning purpose, it suffices to only consider the shifted risk ℛ(h). Below we provide a weighted formulation for this shifted risk in the two identification settings. Let ℛ(h; w) = 𝔼[|w(X)| · 𝕀{sgn(w(X)) ≠ sgn(h(X))}]. Under the assumptions in <Ref>, ℛ(h) = ℛ_point(h) ≜ ℛ(h; w_point) where w_point(X) = r(X)-1/2. Under the assumptions in <Ref>, the tightest upper bound on ℛ(h) is given by ℛ_partial(h) ≜ ℛ(h; w_partial) where w_partial(X) = max{u(X) - 1/2, 0} + min{l(X) - 1/2, 0}.
We can then learn the score function based on minimizing the point-identified risk ℛ_point or the partially-identified risk bound ℛ_partial respectively, which corresponds to minimizing ℛ(h; w) for a suitable weight function w. In particular, optimizing ℛ(h; w) amounts to solving a weighted classification problem where we use the sign of h(X) to predict the sign of w(X) with a misclassification penalty |w(X)|. However, minimizing the weighted risk ℛ(h; w) is generally intractable because of the non-convex and non-smooth indicator function 𝕀{·} therein. A common approach to overcome this challenge is to replace the indicator function by a convex surrogate function <cit.>. We follow this approach and focus on the surrogate risk: ℛ_Φ(h; w) := 𝔼[|w(X)| · Φ( sgn(w(X)) · h(X) ) ], where Φ: ℝ ↦ [0, ∞) is a convex surrogate loss, such as the Hinge loss Φ(α) = max{1 - α, 0}, logistic loss Φ(α) = log(1 + e^-α), and exponential loss Φ(α) = e^-α, and so on. §.§ Empirical Risk Minimization We further propose a learning algorithm with empirical data based on <ref>, which is summarized in <Ref>. We describe its implementations in this subsection and leave a theoretical analysis of its generalization error bound to <Ref>. *Cross-fitting. Note the surrogate risk ℛ_Φ(h; w) involves an unknown weight function w, so we need to estimate w before approximating the surrogate risk. We call w a nuisance function since it is not of direct interest. In <Ref>, we apply a cross-fitting approach to estimate w and the surrogate risk function. Specifically, we randomly split the data into K folds and then train ML estimators for the weight function w using all but one fold data. This gives K weight function estimators ŵ_1, …, ŵ_K, and each estimator ŵ_k is evaluated only at data points in I_k held out from its training. This cross-fitting approach avoids using the same data to both train a nuisance function estimator and evaluate the nuisance function value, thereby effectively alleviating the overfitting bias. This approach has been widely used in statistical inference and learning with ML nuisance function estimators <cit.>. *Estimating weight function w. According to <Ref>, the weight functions in the point-identified and partially identified settings are different so need to be estimated separately. In the point-identified setting (M = point-id in <Ref>), we need to first estimate the function r(·) in <ref>, the ratio of the two conditional covariances Cov(DY, Z | X) and Cov(D, Z | X). A direct approach is to first estimate the two conditional covariances separately and then take a ratio. For example, we can write Cov(DY, Z | X) = 𝔼[DYZ | X] - 𝔼[DY | X]𝔼[Z | X], where the conditional expectations can be estimated by regression algorithms with DYZ, DY, Z as labels and X as features respectively. We can similarly estimate Cov(D, Z | X). Alternatively, if we focus on tree or forest methods, then we may employ the Generalized Random Forest method that can estimate a conditional covariance ratio function in a single step <cit.>. In the partially identified setting (M = partial-id in <Ref>), we need to first estimate the bounds l(·) and u(·) in <ref> that involve conditional expectations 𝔼[DY | X, Z] and 𝔼[D | X, Z]. We can estimate the latter via regression algorithms and then plug them into <ref>. *Classifier Optimization. We can solve for a classifier by minimizing the empirical approximation for the surrogate risk, as is shown in <ref>.
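As a rough illustration of the cross-fitted procedure just described, the sketch below estimates the weight function on K−1 folds, evaluates it on the held-out fold, and then fits a weighted logistic regression (logistic surrogate loss) with sgn(ŵ) as labels and |ŵ| as sample weights. It relies on standard scikit-learn components; the helper names, the example nuisance estimator, and the reuse of iv_bounds and partial_id_weights from the previous sketch are our own assumptions, not the paper's algorithm listing.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def fit_selective_label_classifier(X, Z, D, Y, weight_fn, n_splits=5, seed=0):
    """Cross-fitted weighted surrogate-loss learning (a sketch, not the paper's code).

    weight_fn(X_tr, Z_tr, D_tr, Y_tr, X_ev) -> estimated weights on X_ev,
    built e.g. from the point- or partial-identification weights above.
    """
    n = len(X)
    w_hat = np.zeros(n)
    for train_idx, eval_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # estimate the nuisance weight function on K-1 folds, evaluate on the held-out fold
        w_hat[eval_idx] = weight_fn(X[train_idx], Z[train_idx], D[train_idx],
                                    Y[train_idx], X[eval_idx])
    # weighted classification: predict sign(w_hat) with misclassification penalty |w_hat|
    labels = (w_hat > 0).astype(int)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels, sample_weight=np.abs(w_hat))
    return clf

def partial_id_weight_fn(X_tr, Z_tr, D_tr, Y_tr, X_ev, m=10):
    """One possible nuisance estimator: regress DY and D on (X, Z) and plug the
    fitted values into the bounds l(X), u(X). Instruments are coded 0, ..., m-1."""
    dy_tr = np.where(D_tr == 1, D_tr * Y_tr, 0.0)   # DY is observed since Y enters only when D = 1
    XZ_tr = np.column_stack([X_tr, Z_tr])
    reg_dy = GradientBoostingRegressor().fit(XZ_tr, dy_tr)
    reg_d = GradientBoostingRegressor().fit(XZ_tr, D_tr)
    E_DY = np.column_stack([reg_dy.predict(np.column_stack([X_ev, np.full(len(X_ev), z)]))
                            for z in range(m)])
    E_D = np.column_stack([reg_d.predict(np.column_stack([X_ev, np.full(len(X_ev), z)]))
                           for z in range(m)])
    l, u = iv_bounds(E_DY, E_D)            # bounds from the previous sketch
    return partial_id_weights(l, u)        # weights from the previous sketch
```

Up to the default ℓ2 regularization in scikit-learn, the weighted logistic fit minimizes exactly the empirical surrogate risk above with Φ equal to the logistic loss.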
We note that the minimization problem in <ref> corresponds to a weighted classification problem with sgn(ŵ_k(X_i))'s as the labels and |ŵ_k(X_i)|'s as weights. So it can be easily implemented by off-the-shelf ML classification packages that take additional weight inputs. For example, when the surrogate loss Φ is the logistic loss, we can simply run a logistic regression with the aforementioned labels and weights. We can also straightforwardly incorporate regularization in <ref> to reduce overfitting. § GENERALIZATION ERROR BOUNDS In this section, we theoretically analyze our proposed algorithm by upper bounding its generalization error. Specifically, we hope to bound its suboptimality in terms of the target risk objective, either ℛ_point or ℛ_partial, depending on the identification setting. In the following lemma, we first relate this target risk suboptimality to the surrogate risk. When the surrogate loss function Φ is logistic loss, hinge loss, or exponential loss, then the ĥ output from <ref> under identification mode M ∈{point, partial} satisfies ℛ_M(ĥ) - inf_h ℛ_M(h) ≤ ℰ(ℋ) + 𝒜(ℋ), where the estimation error ℰ(ℋ) := ℛ_Φ(ĥ; w_M) - inf_h ∈ℋ ℛ_Φ(h; w_M), the approximation error 𝒜(ℋ) := inf_h ∈ℋ ℛ_Φ(h; w_M) - inf_h ℛ_Φ(h; w_M), and the form of ℛ_M(h) is defined in <Ref>. <Ref> shows that the suboptimality of the output classifier can be upper bounded by the sum of two terms 𝒜(ℋ) and ℰ(ℋ), both defined in terms of the surrogate risk ℛ_Φ. The term 𝒜(ℋ) captures the approximation error of the hypothesis class ℋ. If the class ℋ is flexible enough to contain any optimizer of the surrogate risk, then the approximation error term 𝒜(ℋ) becomes zero. The term ℰ(ℋ) quantifies how well the output classifier ĥ approximates the classifier within ℋ in terms of the surrogate risk, and we refer to it as an estimation error term. In the sequel, we focus on deriving a generalization bound for estimation error ℰ(ℋ). Fix the identification mode M ∈{point, partial}. Then ℰ(ℋ) ≤ 2/K∑_k=1^K sup_h ∈ℋ| ℛ_Φ(h; ŵ_k) - ℛ_Φ(h; w_M) | + 2/K∑_k=1^K sup_h ∈ℋ|ℛ_Φ^k(h; ŵ_k) - ℛ_Φ(h; ŵ_k) | . The first term in <Ref> captures the error due to estimating the unknown weight function w_M, and the second term is an empirical process term. To bound these two terms, we need to further specify the estimation errors of the nuisance estimators, as we will do in <Ref>. In <Ref>, we further restrict the complexity of the hypothesis class ℋ by limiting the growth rate of its covering number. This is a common way to quantify the complexity of a function class in statistical learning theory <cit.>. Let ŵ_k(·) be the k-th estimator of the weight function w_M in <Ref>. Assume that there exist positive constants C_3, C_4, γ such that max{|w_M(X)|, |ŵ_1(X)|, …, |ŵ_K(X)|} ≤ C_3 almost surely, and 𝔼[|ŵ_k(X) - w_M(X)|] ≤ C_4 n^-γ for all k ∈ [K]. Let C_5, C_6, τ be some positive constants. Suppose ‖h‖_∞ < C_5 for any h ∈ℋ. Moreover, assume that the ϵ-covering number of ℋ with respect to the ‖·‖_∞ norm, denoted as N(ϵ; ℋ; ‖·‖_∞), satisfies log N(ϵ; ℋ; ‖·‖_∞) ≤ C_6 ϵ^-τ for any ϵ∈ (0, C_5]. We can upper bound the two terms in <Ref> under <Ref> and plug the bounds into <Ref>. This leads to the following theorem. Fix the identification mode M ∈{point, partial}. Assume that <Ref> hold. Given 0 < τ < 2, with probability at least 1 - δ, the ĥ output from <ref> satisfies ℛ_M(ĥ) - min_h ℛ_M(h) ≤ 𝒜(ℋ) + 𝒪( n^-γ + n^-1/2(1 + √(log(1/δ))) ).
As we can see from <Ref>, if the estimator of weight function w(·) in Assumption <ref> converges to its true value at rate γ = 1/2, the estimation error of excess risk is then simplified to ℛ_M(ĥ) - min_hℛ_M(h) ≤𝒜(ℋ) + 𝒪( n^-1/2(2 + √(log(1/δ))) ). § NUMERIC EXPERIMENTS In this section, we evaluate the performance of our proposed algorithm in a semi-synthetic experiment based on the home loans dataset from <cit.>. This dataset consists of 10459 observations of approved home loan applications. The dataset records whether the applicant repays the loan within 90 days overdue, which we view as the true outcome Y^*, and various transaction information of the bank account. The dataset also includes a variable called ExternalRisk, which is a risk score assigned to each application by a proprietary algorithm. We consider ExternalRisk and all transaction features as the observed features X. *Synthetic selective labeling. In this dataset the label of interest is fully observed, so we choose to synthetically create selective labels on top of the dataset. Specifically, we simulate 10 decision-makers (e.g., bank officers who handle the loan applications) and randomly assign one to each case. We simulate the decision D from a Bernoulli distribution with a success rate p_D that depends on an “unobservable” variable U, the decision-maker identity Z, and the ExternalRisk variable (which serves as an algorithmic assistance to human decision-making). We blind the true outcome Y^* for observations with D = 0. Specifically, we construct U as the residual from a random forest regression of Y^* with respect to X over the whole dataset, which is naturally dependent with Y^*. We then specify p_D according to Model 1: p_D = β·( α·expit{U } + (1 - α) ·expit{ (1 + Z) ·ExternalRisk}), Model 2: p_D = β·( expit{α· U + (1 - α) · (1 + Z) ·ExternalRisk}). Here the expit function is given by expit(t) = 1/(1 + exp(-t)). The parameter α∈ (0, 1) controls the impact of U on the labeling process and thus the degree of selection bias, and the parameter β∈ (0, 1) further adjusts the overall label missingness. We can easily verify that the point-identification condition in <ref> holds under Model 1 but it does not hold under Model 2. *Methods and Evaluation. We randomly split our data into training and and testing sets at a 7:3 ratio. On the training set, we apply four types of different methods. The first two are our proposed method corresponding to the point identification and partial identification settings (“point learning” and “partial learning”) respectively. The third and fourth are to run classification algorithms on the labeled subset only (“selected sample”) and the full training set (“full sample”). For each type of method, we try multiple different classification algorithms including AdaBoost, Gradient Boosting, Logistic Regression, Random Forest, and SVM. Our proposed method also need to estimate some unknown weight functions, which we implement by K=5 fold cross-fitted Gradient Boosting. All hyperparameters are chosen via 5-fold cross-validation. See Appendix <ref> for more details. We evaluate the classification accuracy of the resulting classifiers on the testing data. *Results and Discussions. <Ref> reports the testing accuracy of each method in 50 replications of the experiment when β = 1 and α∈{0.5, 0.7, 0.9}. We observe that as the degree of selection bias α grows, the gains from using our proposed method relative to the “selected sample” baseline also grows, especially for the “partial learning” method. 
Interestingly, the “partial learning” method has better performance even under Model 1 when the point identification condition holds. This is perhaps because the “point learning” method requires estimating a conditional variance ratio, which is often difficult in practice. In contrast, the “partial learning” method only requires estimating conditional expectations and tends to be more stable. This illustrates the advantage of the “partial learning” method: it is robust to the failure of point-identification, and it can achieve more stable performance even when point-identification indeed holds. In <Ref>, we further report the results for β∈{0.5, 0.25} with more missing labels. These are more challenging settings so the performance of our method and the “selected sample” baseline all degrades, but our proposed methods (especially the “partial learning” method) still outperform the baseline. § CONCLUSION In this paper, we study the evaluation and learning of a prediction rule with selective labels. We exploit a particular structure of heterogeneous decision-makers that arises in many applications. Specifically, we view the decision-maker assignment as an instrumental variable and rigorously analyze the identification of the full-population prediction risk. We propose a unified weighted formulation of the prediction risk (or its upper bound) when the risk is point-identified (or only partially identified). We develop a learning algorithm based on this formulation that can be efficiently implemented via standard machine learning packages, and empirically validate its benefit in numerical experiments. § SUPPLEMENT TO IDENTIFICATION ANALYSIS §.§ Proofs for <Ref> To begin with, by the iterated law of expectation, we have 𝔼[𝕀{Y^*≠ f(X) }] = 𝔼[𝔼[ 𝕀{Y^*≠ f(X) }| X ] ] = 𝔼[ℙ(Y^* = 1, f(X) = 0 | X) + ℙ(Y^* = 0, f(X) = 1 | X) ] = 𝔼[ℙ(Y^* = 1 | f(X) = 0, X) ·ℙ(f(X) = 0 | X ) ] + 𝔼[ℙ(Y^* = 0 | f(X) = 1, X) ·ℙ(f(X) = 1 | X) ]. It is essential to notice that 𝔼[ℙ(Y^* = 1 | f(X) = 0, X) ·ℙ(f(X) = 0 | X ) ] = 𝔼[ ℙ(Y^* = 1 | X) ·ℙ(f(X) = 0 | X ) ] = 𝔼[ 𝔼[Y^*| X] ·𝔼[1 - f(X) | X] ] = 𝔼[ 𝔼[μ^*(X) · (1 - f(X)) | X ] ] = 𝔼[μ^*(X) · (1 - f(X)) ]. The first equality use the consistency of the probability conditioning on X, and the second equality holds because Y^* and f(X) are both binary random variables with support {0, 1}. In a similar manner, we have 𝔼[ℙ(Y^* = 0 | f(X) = 1, X) ·ℙ(f(X) = 1 | X ) ] = 𝔼[ℙ(Y^* = 0 | X) ·ℙ(f(X) = 1 | X ) ] = 𝔼[𝔼[1 - Y^*| X] ·𝔼[f(X) | X] ] = 𝔼[(1 - μ^*(X)) · f(X) ]. Aggregating these together, we have 𝔼[𝕀(Y^*≠ f(X)) ] = 𝔼[μ^*(X) ] + 𝔼[(1 - 2 μ^*(X)) · f(X) ]. We prove this result in two steps. Step I. For any z ∈𝒵, the conditional expectation 𝔼[Y^*| X] could be expressed as 𝔼[Y^*| X] = 𝔼[𝔼[Y^*| X, U] | X ] = 𝔼[𝔼[Y^*| X, U, Z = z] | X ] = 𝔼[𝔼[DY^*| X, U, Z = z] | X ] + 𝔼[𝔼[(1 - D)Y^*| X, U, Z = z] | X ] = 𝔼[𝔼[DY | X, U, Z = z] | X ] + 𝔼[𝔼[Y^*| X, U, Z = z] ·𝔼[(1 - D) | X, U, Z = z] | X ] = 𝔼[𝔼[DY | X, U, Z = z] | X, Z = z ] + 𝔼[𝔼[Y^*| X, U] ·𝔼[(1 - D) | X, U, Z = z] | X ]. In the first equality, we apply the iterated law of expectation, and the IV independence is used in the second equality. In the fourth equality, we utilize the unconfoundedness and the last equality follows from IV independence again. Equivalently, we have 𝔼[Y^*| X] = 𝔼[DY | X, Z = z] + 𝔼[𝔼[Y^*| X, U] ·𝔼[(1 - D) | X, U, Z = z] | X ]. By subtracting 𝔼[Y^*| X] on both sides of the equality, we have for any z ∈𝒵, 0 = 𝔼[DY | X, Z = z] - 𝔼[𝔼[Y^*| X, U] ·𝔼[D | X, U, Z = z] | X ]. 
Multiplying the weight z ·ℙ(Z = z | X) on both sides of equation above generates the following: 0 = 𝔼[zDY | X, Z = z] ·ℙ(Z = z | X) - 𝔼[𝔼[Y^*| X, U] ·𝔼[zD | X, U, Z = z] ·ℙ(Z = z | X, U) | X ]. Here we using the fact that instrumental variable Z is independent with U given X (<Ref>) again. Taking the summation (or integral) over 𝒵 yields the following results: 0 = 𝔼[ZDY | X] - 𝔼[𝔼[Y^*| X, U] ·𝔼[ZD | X, U] | X ]. Finally, observe that { 𝔼[ZDY | X] = (DY, Z | X) + 𝔼[DY | X] ·𝔼[Z | X] 𝔼[ZD | X, U] = (D, Z | X, U) + 𝔼[D | X, U] ·𝔼[Z | X, U] . we have 0 = (DY, Z | X)_I + 𝔼[DY^*| X] ·𝔼[Z | X]_II - 𝔼[𝔼[Y^*| X, U] ·(D, Z | X, U) | X ]_III - 𝔼[𝔼[Y^*| X, U] ·𝔼[D | X, U] ·𝔼[Z | X, U] | X ]_IV, where here we use the fact that 𝔼[DY^*| X] = 𝔼[DY | X]. By the assumptions of unconfoundedness and IV independence again, we have 𝔼[Y^*| X, U] ·𝔼[D | X, U] ·𝔼[Z | X, U] = 𝔼[D Y^*| X, U] ·𝔼[Z | X]. Therefore the II and IV terms in <ref> cancel out, which show us that 0 = (DY, Z | X) - 𝔼[𝔼[Y^*| X, U] ·(D, Z | X, U) | X ]. Step II. By assuming that ( (D, Z | X, U), 𝔼[Y^*| X, U] | X ) = 0, we have 𝔼[𝔼[Y^*| X, U] ·(D, Z | X, U) | X ] = 𝔼[𝔼[Y^*| X, U] | X ] ·𝔼[(D, Z | X, U) | X ]. According to conditional covariance identity, we have (D, Z | X) = 𝔼[(D, Z | X, U) | X ] + 𝔼[(𝔼[D | X, U], 𝔼[Z | X, U] ) | X ] = 𝔼[(D, Z | X, U) | X ] + 𝔼[(𝔼[D | X, U], 𝔼[Z | X] ) | X ] = 𝔼[(D, Z | X, U) | X ]. Combining <ref> with <ref> leads us to the desired result, that is, 𝔼[Y^*| X] = (DY, Z | X) /(D, Z | X) := r(X). Substitute the result into risk function ℛ_oracle(f) in (<ref>) completes this proof. Notice that the conditional covariance (D, Z | X, U) could be decomposed as (D, Z | X, U) = 𝔼[DZ | X, U] - 𝔼[D | X, U] ·𝔼[Z | X, U] = ∑_j=1^m𝔼[DZ | X, U, Z = j] ·ℙ(Z = j | X, U) - 𝔼[D | X, U] ·∑_j=1^m j ·ℙ(Z = j | X, U) = ∑_j=1^m j ·𝔼[D | X, U, Z = j] ·ℙ(Z = j | X, U) - ∑_k = 1^m𝔼[D | X, U, Z = k] ·ℙ(Z = k | X, U) ·∑_j=1^m j ·ℙ(Z = j | X, U) = ∑_k=1^m∑_j=1^m j ·ℙ(Z = j | X, U) ·ℙ(Z = k | X, U) ·[𝔼[D | X, U, Z = j] - 𝔼[D | X, U, Z = k] ]. Therefore, if ( 𝔼[D | Z = j, X, U] - 𝔼[D | Z = k, X, U], 𝔼[Y^* | X, U ] | X ) = 0, ∀ j, k ∈ [m] almost surely, <Ref> is guaranteed by the decomposition above. According to (<ref>), for any z ∈𝒵, the conditional expectation 𝔼[Y^*| X] could be expressed as 𝔼[Y^*| X] = 𝔼[DY | X, Z = z] + 𝔼[𝔼[Y^*| X, U] ·𝔼[(1 - D) | X, U, Z = z] | X ]. Since this formula holds for every z ∈𝒵, we must have 𝔼[Y^*| X] = max_z ∈𝒵{𝔼[DY | X, Z = z] + 𝔼[𝔼[Y^*| X, U] ·𝔼[(1 - D) | X, U, Z = z] | X ] } 𝔼[Y^*| X] = min_z ∈𝒵{𝔼[DY | X, Z = z] + 𝔼[𝔼[Y^*| X, U] ·𝔼[(1 - D) | X, U, Z = z] | X ] }. Without loss of generality, we assume 𝔼[(1 - D) | X, U, Z = z] ≥ 0. As we have assumed that 𝔼[Y^*| X, U] ∈ [α(X), β(X)] in <Ref>, we then have 𝔼[Y^*| X] ≥max_z ∈𝒵{𝔼[DY | X, Z = z] + α(X) ·𝔼[𝔼[(1 - D) | X, U, Z = z] | X, Z = z ] } = max_z ∈𝒵{𝔼[DY | X, Z = z] + α(X) ·𝔼[(1 - D) | X, Z = z] }. If α(X) is a sharp lower bound for 𝔼[Y^*| X, U], the lower bound (<ref>) is also sharp for 𝔼[Y^*| X]. Similarly, we have 𝔼[Y^*| X] ≤min_z ∈𝒵{𝔼[DY | X, Z = z] + β(X) ·𝔼[𝔼[(1 - D) | X, U, Z = z] | X, Z = z ] } = min_z ∈𝒵{𝔼[DY | X, Z = z] + β(X) ·𝔼[(1 - D) | X, Z = z] }. Combining the results above, we have max_z ∈𝒵 {𝔼[DY | X, Z = z] + α(X) ·(1 - 𝔼[D | X, Z = z] ) } ≤𝔼[Y | X] ≤ min_z ∈𝒵 {𝔼[DY | X, Z = z] + β(X) ·(1 - 𝔼[D | X, Z = z] ) }. For any μ'(X) ∈ [l(X) , u(X)], we have (1/2 - μ'(X))f(X) ≤max_μ(X) ∈ [l(X), u(X)]{(1 - 2 μ(X))f(X)}. The inequality above still holds when we take expectation on both sides, which suggests that ℛ(f) ≤ℛ(f). 
We can show that ℛ(f) ≥ℛ(f) in a similar way. §.§ Balke and Pearl's Bound <cit.> provides partial identification bounds for the average treatment effect of a binary treatment with a binary instrumental variable. In this section, we adapt their bound to our setting with partially observed labels and a binary IV (i.e., the assignment to one of two decision-makers). Under <Ref>, we have following decomposition of joint probability distribution of (Y, D, Z, U) ℙ(Y, D, Z, U) = ℙ(Y | D, U) ·ℙ(D | Z, U) ·ℙ(Z) ·ℙ(U). Here we omit the observed covariates X for simplicity, or alternatively, all distributions can be considered as implicitly conditioning on X. Now we define three response functions which characterize the values of Z, D(0), D(1), and Y^*: r_Z = { 0 if Z = 0 1 if Z = 1 ., r_D = { 0 if D(0) = 0 and D(1) = 0 1 if D(0) = 0 and D(1) = 1 2 if D(0) = 1 and D(1) = 0 3 if D(0) = 1 and D(1) = 1 ., r_Y = { 0 if Y^* = 0 1 if Y^* = 1 .. Next, we specify the joint distribution of unobservable variables r_D and r_Y as follows: q_kj = ℙ(r_D = k, r_Y = j) ∀ k ∈{0, 1, 2, 3}, j ∈{0, 1 }, which satisfies the constraint ∑_k=0^3 (q_k0 + q_k1) = 1. Then the target mean parameter of the true outcome can be written as a linear combinations of the q's. Moreover, we note that the observable distribution ℙ(Y, D | Z) is fully specified by the following six variables p_na,0 = ℙ(D = 0 | Z = 0) p_na,1 = ℙ(D = 0 | Z = 1) p_01,0 = ℙ(Y = 0, D = 1 | Z = 0) p_01,1 = ℙ(Y = 0, D = 1 | Z = 1) p_11,0 = ℙ(Y = 1, D = 1 | Z = 0) p_11,1 = ℙ(Y = 1, D = 1 | Z = 1), with constraints p_11,0 + p_01,0 + p_na,0 = 1 and p_11,1 + p_01,1 + p_na,1 = 1. We also have the following relation between p's and q's: p_na,0 = q_00 + q_01 + q_10 + q_11 p_01,0 = q_20 + q_30 p_11,0 = q_21 + q_31 p_na,1 = q_00 + q_01 + q_20 + q_21 p_01,1 = q_10 + q_30 p_11,1 = q_11 + q_31. Therefore, we have p = Pq where p=(p_na,0, …, p_11,1), q=(q_00, …, q_31), and P = [ 1 1 0 0 1 1 0 0; 0 0 1 1 0 0 0 0; 0 0 0 0 0 0 1 1; 1 0 1 0 1 0 1 0; 0 1 0 1 0 0 0 0; 0 0 0 0 0 1 0 1; ]. Then the lower bound on Y^* can be written as the optimal value of the following linear programming problem q_01 + q_11 + q_21 + q_31 subject to ∑_k=0^3∑_j=0^1 q_kj = 1 P q = p q_kj≥ 0 k ∈{0, 1, 2, 3}, j ∈{0, 1}. . Similarly, the upper bound on Y^* can be written as the optimal value of the following optimization problem: q_01 + q_11 + q_21 + q_31 subject to ∑_k=0^3∑_j=0^1 q_kj = 1 P q = p q_kj≥ 0 k ∈{0, 1, 2, 3}, j ∈{0, 1}. In fact, by simply comparing the variables in the objective function 𝔼[Y^*] = q_01 + q_11 + q_21 + q_31 and those in constraints, one could find that p_11,0 = q_21 + q_31≤𝔼[Y^*] p_11,1 = q_11 + q_31≤𝔼[Y^*] p_11,0 + p_na,0 = q_00 + q_10 + q_01 + q_11 + q_21 + q_31≥𝔼[Y^*] p_11,1 + p_na,1 = q_00 + q_20 + q_01 + q_11 + q_21 + q_31≥𝔼[Y^*]. If we let L = max{p_11, 0, p_11, 1} = max_z ∈{0, 1} {ℙ(Y = 1, D = 1 | Z = z) } U = min{p_11,0 + p_na, 0, p_11,1 + p_na, 1} = min_z ∈{0, 1} {ℙ(Y = 1, D = 1 | Z = z) + ℙ(D = 0 | Z = z) }. We then have the following partial bounds of 𝔼[Y | X]. L ≤ 𝔼[Y^*] ≤ U. According to <cit.>, the bounds above are tight for 𝔼[Y^*]. We note that if we condition on X in these bounds, then the corresponding bound on Y^* | X coincide with the bounds in <Ref> specialized to a binary instrument. § PROOFS FOR <REF> Recall that ℛ_oracle(f) = 𝔼[μ^*(X) + (1 - 2 μ^*(X)) f(X)] where μ^*(X) := 𝔼[Y^*| X]. Suppose there is a real-valued function h on 𝒳 such that for each f, we have f(X) = ([h(X)] + 1)/2. 
Let η^*(X) = μ^*(X) - 1/2, we have, ℛ_oracle(f) = 𝔼[μ^*(X) + (1 - 2 μ^*(X)) ([h(X)] + 1/2) ] = 1/2 + 𝔼[(1/2 - μ^*(X) ) [h(X)] ] = 1/2 + 𝔼[(- η^*(X) ) [h(X)] ] := ℛ_oracle(h). Here we redefine the oracle risk function ℛ_oracle(h) with respect to h, which can be further decomposed into ℛ_oracle(h) = 1/2 + 𝔼[ (- η^*(X) ) [h(X) ] ] = 1/2 + 𝔼[|η^*(X)| ·[- η^*(X) ] [h(X) ] ] = 1/2 + 𝔼[|η^*(X)| ·(2 𝕀{[η^*(X) ] ≠[h(X) ] } - 1 ) ] = 1/2 - 𝔼[|η^*(X)|] + 2 𝔼[|η^*(X)| ·𝕀{[η^*(X) ] ≠[h(X) ] }] := C_0 + ℛ(h). Here we let C_0 = 1/2 - 𝔼[|η^*(X)|] and ℛ(h) := 𝔼[|η^*(X)| ·𝕀{[η^*(X) ] ≠[h(X) ] }] and the constant term 2 is omitted as it does not affect the optimization over h. Therefore, in the sequel, we only consider optimizing the risk function ℛ(h) instead. Recall from <Ref> that ℛ(h) = 𝔼[|η^*(X)| ·𝕀{[η^*(X) ] ≠[h(X) ] } ] where η^*(X) = μ^*(X) - 1/2. Under the case of point identification, the conditional mean function μ^*(X) is identified as r(X) = (DY, Z | X)/(D, Y | X). We then have w_point(X) := r(X) - 1/2 = η^*(X). In this case, we define the risk function origins from ℛ(h) as ℛ_point(h) := 𝔼[|w_point| ·𝕀{[w_point(X) ] ≠[h(X) ] }]. Under the case of partial identification, we have μ^*(X) ∈ [l(X), u(X)]. Let l̃(X) = l(X)-1/2 and ũ(X) = u(X) - 1/2, we then have η^*(X) ∈ [l̃(X), ũ(X)]. In this sense, we define partial risk from ℛ(h) as ℛ_partial(h) := 𝔼[max_η(X) ∈ [l̃(X), ũ(X)] |η(X)| ·𝕀{(η(X)) ≠(h(X)) }], which can be further written as ℛ_partial(h) = 𝔼[max_η(X) ∈ [l̃(X), ũ(X)] |η(X)| ·𝕀{[η(X)] ≠[h(X) ] }] = 𝔼[ 𝔼[ max_η(X) ∈ [l̃(X), ũ(X)] |η(X)| ·𝕀{[η(X)] ≠[h(X) ] }|l̃(X), ũ(X) ] ] = 𝔼[|ũ(X)| ·𝕀{1 ≠[(X) ] }·𝕀{l̃(X) > 0}] + 𝔼[|l̃(X)| ·𝕀{-1 ≠[h(X) ] }·𝕀{ũ(X) < 0}] + 𝔼[max(|ũ(X)| ·𝕀{1 ≠[h(X) ] }, |l̃(X)| ·𝕀{-1 ≠[h(X) ] }) ·𝕀{0 ∈ [l̃(X), ũ(X)] }] = 𝔼[|ũ(X)| ·𝕀{1 ≠[h(X) ] }·𝕀{l̃(X) > 0}] + 𝔼[|l̃(X)| ·𝕀{-1 ≠[h(X) ] }·𝕀{ũ(X) < 0}] + 𝔼[ |ũ(X)| ·𝕀{1 ≠[h(X) ] }·𝕀{0 ∈ [l̃(X), ũ(X)] }] + 𝔼[ |l̃(X)| ·𝕀{-1 ≠[h(X) ] }·𝕀{0 ∈ [l̃(X), ũ(X)] }] = 𝔼[|ũ(X)| ·𝕀{1 ≠[h(X) ] }·𝕀{ũ(X) > 0}] + 𝔼[|l̃(X)| ·𝕀{-1 ≠[h(X) ] }·𝕀{l̃(X) < 0}] = 𝔼[|ũ(X)| ·𝕀{1 ≠[h(X) ] }·𝕀{ũ(X) > 0}] + 𝔼[|l̃(X)| ·(1 - 𝕀{1 ≠[h(X) ] }) ·𝕀{l̃(X) < 0}] = 𝔼[|l̃(X)| ·𝕀{l̃(X) < 0 }] + 𝔼[ ( |ũ(X)| ·𝕀{ũ(X) > 0} - |l̃(X)| ·𝕀{l̃(X) < 0}) ·𝕀{1 ≠[h(X) ] }]. Let w_partial(X) = |ũ(X)| ·𝕀{ũ(X) > 0} - |l̃(X)| ·𝕀{l̃(X) < 0}, we have ℛ_partial(h) = 𝔼[|l̃(X)| ·𝕀{l̃(X) < 0 }] _(⋯) + 𝔼[w_partial(X) ·𝕀{1 ≠[h(X) ]}] = (…) + 𝔼[w_partial(X) ·1 - (h(X))/2] = (…) + 1/2𝔼[w_partial(X) ] + 1/2𝔼[- w_partial(X) ·[h(X) ] ] = (…) + 1/2𝔼[w_partial(X) ] + 1/2𝔼[|w_partial(X)| ·[-w_partial(X) ] ·[h(X) ] ] = (…) + 1/2𝔼[w_partial(X) ] + 1/2𝔼[|w_partial(X)| ·(2 𝕀{[w_partial(X) ] ≠[h(X) ]} - 1 ) ] = (⋯) + 𝔼[|w_partial(X)| ·𝕀{[w_partial(X) ] ≠[h(X) ] }]. Since the term (⋯) does not affect the optimization over h, we may drop it in the sequel. Therefore, with slight abuse of notation, we define the risk function under partial identification as ℛ_partial := 𝔼[|w_partial(X)| ·𝕀{[w_partial(X) ] ≠(h(X)) }], where w_partial(X) = |ũ(X)| ·𝕀{ũ(X) > 0} - |l̃(X)| ·𝕀{l̃(X) < 0} = max(u(X) - 1/2, 0) + min(l(X) - 1/2, 0). Overall, the expected risk function under point and partial identification can be written in an unified form as ℛh; w = w(X)𝕀{[w(X)] [h(X)]} with w_point(X) = r(X) - 1/2 and w_partial(X) = max(u(X) - 1/2, 0) + min(l(X) - 1/2, 0). § PROOFS FOR <REF> §.§ Decomposition of Excess Risk In this subsection, we will prove <Ref> in two steps. 
We first show that the excess risk ℛ_M(ĥ) - inf_hℛ_M(h) is boundned by the excess Φ-risk ℛ_Φ(ĥ; w_M) - min_hℛ_Φ(h; w_M) with respect to Hinge loss, logistic loss and exponential loss for M ∈{point, partial}. We then complete the proof of <Ref> by showing that the excess Φ-risk can be expressed in two parts. Denote ℛ_M^* = inf_h ℛ_M(h) and ℛ_Φ^* = inf_h ℛ_Φ(h; w_M). For any measurable function h and M ∈{point, partial}, the excess risk with respect to h is controlled by the excess Φ-risk: ℛ_M(h) - ℛ_M^*≤ℛ_Φ(h; w_M) - ℛ_Φ^*. Let h^* = _h ℛ_M(h) be the Bayes optimal function which minimizes the expected risk ℛ_M(h). For both M ∈{point, partial}, the excess risk with respect to measurable function h can be written into ℛ_M(h) - ℛ_M^* = 𝔼[ |w_M(X)| ·( 𝕀{[w_M(X)] ≠[h(X)] } - 𝕀{[w_M(X)] ≠[h^*(X)] }) ] = 𝔼[ |w_M(X)| ·𝕀{[h^*(X)] ≠[h(X)] }]. Here we use the fact that the Bayes optimal function h^*(X) should have the same sign as weight function w(X). Recall from (<ref>) that the Φ-risk is defined as ℛ_Φ(h; w) = 𝔼[|w(X)| ·Φ[w(X)] · h(X) ] and consider the surrogate loss functions: Hinge loss: Φ(α) = max{1 - α, 0} logistic loss: Φ(α) = log(1 + e^-α) exponential loss: Φ(α) = e^-α . Notice that for any Φ∈{Hinge, logistic, exponential}, the surrogate loss Φ(α) is lower-bounded (in fact, inf_αΦ(α) = 0), then we have inf_α𝔼[Φ(α(X) )] = 𝔼[inf_αΦ(α(X))]. Hence, for any given w(X), the Bayes risk ℛ_Φ^* can be formalized as ℛ_Φ^* = inf_h𝔼[ |w(X)| ·Φ( [w(X)] · h(X) ) ] = 𝔼[ inf_h |w(X)| ·Φ( [w(X)] · h(X) ) ], which suggests that the excess Φ-risk ℛ_Φ(h) - ℛ_Φ^* can be written as the expectation of |w(X)| ·[𝕀{w(X) ≥ 0}·( Φ(h(X)) - inf_αΦ(α(X) ) ) + 𝕀{w(X) < 0}·( Φ(-h(X)) - inf_αΦ(-α(X) ) ) ]. In the following contents, we compare the value of ℛ_M(h) - ℛ_M^* and ℛ_Φ(h) - ℛ_Φ^* by considering all possible value of (w(X), h(X)) for any fixed X = x. * Consider the case when w(x) ≥ 0, we have [h^*(x)] = [w(x)] = +1 and 𝕀{[h^*(x)] ≠[h(x)] } = 𝕀{1 ≠[h(x)] }. Meanwhile, for any Φ∈{Hinge, logistic, exponential}, we have inf_αΦ(α(x) ) = 0 and Φ(h(x) ) - inf_αΦ(α(x) ) = Φ(h(x)) - 0. * If h(x) ≥ 0, then Φ(h(x)) ≥ 0 holds for any Φ∈{Hinge, logistic, exponential}: Φ_Hinge(h(x) ) = (1 - |h(x)|)^+≥ 0 Φ_logistic(h(x) ) = log_2(1 + e^-|h(x)| ) ≥ 0 Φ_exponential(h(x) ) = e^-|h(x)|≥ 0 . Therefore, we have Φ(h(x) ) - inf_αΦ(α(x) ) ≥ 0 = 𝕀{1 ≠[h(x)] }. * If h(x) < 0, then Φ(h(x)) ≥ 1 holds for any Φ∈{Hinge, logistic, exponential}: Φ_Hinge(h(x) ) = (1 + |h(x)|)^+≥ 1 Φ_logistic(h(x) ) = log_2(1 + e^|h(x)| ) ≥log_2(1 + e^0) = 1 Φ_exponential(h(x) ) = e^|h(x)|≥ e^0 = 1 . Therefore, we have Φ(h(x) ) - inf_αΦ(α(x) ) ≥ 1 = 𝕀{1 ≠[h(x)] }. * We can do the similar analysis when w(x) < 0 and obtain the same conclusion that Φ( h(x) ) - inf_αΦ(α(x) ) ≥𝕀{[h^*(x)] ≠ h(x) }. Finally, by multiplying the common weight |w(X)| and taking the expectation over X, we conclude that ℛ_Φ(h; w_M) - ℛ_Φ^*≥ℛ_M(h) - ℛ_M^*. Therefore, when the excess Φ-risk is minimized, the original excess risk also attains its minimum. We are now ready to give the proof of Lemma <ref>. Notice that the excess Φ-risk defined in <Ref> can always be decomposed into ℛ_Φ(ĥ; w_M) - inf_hℛ_Φ(h; w_M) = ℛ_Φ(ĥ; w_M) - inf_h ∈ℋℛ_Φ(h; w_M) + inf_h ∈ℋℛ_Φ(h; w_M) - inf_hℛ_Φ(h; w_M). Here we explicitly write the risk function ℛ_Φ in terms of hypothesis function h as well as the weight function w_M to emphasize the role of weight function w(·) in the risk function. Finally, Combining the result of <Ref> with <ref>, we complete the proof of <Ref>. 
§.§ Decomposition of Estimation Error In this part, we further prove <Ref>. Let h_Φ^* = _h ℛ_Φ(h; w_M) be the Bayes optimal minimiser of expected Φ-risk, while denote h_Φ, ℋ^*∈_h ∈ℋℛ_Φ(h; w_M) be the best-in-class minimiser of expected Φ-risk. We can now write the estimation error ℰ(ℋ) as ℰ(ℋ) = ℛ_Φ(ĥ; w_M) - inf_h ∈ℋℛ_Φ(h; w_M) = ℛ_Φ(ĥ; w_M) - ℛ_Φ(h_Φ, ℋ^*; w_M) = [ ℛ_Φ(ĥ; w_M) - ℛ_Φ(h_Φ^*; w_M) ] _Part I + [ ℛ_Φ(h_Φ^*; w_M) - ℛ_Φ(h_Φ, ℋ^*; w_M) ] _Part II. Note that h_Φ^* is the global optimal of expected Φ-risk, we have Part II≤ 0. Before we go for Part I, we define ℛ_Φ(h) := 1/K∑_k=1^Kℛ_Φ(h; ŵ_k) = 1/K∑_k=1^K𝔼[L_Φ(h; ŵ_k)(X)] ≥ 0 ℛ_Φ(h) := 1/K∑_k=1^Kℛ_Φ^k(h; ŵ_k) = 1/K∑_k=1^K1/n_k∑_i ∈ I_k L_Φ(h; ŵ_k) ≥ 0. One may notice that ĥ defined in <ref> is the best-in-class minimiser of ℛ_Φ(h). As a result, for Part I we have Part I = [ ℛ_Φ(ĥ; w_M) - ℛ_Φ(ĥ) ] + [ ℛ_Φ(ĥ) - ℛ_Φ(h_Φ^*) ] + [ ℛ_Φ(h_Φ^*) - ℛ_Φ(h_Φ^*; w_M ) ] ≤ 2 sup_h ∈ℋ| ℛ_Φ(h) - ℛ_Φ(h; w_M) | + [ ℛ_Φ(ĥ) - ℛ_Φ(h_Φ^*) ] = 2 sup_h ∈ℋ|1/K∑_k=1^Kℛ_Φ(h; ŵ_k) - ℛ_Φ(h; w_M) | + [ ℛ_Φ(ĥ) - ℛ_Φ(h_Φ^*) ] ≤2/K∑_k=1^Ksup_h ∈ℋ|ℛ_Φ(h; ŵ_k) - ℛ_Φ(h; w_M) | + [ ℛ_Φ(ĥ) - ℛ_Φ(h_Φ^*) ]_(**). The establishment of the last inequality is guaranteed by Jensen's inequality and the convexity of absolute value function |·|, at the same time, pointwise supremum over ℋ preserve such convexity. In fact, one may notice that the second term (**) above is indeed bounded by the supremum of an empirical process. To see this, notice that (**) = [ ℛ_Φ(ĥ) - ℛ_Φ(ĥ) ] + [ ℛ_Φ(ĥ) - ℛ_Φ(h_Φ^*) ] + [ ℛ_Φ(h_Φ^*) - ℛ_Φ(h_Φ^*) ] ≤sup_h ∈ℋ| ℛ_Φ(h) - ℛ_Φ(h) | + 0 + sup_h ∈ℋ| ℛ_Φ(h) - ℛ_Φ(h) | = 2 sup_h ∈ℋ|1/K∑_k=1^K[ ℛ_Φ(h; ŵ_k) - ℛ_Φ(h; ŵ_k) ] | ≤2/K∑_k=1^Ksup_h ∈ℋ| ℛ_Φ(h; ŵ_k) - ℛ_Φ(h; ŵ_k) |. The first inequality holds because we have assumed ĥ to be the minimiser of empirical Φ-risk ℛ(h) over ℋ in (<ref>), while the reasons for the establish of the last inequality are exactly the same as what we analyze in <ref>. Combining the results from <ref> to <ref> leads to the bound of the estimation error ℰ(ℋ): ℰ(ℋ) ≤2/K∑_k=1^Ksup_h ∈ℋ|ℛ_Φ(h; ŵ_k) - ℛ_Φ(h; w_M) | _Supremum of Nuisance Estimation Error + 2/K∑_k=1^Ksup_h ∈ℋ| ℛ_Φ(h; ŵ_k) - ℛ_Φ^k(h; ŵ_k) | _Supremum of Empirical Process. §.§ Generalization Error Bounds In this final subsection, we will prove <Ref> under <Ref>. We will first demonstrate the generalization bound of estimation error ℰ(ℋ) by introducing the following lemma. We then incorporate the approximation error 𝒜(ℋ) into <Ref> and then derived the final theoretical guarantee of excess risk. The complete proof of <Ref> is given at the end of this subsection. Throughout this section, we let K be a fixed number much smaller than n, so all n_k = I_k's are of the same order of n. To begin with, we introduce our first conclusion in <Ref>, which upper-bounds the supremum of nuisance estimation term in <Ref> over h ∈ℋ. Assume that <Ref> hold. For any function h ∈ℋ, k ∈ [K] for a fixed K and Φ∈{Hinge, logistics, exponential}, we have sup_h ∈ℋ|ℛ_Φ(h; ŵ_k) - ℛ_Φ(h; w_M) | = 𝒪(n^- γ). To simplify the notations, we define a new loss function L_Φ(h; w)(X) := |w(X)| ·Φ([w(X)] · h(X)) ∈ℝ^+, and the empirical risk function defined in <Ref> is then ℛ_Φ^k(h; ŵ_k) = 1/n_k∑_i ∈ I_k L_Φ(h; ŵ_k)(X_i), where I_k denotes the index set of k-th fold in cross-fitting and ŵ_k is an estimate of the weight function using sample other than the k-th fold. 
For each h ∈ℋ and k ∈ [K], the difference of expected Φ-risk with respect to distinct nuisance functions is the expectation of the following: L_Φ(h; ŵ_k)(X) - L_Φ(h; w_M)(X) = |ŵ_k(X)| ·Φ( [ŵ_k(X)] · h(X) ) - |w_M(X)| ·Φ( [w_M(X)] · h(X) ). For any fixed X = x, we have the following bounds on surrogate loss Φ(·) with respect to h(x): * When h(x) ≥ 0, we have 0 < Φ(h(x)) = { e^-h(x)≤ 1 exponential loss (1 - h(x))^+≤ 1 Hinge loss log_2(1 + e^-h(x)) ≤ 1 logistic loss .. * When h(x) < 0, we have 1 < Φ(h(x)) = { e^-h(x)≤ e^h(x)_∞ exponential loss (1 - h(x))^+≤ 1 + h(x)_∞ Hinge loss log_2(1 + e^-h(x)) ≤ 1 + h(x)_∞log_2 e logistic loss .. According to <Ref>, we have h_∞≤ C_5, which implies 1 < Φ(h(x)) = { e^-h(x)≤ e^C_5 exponential loss (1 - h(x))^+≤ 1 + C_5 Hinge loss log_2(1 + e^-h(x)) ≤ 1 + C_5 log_2 e logistic loss .. From the analysis above, we can see that Φ(h(x)) is uniformly bounded under <Ref>. Consequently, we can now upper-bound the compound surrogate risk function Φ([w(x)] · h(x)) by enumerating the possible sign of functions ŵ_k(x) and w_M(x). * Case 1: Suppose ŵ_k(x) ≥ 0 and w_M(x) ≥ 0, we have that for any Φ∈{Hinge, logistic, exponential}, L_Φ(h; ŵ_k)(x) - L_Φ(h; w_M)(x) = ( |ŵ_k(x)| - |w_M(x)| ) ·Φ(h(x)) ≤( |ŵ_k(x) - w_M(x)| ) ·Φ(h(x)) = 𝒪(|ŵ_k(x) - w_M(x)| ). * Case 2: Suppose ŵ_k(x) ≥ 0 while w_M(x) < 0, we have that for any Φ∈{Hinge, logistic, exponential}, L_Φ(ŵ, h; x) - L_Φ(w, h; x) = |ŵ(x)| ·Φ( h(x) ) - |w(x)| ·Φ( -h(x) ) ≤ŵ(x) ·Φ(h(x)) - w(x) ·Φ(-h(x)) ≤(ŵ_k(x) - w_M(x) ) ·max{Φ(h(x)), Φ(-h(x)) } = 𝒪(|ŵ_k(x) - w_M(x)| ). The first inequality holds because we assume w(x) < 0, and the last inequalities hold because Φ(·) is uniformly bounded over ℋ. * Case 3: Suppose ŵ_k(x) < 0 and w_M(x) ≥ 0, the analysis procedure is similar to Case 2 and we can get similar results by symmetry. * Case 4: Suppose ŵ_k(X) < 0 and w_M(x) < 0, the analysis procedure is similar to Case 1 and we can get similar results by symmetry. We can see from the above analysis that <ref> is always upper-bounded by |w(x) - ŵ(x)| up to a finite constant. Combining the results above with <Ref>, which suggests that 𝔼|ŵ_k(X) - w_M(X)| ≤ C_4 n^- γ, we have sup_h ∈ℋ|ℛ_Φ(h; ŵ_k) - ℛ_Φ(h; w_M) | = sup_h ∈ℋ|𝔼[ L_Φ(h; ŵ_k)(X) - L_Φ(h; w_M)(X) ] | ≤sup_h ∈ℋ𝔼|L_Φ(h; ŵ_k)(X) - L_Φ(h; w_M)(X) | ≤𝔼[ sup_h ∈ℋ|L_Φ(h; ŵ_k)(X) - L_Φ(h; w_M)(X) | ] = 𝒪( n^- γ) for any Φ∈{Hinge, logisitic, exponential}. To facilitate our analysis on the generalization error bound of empirical process defined in <Ref>, we denote a new variable W = (Y, D, X, Z) over the domain 𝒲 = 𝒴×𝒳×𝒟×𝒵 with unknown distribution P and a function g_k(W) = L_Φ(h; ŵ_k)(X) in a function class 𝒢_k= {W ↦ L_Φ(h; ŵ_k)(X) | h ∈ℋ}∈ℝ^+. The empirical process defined in <Ref> can then be expressed as sup_h ∈ℋ|ℛ_Φ^k(h; ŵ_k) - ℛ_Φ(h; ŵ_k) | = sup_g ∈𝒢|1/n_k∑_i ∈ I_k g_k(W_i) - 𝔼[g_k(W)] |, ∀ k ∈ [K]. Given that h ∈ℋ is uniformly bounded on x ∈𝒳 according to <Ref> and Y is binary, one can justify the new function g_k ∈𝒢_k is also uniformly bounded over w ∈𝒲. Now, given the i.i.d. sample 𝒮_n = {W_1, …, W_n} drawn from P, the empirical Rademacher complexity and Rademacher complexity with respect to 𝒢_k can be defined as R_𝒮_n(𝒢_k) = 𝔼_σ[sup_g_k ∈𝒢_k1/n∑_i = 1^nσ_i g_k(W_i) ], R_n(𝒢_k) = 𝔼_σ, P[sup_g_k ∈𝒢_k1/n∑_i = 1^nσ_i g_k(W_i) ]. where σ = (σ_1, …, σ_n)^⊤ is a vector of i.i.d Rademacher variables taking values in {+1, -1 }. According to <cit.>, we can use the Rademacher complexity to upper bound the empirical process in <Ref>. 
Without loss of generality, we assume g_k is uniformly bounded by 1 over 𝒢_k, that is, 0 ≤ g_k(W) ≤ 1 for any g_k ∈𝒢_k, k = 1, …, K. Then, with probability at least 1 - δ, for all k ∈ [K], sup_g_k ∈𝒢_k[1/n_k∑_i ∈ I_k g_k(W_i) - 𝔼[g_k(W)] ] ≤ 2 R_𝒮_n(𝒢_k) + 3 √(2 log(1/δ)/n). With <Ref>, once we upper-bound the empirical Rademacher complexity with respect to 𝒢_k for k ∈ [K], we could get the convergence rate of sup_h ∈ℋ|ℛ_Φ^k(h; ω̂_k) - ℛ_Φ(h; ω̂) |. To further bound R_𝒮_n(𝒢_k) for k ∈ [K], we use the chaining technique with covering numbers <cit.>. Given the function class 𝒢_k, a ϵ-cover of 𝒢_k with respect to a pseduo-metric ρ is a set {g^(1), …, g^(N)}⊂𝒢_k such that for each g_k ∈𝒢_k, there exists some i ∈{1, …, N} such that the pseudo-metric ρ(g_k, g^(i)) ≤ϵ. The ϵ-covering number N(ϵ; 𝒢_k, ρ) is the cardinality of the smallest ϵ-cover. In brief, a ϵ-covering can be visualized as a collection of balls of radius ϵ that cover the set 𝒢_k and the ϵ-covering number is the minimial number of these balls. In our case, we equip the pseudo-metric ρ(g, g̃) with sup-norm over, namely, ρ(g, g̃) = g - g̃_∞ := sup_w ∈𝒲 |g(w) - g̃(w)|. Following the approach of <cit.>, we introduce the Dudley's entropy integral bound for the empirical Rademacher complexity as below. Define C_sup := sup_g, g̃∈𝒢_kg - g̃_∞, the empirical Rademacher complexity is upper-bounded by R_𝒮_n(𝒢_k) ≤inf_ϵ∈ [0, C_sup/2]{4 ϵ + 12/√(n)∫_ϵ^C_sup/2 dν√(log N(ν; 𝒢_k, ρ) )}. The proof of <Ref> are postponed to <Ref>. Finally, given the entropy condition of covering number N(ϵ; 𝒢_k, ·_∞) in <Ref>, we can give the following bound on the empirical process. When 0 < τ < 2, with probability at least 1 - δ, we have sup_h ∈ℋ|ℛ_Φ^k(h; ŵ_k) - ℛ_Φ(h; ŵ_k) | ≤𝒪( n^-1/2 √(log(1/δ) )) , ∀ k ∈ [K]. To simplify the notation, here we omit the subscript k in weight function ŵ, function g and class 𝒢, as well as the superscript k in empirical Φ-risk function ℛ_Φ. As mentioned previously, we replace n_k with n. By the definition of function class 𝒢 and Lemma <ref>, we have sup_h ∈ℋ|ℛ_Φ(h; ŵ) - ℛ_Φ(h; ŵ) | = sup_g ∈𝒢|1/n∑_i=1^n g(W_i) - 𝔼[g(W)] | ≤ 2 R_𝒮_n(𝒢) + 3 √(2 log(1/δ)/n). Based on the Dudley's entropy bound in Lemma <ref> and Assumption <ref>, we have sup_h ∈ℋ|ℛ_Φ(h;ŵ) - ℛ_Φ(h;ŵ) | ≤inf_ϵ∈ [0, C_sup/2]{8 δ + 24/√(n)∫_ϵ^C_sup/2 dν√(log N(ν; 𝒢, ρ ) )} + 3 √(2 log(1/δ)/n ) ≤inf_ϵ∈ [0, C_sup/2]{8 ϵ + 24/√(n)∫_δ^C_sup/2ν^- τ/2 dν} + 3 √(2 log(1/δ)/n ) ≤inf_ϵ∈ [0, C_sup/2]{8 ϵ + 48/(2 - τ) √(n)[(C_sup/2 )^1 - τ/2 - ϵ^1 - τ/2] } + 3 √(2 log(1/δ)/n ). By simply computation, we know the minimum value of RHS above is achieved at ϵ = 𝒪(n^-1/τ). Since h ∈ℋ is uniformlly bounded on x ∈𝒳 according to <Ref> and Y = y is binary, one can justify the new function g ∈𝒢 is also uniformlly bounded over w ∈𝒲 given any surrogate Φ∈{Hinge, logistic, exponential}. Therefore, C_sup := sup_g, g̃∈𝒢g - g̃_∞ is indeed a finite constant. Therefore, we have sup_h ∈ℋ|ℛ_Φ(ŵ; h) - ℛ_Φ(ŵ; h) | ≤𝒪(n^-1/τ + 1/2 - τ[n^-1/2 - n^-1/τ] + n^-1/2√(log(1 / δ))) ≤𝒪(n^-1/τ + 1/2 - τ n^-1/2 + n^-1/2√(log(1 / δ))) ≤𝒪(n^-1/2·(1 + √(log(1 / δ))) ) when 0 < τ < 2. Therefore, with probability at least 1 - δ and 0 < τ < 2, we have sup_h ∈ℋ|ℛ_Φ(h; ŵ) - ℛ_Φ(h; ŵ) | ≤𝒪(n^-1/2·(1 + √(log(1 / δ))) ). Combine with the results in <Ref>, we obtain the generalization bound of estimation error: ℰ(ℋ) = 𝒪( n^-γ + n^-1/2(1 + √(log(1/δ))) ). 
By Adding up the approximation error 𝒜(ℋ) in <Ref> and the generalization bound of ℰ(ℋ) above, we obtain the desired result, which is ℛ_M(ĥ) - min_hℛ_M(h) ≤𝒜(ℋ) + 𝒪( n^-γ + n^-1/2(1 + √(log(1/δ))) ). §.§ Technical Lemmas To simplify the notation, here we omit the subscript k in weight function ŵ as well as function g and class 𝒢. As mentioned previously, we replace n_k with n. Define G(W_1, …, W_n) := sup_g ∈𝒢[1/n∑_i=1^n g(W_i) - 𝔼[g(W_i) ] ]. We firstly bound the function G(W_1, …, W_n) using McDiarmid's inequality. To use McDiarmid's inequality, we firstly check that the bounded difference condition holds: G(W_1, …, W_i, …, W_n) - G(W_1, …, W_i', …, W_n) ≤sup_g ∈𝒢{1/n∑_j=1^n g(W_j) } - sup_g ∈𝒢{1/n∑_j ≠ i^n g(W_j) + 1/n g(W_i') } ≤sup_g ∈𝒢{1/n∑_j = 1^n g(W_j) - 1/n∑_j ≠ i^n g(W_j) - g(W_i') /n} = sup_g ∈𝒢{1/n(g(W_i) - g(W_i') ) }≤2/n . The second inequality holds because in general, sup_h A(f) - sup_f B(f) ≤sup_f [A(f) - B(f)], and the last inequality comes from the fact that 0 ≤ g(W) ≤ 1, ∀ g ∈𝒢 according to statement. We can thus apply McDiarmid's inequality with parameters c_1 = ⋯ = c_n = 2/n, ℙ(G(W_1, …, W_n) - 𝔼[G(W_1, …, W_n)] ≥ t ) ≤exp(- 2 t^2/∑_i=1^nc_i^2 ) = exp(- n t^2/2 ), and therefore, with probability 1 - δ G(W_1, …, W_n) ≤𝔼[G(W_1, …, W_n)] + √(2 log(1/δ)/n ). where here we set δ = exp(- n ρ_n t^2/2 ). By Symmetrization in <Ref>, we have 𝔼 [G(W_1, …, W_n)] ≤ 2 R_n(𝒢), which implies that sup_g ∈𝒢[1/n∑_i=1^n g(W_i) - 𝔼[g(Z)] ] ≤ 2 R_n(𝒢) + √(2 log(1/δ)/n ). Next, we bound the expected Rademacher complexity R_n(𝒢) through empirical Rademacher complexity R_S_n(𝒢) via McDiarmid inequality again. To begin with, we define a new function G̃(W_1, …, W_n) = R_𝒮_n(𝒢) := 𝔼_σ[sup_g ∈𝒢1/n∑_i=1^nσ_i g(W_i) ], Again, we find that G̃ also satisfies the bounded difference condition: G̃(W_1, …, W_i, …, W_n) - G̃(W_1, …, W_i', …, W_n) ≤𝔼[ sup_g ∈𝒢{1/n∑_j=1^n h(W_j) } - sup_g ∈𝒢{1/n∑_j ≠ i^n g(W_j) + 1/n g(W_i') }] ≤𝔼[ sup_g ∈𝒢{1/n∑_j = 1^n g(W_j) - 1/n∑_j ≠ i^n g(W_j) - g(W_i') /n}] = 𝔼[ sup_g ∈𝒢{1/n(g(W_i) - g(W_i') ) }] ≤2/n . Applying the McDiarmid's inequality with parameter c_1 = ⋯ = c_n = 2/n leads to the following result ℙ( G̃(W_1, …, W_n) - 𝔼[G̃(W_1, …, W_n)] ≥ t ) ≤exp(- 2 t^2/∑_i=1^nc_i^2 ) = exp(- n t^2/2 ). Similarly, by letting δ = exp (- n t^2/2 ), we have with probability at least 1 - δ, 𝔼[ G̃(W_1, …, W_n)] ≤G̃(W_1, …, W_n) + √(2 log(1/δ)/n ), namely, R_n(𝒢) ≤R_𝒮_n(𝒢) + √(2 log(1/δ)/n ). Finally, with probability 1 - δ, we have sup_g ∈𝒢[1/n∑_i=1^n g(W_i) - 𝔼[g(Z)] ] = G(W_1, …, W_n) ≤𝔼[G(W_1, …, W_n)] + √(2 log(1/δ)/n ) ≤ 2 R_n(𝒢) + √(2 log(1/δ)/n ) ≤ 2 (R_𝒮_n(𝒢) + √(2 log(1/δ)/n )) + √(2 log(1/δ)/n ) = 2 R_𝒮_n(𝒢) + 3 √(2 log(1/δ)/n ). To simplify the notation, here we omit the subscript k in weight function ŵ as well as function g and class 𝒢. We now prove the argument by constructing a sequence of ϵ_j-cover with decreasing radius ϵ_j for j = 1, 2, …, n. For each j ∈ℕ_+, let ϵ_j := C_sup / 2^j and 𝒞_j ⊂𝒢 be a minimal ϵ_j-cover of metric space (𝒢, ·_∞) with size |𝒞_j| = N(ϵ_j; 𝒢, ·_∞ ). For any g ∈𝒢 and j ∈ℕ_+, we can find some g_j ∈𝒞_j such that such that ρ(g, g^[j]) ≤ϵ_j, where the metric ρ(·, ·) is defined in <ref>. The sequence g_1, g_2, … converges towards g. This sequence can be used to define the following telescoping sum of any g ∈𝒢: for any given m ∈ℕ too be chosen later, we have g = g - g^[m] + ∑_j=1^m (g^[j] - g^[j-1]) with g^[0] := 0. 
Upon these, the empirical Rademacher complexity can be written as R̂_𝒮_n(𝒢) = 𝔼_σ[sup_g ∈𝒢1/n∑_i=1^nσ_i g(W_i) ] = sup_g ∈𝒢𝔼_σ[ 1/n∑_i=1^nσ_i g(W_i) ] ≤𝔼_σ[ sup_g, g^[m]∈𝒢 ρ(g, g^[m]) ≤ϵ_m1/n∑_i=1^nσ_i (g(W_i) - g^[m](W_i)) ] + 𝔼_σ[ ∑_j=1^msup_g^[j], g^[j-1]∈𝒢1/n∑_i=1^nσ_i (g^[j](W_i) - g^[j-1](W_i) ) ]. Next, we bound the two summands separately. The first summand is bounded by ϵ_m as 𝔼_σ[ sup_g, g^[m]∈𝒢 ρ(g, g^[m]) ≤ϵ_m1/n∑_i=1^nσ_i (g(W_i) - g^[m](W_i)) ] ≤sup_g, g^[m]∈𝒢 ρ(g, g^[m]) ≤ϵ_m1/n∑_i=1^n| g(W_i) - g^[m](W_i) | ≤sup_g, g^[m]∈𝒢 ρ(g, g^[m]) ≤ϵ_mg - g^[m]_∞≤ϵ_m. We now bound the second argument in <ref>. For each j ∈ℕ, notice that there are at most |𝒞_j| · |𝒞_j-1| different ways to create a vector in ℝ^n of the form (g^[j](W_1) - g^[j-1](W_1), ⋯, g^[j](W_n) - g^[j-1](W_n) ) with g^[j]∈𝒞_j and g^[j-1]∈𝒞_j-1. Let 𝒞 = ⋃_j=1^m𝒞_j be the union of all covers, which is a finite subset of 𝒢. By using Lemma <Ref> (Massart's lemma), the second summand in <ref> can be upper bounded by ∑_j=1^m𝔼_σ[sup_g^[j], g^[j-1]∈𝒢1/n∑_i = 1^nσ_i (g^[j](W_i) - g^[j-1](W_i)) ] = ∑_j=1^m𝔼_σ[sup_g^[j], g^[j-1]∈𝒞1/n∑_i = 1^nσ_i (g^[j](W_i) - g^[j-1](W_i)) ] ≤ ∑_j=1^msup_g^[j], g^[j-1]∈𝒞√(∑_i=1^n (g^[j](W_i) - g^[j-1](W_i))^2 )·√(2 log |𝒞_j| · |𝒞_j-1| )/n = ∑_j=1^msup_g^[j], g^[j-1]∈𝒞g^[j] - g^[j-1]_∞·√(2 log |𝒞_j| · |𝒞_j-1| /n). With the triangular inequality of sup-norm, for any j, j-1 ∈ℕ^+, we have (using the fact ϵ_k - 1 = 2 ϵ_k) ||g^[j] - g^[j-1]||_∞ ≤g^[j] - g _∞ + g - g^[j-1]_∞ ≤ϵ_j + ϵ_j-1 = (1 + 2) ϵ_j = 6 (ϵ_j - ϵ_j+1). Moreover, since ϵ_j = ϵ_j-1/2, we have |𝒞_j-1| ≤ |𝒞_j|. Putting <ref> and the results above together, we have R̂_𝒮_n(𝒢) ≤ϵ_m + 12 ∑_j=1^m (ϵ_j - ϵ_j+1) √(log N(ϵ_j; 𝒢, ρ)/n) ≤ 2 ϵ_m+1 + 12 ∫_ϵ_m+1^C_sup/2 d ν√(log N(ν; 𝒢, ρ)/n), where the last inequality follows as the integral is lower-bound by its lower Riemann sum as the function ν↦ N(ν; 𝒢, ρ ) is decreasing. For any ϵ∈ [0, C_sup]/2, choose m such that ϵ < ϵ_m+1≤ 2 ϵ. The statement of the theorem thus follows by taking the infimum over ϵ∈ [0, C_sup/2]. Given the Rademacher complexity of 𝒢 defined in equation <ref>, we have 𝔼[sup_g ∈𝒢(1/n∑_i=1^n g(W_i) - 𝔼[g(W)] ) ] ≤ 2 R_n(𝒢) and 𝔼[sup_g ∈𝒢(𝔼[g(W)] - 1/n∑_i=1^n g(W_i) ) ] ≤ 2 R_n(𝒢). To simplify the notation, here we omit the subscript k in weight function ŵ as well as function g and class 𝒢. Now, let 𝒮_n' = {Z_1', …, Z_n'} be an independent copy of the data 𝒮_n = {Z_1, …, Z_n}. Let (σ_i)_i ∈{1, …, n} be i.i.d. Rademacher random variables, which are also independent of 𝒮_n and 𝒮_n'. Using that for all i ∈{1, …, n}, 𝔼[g(W_i') |𝒮_n ] = 𝔼[g(W)], we have 𝔼[sup_g ∈𝒢(𝔼[g(W)] - 1/n∑_i=1^ng(W_i) ) ] = 𝔼[sup_g ∈𝒢(1/n∑_i=1^n𝔼[g(W_i') |𝒮_n] - 1/n∑_i=1^n g(W_i) ) ] = 𝔼[sup_g ∈𝒢(1/n∑_i=1^n𝔼[g(W_i') - g(W_i) |𝒮_n ] ) ] by the definition of independent copy 𝒮_n'. Then using that the supremum of the expectation is less than expectation of the supremum, 𝔼[sup_g ∈𝒢(𝔼[g(W)] - 1/n∑_i=1^ng(W_i) ) ] ≤𝔼[𝔼( sup_g ∈𝒢(1/n∑_i=1^n[g(W_i') - g(W_i) ] ) |𝒮_n ) ] = 𝔼[sup_g ∈𝒢(1/n∑_i=1^n[g(W_i') - g(W_i) ] ) ] = 𝔼[sup_g ∈𝒢(1/n∑_i=1^nσ_i [g(W_i') - g(W_i) ] ) ] (Symmetrization) ≤𝔼[sup_g ∈𝒢(1/n∑_i=1^nσ_i g(W_i) ) ] + 𝔼[sup_g ∈𝒢(1/n∑_i=1^n -σ_i g(W_i) ) ] = 2 𝔼[sup_g ∈𝒢(1/n∑_i=1^nσ_i g(W_i) ) ] = 2 R_n(ℋ). The reasoning is essentially identical for 𝔼[sup_g ∈𝒢(𝔼[g(W)] - 1/n∑_i=1^n g(W_i) ) ] ≤ 2 R_n(ℋ). 
Let A ⊂ℝ^n be a finite set, with r = sup_X ∈ A ||X||_2, then the following holds: 𝔼_σ[sup_X ∈𝒜1/n∑_i=1^nσ_i X_i ] ≤r √(2 log |A|)/n, where σ_i's are independent Rademacher variables taking values in {-1, +1} and X_1, …, X_n are the components of vector X. For any t > 0, using Jensen's inequality, rearranging terms, and bounding the supremum by a sum, we obtain exp(t ·𝔼_σ[sup_X ∈ A∑_i=1^nσ_i X_i ] ) ≤𝔼_σ[exp(t sup_X ∈ A∑_i=1^nσ_i X_i ) ] = 𝔼_σ[sup_X ∈ Aexp(t ∑_i=1^nσ_i X_i ) ] ≤∑_X ∈ A𝔼_σ[exp(t ∑_i=1^n σ_i X_i ) ]. We next use the independence of the σ_i's, then apply the bound 𝔼[e^s(X - μ)] ≤exp((b - a)^2/8 s^2 ), which is known as Hoeffding's lemma, and the definition of radius r to write: ∑_X ∈ A∏_i=1^n𝔼[exp(t σ_i X_i) ] ≤∑_X ∈ A∏_i=1^nexp(t^2 (2 X_i)^2/8) = ∑_X ∈ Aexp(t^2/2∑_i=1^n X_i^2 ) ≤∑_X ∈ Aexp(t^2 r^2/2) = |A| ·exp(t^2 r^2/2). Taking the logarithm on both sides and dividing by t yields: 𝔼_σ[sup_X ∈ A∑_i=1^nσ_i X_i ] ≤log |A|/t + t r^2/2. Notice that such inequality holds for every t, we can minimize over t and get t = √(2 log |A|)/r and get 𝔼_σ[sup_X ∈ A∑_i=1^nσ_i X_i ] ≤ r √(2 log |A|). Dividing both sides by n leads to the desired result. § ADDITIONAL NUMERICAL EXPERIMENT RESULTS Recall in <Ref> we synthetically create selective labels on top of the FICO dataset. Specifically, we simulate 10 decision-makers and randomly assign one to each case. We also generate the decision D (which determines whether the label of this case is missing or not) from a Bernoulli distribution with parameter p_D given as follows: Model 1: p_D = β·( α·expit{U } + (1 - α) ·expit{ (1 + Z) ·ExternalRisk}), Model 2: p_D = β·( expit{α· U + (1 - α) · (1 + Z) ·ExternalRisk}). Here the expit function is given by expit(t) = 1/(1 + exp(-t)). The parameter α∈ (0, 1) controls the impact of U on the labeling process and thus the degree of selection bias, and the parameter β∈ (0, 1) further adjusts the overall label missingness. In this section, we conduct additional experiments under different degree of missingness β = [1.0, 0.5, 0.25] as mentioned previously. <Ref> report the testing accuracy of each method in 50 replications with α∈{0.5, 0.7, 0.9 } under the case of β = 0.5 and β = 0.25 respectively. As the parameter β decreases, the size of the labeled dataset becomes smaller, and the performance of our methods and the "selected sample" baseline all degrades. In spite of this, both "point learning" and "partial learning" methods dominates the "selected sample" baseline on average. Interestingly, the "partial learning" method has better performance than "point learning" with higher average accuracy and lower variance. Finally, we remark that the decision-maker assignment Z plays different roles in different methods. Our proposed methods ("point learning" and "partial learning") treatment the decision-maker assignment Z as an instrumental variable to correct for selection bias. In contrast, the baseline methods ("selected sample" and "full sample") do not necessarily need Z. However, for a fair comparison between our proposals and the baselines, we still incorporate Z as a classification feature in the baseline methods, so they also use the information of Z.
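For completeness, a small sketch of the synthetic labeling mechanism used in these experiments is given below. It only mirrors Models 1 and 2 above; the function and variable names, as well as the 0-based coding of the ten decision-makers, are our own illustrative choices.

```python
import numpy as np
from scipy.special import expit

def simulate_selective_labels(Y_star, U, external_risk, alpha=0.7, beta=1.0,
                              n_decision_makers=10, model=1, seed=0):
    """Synthetic selective labeling on top of a fully labeled dataset,
    mirroring Models 1 and 2 above. Decision-makers are coded 0, ..., 9 here.
    Returns the assignment Z, the decision D, and the observed outcome Y
    (blinded, i.e. set to NaN, whenever D = 0)."""
    rng = np.random.default_rng(seed)
    n = len(Y_star)
    Z = rng.integers(0, n_decision_makers, size=n)    # random case assignment
    if model == 1:
        p_D = beta * (alpha * expit(U) + (1 - alpha) * expit((1 + Z) * external_risk))
    else:
        p_D = beta * expit(alpha * U + (1 - alpha) * (1 + Z) * external_risk)
    D = rng.binomial(1, p_D)
    Y = np.where(D == 1, Y_star, np.nan)
    return Z, D, Y
```

With β fixed and α increasing, the simulated decisions depend more strongly on the unobservable U, which is what drives the growing gap between the IV-based methods and the "selected sample" baseline reported above.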
http://arxiv.org/abs/2306.09107v1
20230615130323
Characteristic THz-emissions induced by optically excited collective orbital modes
[ "Sangeeta Rajpurohit", "Christian Jooss", "Simone Techert", "Tadashi Ogitsu", "P. E. Blöchl", "L. Z. Tan" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
[email protected] Molecular Foundry, Lawrence Berkeley National Laboratory, USA Institute for Material Physics, Georg-August-Universität Göttingen, Germany Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany Institute of X-ray Physics, Georg-August-Universität Göttingen, Germany Lawrence Livermore National Laboratory, Livermore, USA Institute for Theoretical Physics, Clausthal University of Technology, Germany Institute for Material Physics, Georg-August-Universität Göttingen, Germany Molecular Foundry, Lawrence Berkeley National Laboratory, USA We study the generation of collective orbital modes, their evolution, and the characteristic nonlinear optical response induced by them in a photoinduced orbital-ordered correlated oxide using real-time simulations based on an interacting multiband tight-binding (TB) model. The d-d optical transitions under femtoseconds light-pulse in a orbital-ordered state excite collective orbital modes, also known as "orbitons". The consistent inclusion of electronic interactions and the coupling between charge, spin, and lattice degrees of freedom in the TB-model provide a better understanding of the underlying mechanisms of the collective orbital modes. The dynamics of Jahn-Teller vibrational modes in the photoinduced state modify the intersite orbital interaction, which further amplifies these orbital modes. In the presence of weak ferroelectricity, the excitation of collective orbital modes induces a strong THz oscillatory photocurrent which is long-lived. This suggests an alternative way to experimentally detect quasiparticles describing such collective modes through THz-emission studies in the photoinduced state. Our study also elucidates that quasiparticle dynamics in improper ferroelectric oxides can be exploited to achieve highly interesting and non-trivial optoelectronic properties. Characteristic THz-emissions induced by optically excited collective orbital modes L.Z. Tan July 31, 2023 ================================================================================== Transition-metal oxides (TMOs) belong to the strongly correlated material class that hosts a variety of exotic ordered phases. The low-lying excitations in these ordered states are dictated by quasiparticles described by the corresponding collective modes such as polarons, magnons, and orbitons <cit.>. The fundamental knowledge of the nature of the quasiparticles, their dynamics, and decay is important to understand their impact on transport properties. In several TMOs, such as manganites, the splitting of d-orbitals due to electronic interactions and the Jahn-Teller (JT) effect results in the formation of orbital-ordered states. The interactions between orbitals at different sites in such materials allow propagation of local orbital excitation to the entire lattice, resulting in the formation of "orbitons". Similarly to collective spin-wave excitations in spin-ordered states, orbitons are elementary excitations in materials that host a long-range orbital-order pattern in the ground state <cit.>. Experimental observations of orbitons in materials remain challenging due to the non-negligible coupling between orbital, phonon, and spin degrees of freedom. Although orbiton excitation in orthorhomic manganites such as LaMnO_3, has been reported in a previous Raman scattering study <cit.>, the interpretation of several low-energy peaks as orbitons by the authors has not been supported by subsequent experiments. 
For example, resonating-inelastic X-ray scattering experiments could not capture the orbiton in the energy range suggested by earlier studies <cit.>. The complex interaction between orbitons and phonons in manganites is expected to create hybridized orbiton-phonon excitations instead of pure orbitons and phonons <cit.>. Several previous theoretical studies have demonstrated the crucial role of orbiton-phonon coupling, the hybridized character of the corresponding orbiton-phonon excitations, and the shift of the quasiparticle bands <cit.>. Ultrafast pump-probe experiments are powerful alternative tools for studying quasiparticle excitations and their dynamics by selectively modulating electronic, lattice, and spin degrees with optical excitations. The pathway from a photoinduced insulator to a metal transition in Pr_1-xCa_xMnO_3 following short intense optical pulses is accompanied by coherent orbital waves with ∼ 31 THz frequency <cit.>. In the present work, we theoretically study the collective orbital excitation, their evolution, and the characteristic nonlinear optical response induced by them using real-time simulations based on an interacting three-dimensional tight-binding (TB) model. The multiband TB-model employed explicitly takes into account interacting electrons and their coupling to spin and phonons, which is needed to accurately describe the collective modes of these degrees of freedom. We report low-lying orbital excitations in orbital-ordered states, which exhibit symmetry breaking in the orbital-space. In these states, the orbital super-exchange mechanism induced by the restrictive intersite hybridization gives rise to a non-local orbital interaction and allows propagation of the local orbital excitations as a collective mode. The reported collective orbital modes have characteristic frequencies of ∼5.0 and 90 THz. Interestingly, in the presence of weak ferroelectricity, the collective orbital modes induce a non-linear optical response in the form of THz emission. Previous studies indicate that the interaction between orbital degrees of freedom and phonons in manganites results in a mixed character of the orbiton-phonon excitations, which present a major challenge in experiments to disentangle them into orbitons and phonons. Our work presents an alternative way to experimentally detect the signature of collective orbital modes by exploiting the THz-emission generated by these quasiparticles in the presence of weak ferroelectricity. The spin- and orbital-order (OO) of the two orthorhombic manganites RMnO_3 considered in this study, A-type and E-type, are shown in Figure <ref> (a-b). In the A-type spin order (SO), the ab-planes are ferromagnetic. On the other hand, the ab-plane in the E-type SO consists of quasi-one-dimensional ferromagnetic zigzag chains that are antiferromagnetically aligned with each other. The adjacent ab-planes in both SOs are stacked antiferromagnetically on top of each other in the z-direction. To study orbital excitations, their evolution and decay in the A- and E-type magnetic orders of RMnO_3, we combined a 3d TB-model with real-time simulations. The model considers the octahedral crystal-field splitting of the Mn 3d-shell into three non-bonding t_2g-orbitals and two anti-bonding e_g-orbitals. The e_g orbitals are explicitly incorporated in the model, whereas the localized t_2g-electrons are treated as classical spins S⃗_R. 
In addition to e_g-electrons and t_2g-spins, we also take into account lattice degrees of freedom by considering the local breathing mode Q_1,R and Jahn-Teller (JT) active modes Q_2,R and Q_3,R oxygen octahedra. The potential energy of the system is expressed as E_pot(|ψ_n⟩,S⃗_R,Q_i,R) = E_e(|ψ_n⟩) +E_S(S⃗_R)+E_ph(Q_i,R) + E_e-ph(|ψ_n⟩,Q_i,R)+E_e-S(|ψ_n⟩,S⃗_R) in terms of one-particle states |ψ_n⟩=∑_σ,α,i |χ_σ,α,i⟩ψ_σ,α,i,n of e_g-electrons, t_2g-spin S⃗_R and oxygen octahedral phonon modes Q_i,R. The basis set |χ_σ,α,i⟩'s for the one-particle states consists of local spin orbitals with spin σ∈{↑,↓} and orbital character α∈. The E_e(|ψ_n⟩) term describes the delocalization of e_g-electrons between Mn sites and the on-site Coulomb interaction. The Hund's coupling tends to align spin of eg-electron along local t_2g-spin S⃗_R. The e_g electron density is coupled to the local octahedral breathing mode Q_1,R while the Jahn-Teller (JT) active modes Q_2,R and Q_3,R are coupled with the occupancies of the e_g orbitals. The term E_S(S⃗_R) is Heisenberg-like intersite antiferromagnetic coupling between t_2g-spins S⃗_R. For complete details of the model and its parameters, we refer to <cit.>. Unlike the previous simple 1d model-based studies of orbitons, the above TB-model takes into account electron-phonon, electron-electron, and electron-spin interaction on an equal footing, allowing for a systemic investigation of their combined effect on orbital physics in manganites. In perovskites manganites, an octahedral distortion around one Mn site is coupled to distortions around neighboring Mn sites, resulting in a cooperative lattice effect, allowing propagation of interaction between eg-orbitals at different Mn sites. Our model takes into account this cooperative nature of the octahedra distortion. Figure <ref> e-f shows the density of states in the ground state of the A- and E-types RMnO_3 calculated using the above TB model (Eq. <ref>) by optimizing the electronic and structural degree of freedom while keeping the t_2g-spin configuration fixed. In the ground state, all Mn-sites have the oxidation state 3+ and are JT active. The JT-effect at Mn^3+ sites lifts the degeneracy of the local e_g-orbitals, where the lower filled e_g-state |Θ_l⟩_R in the ground state is described by the linear combination |Θ_l⟩_R = -sin(γ)|d_x^2-y^2⟩±cos(γ)|d_3z^2-r^2⟩ of the Mn e_g-orbitals where R is the site index and γ=45^∘. The corresponding unoccupied states |Θ_u⟩_R are |Θ_u⟩_R = -sin(γ)|d_x^2-y^2⟩∓cos(γ)|d_3z^2-r^2⟩. Our model predicts both the A-type and E-type magnetic states as band-insulator with band-gap of 1.02 eV and 0.89 eV, respectively. This band gap arises predominantly from the JT-splitting at Mn^3+ sites and is highly sensitive to the amplitude of the JT modes. The local e_g-orbital polarizations form a long-range OO in the ground-state as shown in Figure <ref> a-b, where the orbital polarization alternates between the Mn sites along the x- and y-directions. In both A-type and E-type magnetic states, the lower eg-orbital |Θ_l⟩_R at every R^th Mn-site hybridizes with the upper |Θ_u⟩_R+m orbital located at the spin-aligned (R+m)^th site pointing towards the former. The E-type state is improper ferroelectric where ferroelectricity originates from the restricted Mn-Mn exchange interaction between spin-aligned Mn sites <cit.>. The charge-center between the spin-aligned Mn sites moves off the center, indicated by a yellow arrow in Figure <ref>-a. 
Our model calculations predict a net polarization of ∼ 18.2 μC/cm^2 along the b⃗ direction in the E-type state (more information in the SI). This bulk polarization value is slightly higher than the theoretical values predicted by first-principles studies <cit.>. The current model includes only the valence e_g-electrons and does not consider the polarization contributions from the core electrons, the A-site cations, and the oxygen anions. We demonstrate the generation of collective orbital modes in the photoinduced A-type and E-type states using the real-time time-dependent density-functional theory (TD-DFT) formalism based on the TB-model defined in Eq. <ref>. In our simulations, the effect of phonons is treated using Ehrenfest dynamics. The one-particle electron wavefunctions evolve under the time-dependent Schrödinger equation, whereas the oxygen atoms obey Newton's equations of motion. The t_2g-spins are kept fixed during the simulations. The electric field, defined by the vector potential A(t) = e⃗_s ω Im(A_o e^{-iω t}) g(t), where A_o, ω, and e⃗_s are the amplitude of the vector potential, the angular frequency, and the direction of the electric field, respectively, is implemented in our model through the Peierls substitution method <cit.>. A Gaussian pulse shape is imposed by the envelope g(t). We study the photocurrent dynamics under a finite Gaussian-shaped light pulse using an 8×8×2 supercell with 512 atoms, comprising 128 Mn and 384 oxygen atoms. The simulations are performed with periodic boundary conditions, k-point sampling at Γ, and temperature T=0. Figure <ref> (g-h) shows the spectral distribution of the optical absorption, obtained by calculating the photon-absorption density D_p (total number of photons absorbed per site). We compute D_p = δE_pot/ħω from the total change in energy, defined in Eq. <ref>, before and after a 30-fs Gaussian-shaped light pulse. The A-type (E-type) state exhibits a broad absorption peak around ω_p_1 = 1.30 (2.0) eV. We assign this peak to dipole-allowed intra-chain electronic transitions from the majority-spin |w_1⟩-states, which are bonding states of lower |Θ_l⟩ and upper |Θ_u⟩ orbitals pointing toward each other and located at spin-aligned neighboring sites, to the corresponding anti-bonding states |w_2⟩. We select ω_p_1 = 1.33 eV (1.79 eV) to simulate the evolution of optical excitations into collective orbital modes in the A-type (E-type) state under a 100-fs light pulse with light polarization in the ab-plane along a⃗ (a⃗ and b⃗). The results remain qualitatively similar for light polarization along a⃗ and b⃗ in the E-type state, and we discuss only the b⃗ case here. Figures <ref> (a-b) and <ref> (a-b) show the dynamics of the local orbital polarization and phonon modes during and after the 100-fs light pulse for different intensities ranging from A_o = 0.05 to 0.30 ħ/ea_o. To follow changes in the local orbital polarization, we define an orbital polarization vector P⃗_i for every site i, P_i,x = ∑_σ ( ρ_σ,α,i,σ,β,i + ρ_σ,β,i,σ,α,i ), P_i,y = ∑_σ ( -iρ_σ,α,i,σ,β,i + iρ_σ,β,i,σ,α,i ), P_i,z = ∑_σ ( ρ_σ,α,i,σ,α,i - ρ_σ,β,i,σ,β,i ). Here ρ̂ is the on-site one-particle reduced density matrix, calculated using the expression ρ_σ,α,i,σ',β,i(t) = δ_σ,σ' ∑_n f_n(t) ⟨ψ_n(t)|Θ_α,σ,i⟩ ⟨Θ_β,σ,i|ψ_n(t)⟩, where the indices α and β run over {|Θ_l⟩, |Θ_u⟩} (defined in Eqs. <ref> and <ref>) and f_n(t) is the instantaneous occupancy of the one-particle state |ψ_n(t)⟩. The calculated on-site polarization vector P⃗ has values (-0.18, 0.00, -0.75) and (0.11, 0.00, 0.70) in the E-type and A-type ground states, respectively.
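As a concrete illustration of how P⃗_i is obtained from the on-site reduced density matrix defined above, the following sketch evaluates (P_i,x, P_i,y, P_i,z) for a single site. The numerical density-matrix entries are invented for illustration only and do not correspond to the reported ground-state values.

```python
import numpy as np

# Sketch: on-site orbital polarization vector P_i from the spin-resolved 2x2
# orbital blocks of the reduced density matrix, basis {Theta_l, Theta_u} = {alpha, beta}.
def orbital_polarization(rho_blocks):
    """rho_blocks: iterable of 2x2 Hermitian arrays, one per spin channel."""
    Px = Py = Pz = 0.0
    for rho in rho_blocks:
        Px += (rho[0, 1] + rho[1, 0]).real
        Py += (-1j * rho[0, 1] + 1j * rho[1, 0]).real
        Pz += (rho[0, 0] - rho[1, 1]).real
    return np.array([Px, Py, Pz])

rho_up = np.array([[0.85, 0.05 - 0.02j],
                   [0.05 + 0.02j, 0.10]])       # mostly Theta_l occupied (toy numbers)
rho_dn = np.zeros((2, 2), dtype=complex)         # minority spin empty
print(orbital_polarization([rho_up, rho_dn]))    # -> [ 0.1, -0.04, 0.75 ]
```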
The electronic transitions during photoexcitation transfer electrons from the occupied |Θ_l⟩ to the empty |Θ_u⟩ orbitals, altering the local electron densities in the e_g-orbitals, which is reflected in the changes of the P_i,z component; see Figures <ref>-a and <ref>-a. This modification of the local orbital polarization excites the octahedral JT-modes that are strongly coupled to the e_g-orbital occupancies; see Figures <ref>-b and <ref>-b. Upon photoexcitation, the shorter (longer) O-O bond distances d_⊥ (d_||) in the ab-plane start to decrease (increase). The oscillations of the JT-mode are enhanced at higher light intensities. A careful analysis of the dynamics of the local P⃗_i vectors indicates oscillating components with frequencies other than those of the JT modes. The Fourier transform of P⃗_i(t) shows multiple peaks (see Figures <ref> c-e and <ref> c-e). Besides the strongest peak close to the light-field frequency, both the E-type and the A-type states exhibit a peak at 5.0 THz in the absence of atom dynamics. The A-type state exhibits another strong peak at ∼ 90 THz. We attribute these THz peaks to collective orbital modes excited in the photoinduced state. In the presence of atom dynamics, a distinct peak at ∼ 14 THz appears in both the A-type and E-type states (indicated by the black vertical arrow) due to the JT-mode dynamics. The low-energy spectrum of the orbital waves in the A-type and E-type states can be understood from the effective orbital super-exchange interactions originating from the hybridization between orbitally polarized sites. The weak orbital super-exchange interaction J^1 ∼ t''^2/(Δ^JT+U-3J), originating from the hybridization of the orthogonal e_g-orbitals |Θ_l⟩_R and |Θ_u⟩_R+m at the spin-aligned sites Mn_R and Mn_R+m, dictates the frequency of the collective orbital modes in the E-type phase. The frequency of the collective orbital modes in the A-type state depends on J^1 as well as on the relatively stronger orbital super-exchange interaction J^2 ∼ t^2/(Δ^JT+U-3J), induced by the hybridization t between e_g-orbitals with similar orbital polarization at nearby spin-aligned Mn-sites. The excited collective orbital modes can modulate the optical conductivity, which in turn can induce a strong nonlinear optical response in the form of low-frequency THz emission in the presence of ferroelectricity. To investigate the THz emission of ferroelectric E-type RMnO_3, we calculate the photocurrent generation and its evolution under a 100-fs light pulse. The instantaneous total photocurrent density j⃗^tot(t) is defined as j⃗^tot(t) = 1/V ∑_n f_n ∑_l ∑_{l' ∈ ⟨NN⟩} ∑_σ ∑_{α,β} ( ψ^*_σ,α,l,n(t) T'_α,β,l,l' ψ_σ,β,l',n(t) - ψ^*_σ,β,l',n(t) T'_β,α,l',l ψ_σ,α,l,n(t) ) e⃗_l-l'. In the above equation, V = d_Mn-Mn^3 N_R is the volume of the simulation cell containing N_R Mn-sites, with the average Mn-Mn bond length d_Mn-Mn = 3.845 Å. Here, f_n(t) is the instantaneous occupancy of the one-particle state |ψ_n⟩, e⃗_l-l' = (R⃗_l - R⃗_l')/|R⃗_l - R⃗_l'| is the unit vector along the direction joining the sites l and l', and T'_α,β,l,l' = T_α,β,l,l' e^{-iA⃗(t)·(R⃗_l - R⃗_l')}, where T_α,β,l,l' is the hopping matrix element between the e_g-orbitals α and β at sites l and l', respectively. Figure <ref> (a-b) shows the evolution of the integrated current ∫_0^t j^tot(t') dt' and the instantaneous current j^tot(t) at different intensities in the presence and absence of atom dynamics. In the absence of atom dynamics, the generation of a transient dc-current between 0 and 0.20 fs is followed by a long-lived THz oscillatory current.
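The peak assignments above amount to Fourier-transforming the simulated time traces. A minimal sketch of this analysis is given below; the synthetic signal (5 THz and 90 THz components plus noise) is a stand-in for the actual P⃗_i(t) or j^tot(t) output of the simulation.

```python
import numpy as np

# Sketch: locating collective-mode frequencies in a time trace, as done for P_i(t) and j_tot(t).
dt_fs = 0.5                                   # time step in femtoseconds
t = np.arange(0, 4000) * dt_fs                # 2 ps trace
sig = (0.8 * np.sin(2 * np.pi * 5e-3 * t)     # 5 THz  component (5e-3 cycles/fs)
       + 0.3 * np.sin(2 * np.pi * 9e-2 * t)   # 90 THz component (9e-2 cycles/fs)
       + 0.05 * np.random.randn(t.size))      # noise floor

freq_THz = np.fft.rfftfreq(t.size, d=dt_fs) * 1e3      # cycles/fs -> THz
spec = np.abs(np.fft.rfft(sig * np.hanning(t.size)))   # windowed spectrum

for f_lo, f_hi in [(1, 20), (60, 120)]:
    band = (freq_THz > f_lo) & (freq_THz < f_hi)
    print(f"peak in {f_lo}-{f_hi} THz band:",
          freq_THz[band][np.argmax(spec[band])], "THz")
```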
This THz current oscillations in the E-type state persist even after the light pulse, hinting at an underlying mechanism other than the shift-current <cit.>. We attribute this THz current to the collective orbital excitations. The Fourier transform of j^tot(t) indicates a peak at ∼ 5.0 THz, similar to that of the P_i (t) oscillations. To gain further insights into the origin of this THz-current, we look at the evolution of the asymmetry in the bonding between the Mn site and its next-nearest (NN) neighbors on either side along the FM chain with similar orbital polarization. This asymmetry in the bonding between NN neighboring Mn-sites with similar orbital polarization is defined by the quantity h^inter_i(t)=∑_σρ_σ,α_1,i,σ,α_2,i'_+(t)-ρ_σ,α_1,i,σ,α_2,i'_-(t) which is computed from the off-diagonal elements ρ_σ,α,i,σ',β,i' of the one-particle reduced density matrix ρ̂ where α_1 and α_2 are lower orbitals |Θ_l⟩ at spin-aligned next nearest-neighboring (NN) sites i and i'± along the FM chain with the same orbital polarization. Subscripts + and - of the site index i' indicate the NN sites in the +ve and -ve directions w.r.t. i^th Mn along the FM chain. Figure <ref>-c shows the dynamics of h_i^inter(t). The h^inter_i(t) exhibit THz oscillations similar to j^tot(t). The deviations of h_i^inter(t) from zero represent the asymmetry in the bonding between NN neighboring Mn-sites. The experimental detection of orbitons is challenging as these quasiparticles can couple to other excitations such as phonons or magnons. These couplings make it difficult to disentangle the specific contribution of orbitons in experimental measurements. Our study demonstrates that the optical pump and THz-probe setups can be an alternative to the present RIXS and Raman-spectroscopy for the experimental detection of these quasiparticles in optically excited ferroelectrics. In conclusion, our study predicts the generation of collective orbital modes on photoexcitation in manganites exhibiting long-range orbital-order. The presence of weak ferroelectricity ensures the manifestation of collective modes in the non-linear photocurrent that results in characteristic THz-emission. These THz-emissions are further enhanced by the dynamical modulation of intersite orbital interactions induced by optical phonon dynamics. Complex quantum materials with strong correlations and interactions are known to display several interesting and non-trivial optical properties. Our work highlights that quasi-particle excitations and their dynamics can be traced by studying the non-linear optical responses of such materials under light illumination. This work was primarily supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences, and Engineering Division, as part of the Computational Materials Sciences Program. Additional support for data interpretation was provided by the Molecular Foundry, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) 217133147/SFB 1073, projects B02, B03, and C02.
http://arxiv.org/abs/2306.12273v1
20230621135108
Josephson Diode Effect in Andreev Molecules
[ "J. -D. Pillet", "S. Annabi", "A. Peugeot", "H. Riechert", "E. Arrighi", "J. Griesmar", "L. Bretheau" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
[email protected] [email protected] Laboratoire de Physique de la Matière condensée, CNRS, Ecole Polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France We propose a new platform for observing the Josephson diode effect: the Andreev molecule. This nonlocal electronic state is hosted in circuits made of two closely spaced Josephson junctions, through the hybridization of the Andreev states. The Josephson diode effect occurs at the level of one individual junction while the other one generates the required time-reversal and spatial-inversion symmetry breaking. We present a microscopic description of this phenomenon based on fermionic Andreev states, focusing on single channels in the short limit, and we compute both supercurrent and energy spectra. We demonstrate that the diode efficiency can be tuned by magnetic flux and the junctions transmissions, and can reach 45 %. Going further, by analyzing the Andreev spectra, we demonstrate the key role played by the continuum, which consists of leaky Andreev states and is largely responsible for the critical current asymmetry. On top of proposing an experimentally accessible platform, this work elucidates the microscopic origin of the Josephson diode effect at the level of the fermionic Andreev states. Josephson Diode Effect in Andreev Molecules L. Bretheau July 31, 2023 =========================================== § INTRODUCTION In a Josephson diode (JD), the magnitude of the critical current depends on the direction in which the supercurrent flows. Although it looks exotic, this phenomenon is actually quite ubiquitous and appears in multiple superconducting circuits. For instance, it has been known for decades that asymmetric SQUIDs can behave as such, either when they are composed of tunnel Josephson junctions and linear inductances <cit.> or when they are made of well-transmitted Josephson junctions <cit.>. The JD effect, and its close cousin the superconducting diode effect, have recently received renewed interest in the field of quantum materials, with experiments performed in superconducting films <cit.> and Josephson junctions <cit.>. In both cases, the materials, which behave as a bulk superconductor or play the role of the junction's weak link, display internal symmetry breakings that originate from magnetic interactions (either intrinsic like spin-orbit or induced by a magnetic field) or a non-centrosymmetric crystalline structure. Examining the critical current asymmetry can actually serve as a practical and potent method for investigating the electronic characteristics of novel materials, such as detecting the presence and type (Rashba or Dresselhaus) of spin-orbit interaction <cit.>. To observe the JD effect, it is required to break both spatial-inversion and time-reversal symmetries. In a symmetric system, the current-phase relation satisfies I(-δ)=-I(δ), which indeed imposes I_c^+=-I_c^- for the critical currents i.e. no JD effect. This broken-symmetry requirement is fulfilled in asymmetric SQUIDs at finite magnetic flux <cit.>, in the above-mentioned Josephson weak links <cit.>, in multiterminal Josephson devices <cit.> and in systems with finite Cooper pair momentum <cit.>. In this work, we propose and investigate a new platform that can display the JD effect: the Andreev molecule. It is composed of two closely-spaced well-transmitted Josephson junctions (JJ), which exhibit hybridization of their respective Andreev spectrum into a molecular state. 
Consequently, this system exhibits a nonlocal Josephson effect, with the supercurrent flowing through each JJ depending on the superconducting phase difference across the other one <cit.>. Another striking consequence of hybridization in Andreev molecules is the JD effect, which occurs at the level of one JJ due to the proximity of the other one. We present here a theoretical description of this phenomenon in terms of microscopic Andreev states, such an approach being sparsely explored in the literature on JD <cit.>. By computing the supercurrent as a function of the relevant parameters, we show that the JD effect on one JJ can be controlled by phase-biasing the other JJ and that its magnitude depends on the distance between the two JJs as well as their respective transmissions. The JD effect can thus be considered as a sensitive probe to characterize the degree of nonlocality of the Andreev molecule. Going further, we compute the energy spectra and investigate the respective roles played by the Andreev bound states (ABS) and the continuum. This microscopic analysis enables us to elucidate the mechanisms leading to symmetry breakings that manifest directly in the Andreev spectra and cause the JD effect.

§ MODELING AND SYMMETRY ANALYSIS

Fig. <ref>a shows the schematic of the device, which implements a Josephson diode. It emphasizes that the device we consider is a two-terminal element, fed on one side by a current I_L and connected to the ground on the other side. The circuit is primarily composed of a JJ based on a single-channel weak link of transmission τ_L. This left JJ is in series with a second junction of transmission τ_R, the right JJ, which is enclosed in a loop threaded by a magnetic flux. In such a configuration, the supercurrents carried by the two junctions I_L and I_R can be different, and the superconducting phase differences δ_L and δ_R can be controlled independently. In the following, we will study how the right JJ can influence the supercurrent I_L through the left JJ and generate a critical current asymmetry. We model this circuit using the Bogoliubov-de Gennes formalism, where electrons obey the 2×2 Hamiltonian in Nambu space H = \begin{pmatrix} H_0+H_WL & Δ(x) \\ Δ^*(x) & -H_0-H_WL \end{pmatrix}. Here H_0 = -ħ^2/(2m) ∂_x^2 - μ is the single-particle energy, with m the electron mass and μ the chemical potential. The scattering at the two weak links, located at x = ±l/2, is modeled by H_WL = U_L δ(x+l/2) + U_R δ(x-l/2), with amplitudes U_L/R related to the transmissions τ_L/R = 1/[1+(U_L/R/ħ v_F)^2], where v_F is the Fermi velocity. For simplicity we indeed use Dirac δ-functions for the scatterers, which are appropriate for weak links of length shorter than the superconducting coherence length ξ_0. Electron pairing in each superconductor is described by the step function Δ(x), equal to Δ e^{iδ_L} for x < -l/2, to Δ for |x| < l/2, and to Δ e^{iδ_R} for x > l/2, where the superconducting gap amplitude Δ is considered constant along the whole device. In the following, we will use this model to compute the eigenspectrum of H as a function of the relevant parameters δ_L, δ_R, τ_L, τ_R and l. One typically finds a spectrum composed of discrete ABS at energies E_ABS ∈ ]-Δ,Δ[ and a continuum of scattering states at energies |E| ≥ Δ. More details about the calculations can be found in reference <cit.>. Within this model, the device behavior depends crucially on the relative distance l/ξ_0 between the junctions. When l ≫ ξ_0, each junction hosts independent ABS and the system does not exhibit a nonlocal Josephson effect.
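For orientation, the decoupled limit l ≫ ξ_0 reduces, for each junction, to the textbook single-channel short-junction result E_A(δ) = ±Δ√(1 − τ sin²(δ/2)). This formula is the standard short-junction expression and is not derived in the excerpt above; the coupled Andreev-molecule spectrum requires solving the full BdG problem. A minimal sketch of this reference case:

```python
import numpy as np

# Sketch: single-channel, short-junction Andreev bound state energies
# (the l >> xi_0 baseline referred to in the text).
Delta = 1.0                      # gap, used as the energy unit
tau = 0.94                       # channel transmission
delta = np.linspace(-2 * np.pi, 2 * np.pi, 801)

E_plus = Delta * np.sqrt(1.0 - tau * np.sin(delta / 2) ** 2)
E_minus = -E_plus

# Minimum splitting between the two branches occurs at delta = pi:
print("E_A(pi)/Delta =", np.sqrt(1 - tau))   # ~0.245 for tau = 0.94
```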
On the contrary, when l ∼ ξ_0 the ABS of the left and right junctions couple and hybridize. This leads to avoided crossings in the Andreev spectrum when two ABS become degenerate. As the coupling between ABS increases, the size of these avoided crossings grows, potentially pushing some ABS outside of the superconducting gap. This can result in a spectrum where the total number of ABS varies with the phase, as illustrated in Fig. <ref>b where one varies δ_L ∈ [-2π,2π] with a constant δ_R = 3π/5. This calculation, and all of the following ones, is performed for a fixed distance l = ξ_0/2, where the hybridization between ABS is strong <cit.> and where we expect a large JD effect. Strikingly, the ABS spectrum shown in Fig. <ref>b is asymmetric with respect to δ_L = 0 or π. This originates from both spatial-inversion and time-reversal symmetry breaking at the level of the left JJ. The Hamiltonian H(δ_L,δ_R) of Eq. (<ref>) indeed satisfies the following symmetry constraints. First, it trivially breaks local space-inversion symmetry, ℐ H(δ_L,δ_R) ℐ^-1 ≠ H(-δ_L,δ_R), where ℐ is the unitary inversion operator. Second, it has a global time-reversal symmetry, 𝒯 H(δ_L,δ_R) 𝒯^-1 = H(-δ_L,-δ_R), the antiunitary operator 𝒯 being complex conjugation. This causes I_L/R(-δ_L,-δ_R) = -I_L/R(δ_L,δ_R). But crucially, time-reversal symmetry can locally be broken at the level of the left JJ. Considering δ_R ≠ 0 (mod π) as a fixed parameter, one indeed gets 𝒯 H(δ_L,δ_R) 𝒯^-1 ≠ H(-δ_L,δ_R). This results in I_L(-δ_L) ≠ -I_L(δ_L) and one can get I_Lc^+ ≠ -I_Lc^-. From the perspective of the left JJ only, the required symmetries are broken, which allows it to behave as a JD. Interestingly, these symmetry breakings are achieved in a nonlocal way due to the spatial extension of the ground-state wavefunction over the two JJs. Though the phase δ_R is set across the right JJ, it affects the current flowing through the left one, which causes the JD effect.

§ SUPERCURRENT AND DIODE EFFECT

To study the JD effect, we now derive the supercurrent flowing through the left junction at zero temperature, I_L = (1/φ_0) ∑_{E_ABS<0} ∂E_ABS/∂δ_L + I_L^cont, where the first term corresponds to the contribution of the negative-energy ABS (φ_0 is the reduced flux quantum). The second term is the current carried by the continuum, which is obtained by integrating the contribution of all negative-energy scattering states <cit.>. Fig. <ref>a shows such a current I_L as a function of δ_L. This local current-phase relation (CPR) is plotted for different values of δ_R, for symmetric transmissions τ = τ_L/R = 0.85. The CPR of the left JJ depends on δ_R, thus demonstrating the nonlocal Josephson effect <cit.>. More importantly here, the minimum and maximum values of the CPR, also known as the negative and positive critical currents (red and green dashed lines), can be different in magnitude. Indeed, when 0 < δ_R < π, one gets I_Lc^+ ≠ -I_Lc^-, i.e. a finite JD effect. Moreover, the critical currents are achieved at asymmetric phases δ_L^+ ≠ -δ_L^- (mod 2π). The CPR thus shows a shift in phase with a finite supercurrent at δ_L = 0. Both phenomena, the JD effect and φ_0-junction physics, originate from symmetry breaking. In contrast, when time-reversal symmetry is restored at δ_R = 0 (mod π), one finds I_L(δ_L=0) = 0 and I_Lc^+ = -I_Lc^- (darkest and lightest blue curves). The dependence of the critical currents I_Lc^+ and -I_Lc^- as a function of δ_R is shown in Fig. <ref>b for τ = 0.85.
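The ABS contribution to the CPR, I_L = (1/φ_0) ∂E_ABS/∂δ_L, can be evaluated by numerically differentiating a computed branch. In the sketch below the branch is the stand-alone short-junction formula rather than the numerically obtained Andreev-molecule spectrum, the continuum term is neglected, and units are schematic; it simply illustrates the bookkeeping from spectrum to critical currents.

```python
import numpy as np

# Sketch: current-phase relation from the negative-energy ABS branch via
# I_L = (1/phi_0) * dE_ABS/ddelta_L  (continuum contribution omitted).
phi0 = 1.0                                    # reduced flux quantum, schematic units
Delta, tau = 1.0, 0.85
delta_L = np.linspace(-np.pi, np.pi, 2001)

E_lower = -Delta * np.sqrt(1 - tau * np.sin(delta_L / 2) ** 2)   # E_ABS < 0 branch
I_L = np.gradient(E_lower, delta_L) / phi0

print("I_c+ =", I_L.max(), "  I_c- =", I_L.min())   # symmetric here: no diode effect
```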
The critical current of the left JJ modulates with the phase of the right JJ, which is a direct signature of the nonlocal Josephson effect, as recently observed in Andreev molecules based on InAs/Al heterostructures <cit.>. On top of that, Fig. <ref>b directly exhibits a critical current asymmetry. The JD effect is indeed finite for all phase values except δ_R = 0 (mod π), where the system is time-reversal invariant. Interestingly, one finds the relation I_Lc^+(δ_R) = -I_Lc^-(-δ_R), a direct consequence of the global time-reversal symmetry mentioned in Section <ref>. To further characterize the JD effect, we introduce the dimensionless diode efficiency η = (I_Lc^+ - |I_Lc^-|)/(I_Lc^+ + |I_Lc^-|), such that η = 0 means no JD effect and η = 1 corresponds to an ideal Josephson diode. The diode efficiency is shown in Fig. <ref>c as a function of δ_R, for different transmissions τ, in the case of symmetric junctions. The efficiency is an odd function of δ_R due to global time-reversal symmetry. It modulates periodically and cancels at δ_R = 0 and π. It reaches its maximal value η_max at an intermediate phase that depends on the transmission τ. The maximum efficiency grows exponentially with τ, as shown in Fig. <ref>d. It remains below ∼2% for τ < 0.5, while it reaches ∼45% for τ = 1. Finally, we explored the influence of transmission asymmetry τ_L ≠ τ_R, as illustrated by the inset of Fig. <ref>d. We find that the JD effect is favored by higher transmissions for both junctions. We have thus shown that the JD effect occurs in Andreev molecules, with large efficiencies that correspond to critical current asymmetries as big as I_Lc^+/|I_Lc^-| ≃ 265%. Let us now elucidate its microscopic origin, at the level of the fermionic Andreev states.

§ MICROSCOPIC ANALYSIS

For our microscopic analysis, we compute both the density of states and the supercurrent spectral density j_L of the left JJ in the energy domain. In Fig. <ref>a, we introduce a new compact representation that allows one to visualize both quantities at the same time. For energies |E| ≥ Δ, we plot the color-coded supercurrent density j_L of the continuum. For energies |E| < Δ, we plot the ABS energies, while the supercurrent they carry is encoded in color. Such combined Andreev spectra are shown in Fig. <ref>a, as a function of energy E and local phase δ_L, for three different phases δ_R across the right JJ. We have chosen here large symmetric transmissions τ_L = τ_R = 0.94, for which we have demonstrated a strong JD effect. At δ_R = 0 (left panel), the two JJs do not couple and we recover the well-known ABS spectrum of a single channel in the short limit. In that case, the continuum does not contribute to the supercurrent <cit.>, i.e. j_L = 0. The spectra are dramatically changed when δ_R = 3π/5 or π (central and right panels of Fig. <ref>a). In such cases, the ABS of the left and right junctions hybridize, resulting in the emergence of avoided crossings around the degeneracy points (at δ_L = ±δ_R for τ_L = τ_R). Due to this level repulsion, ABS can be expelled into the continuum and transform into Andreev scattering states. These "leaky" Andreev states are no longer discrete but acquire a finite width inside the continuum. They appear as broad resonances in the current density j_L (see linecut in Fig. <ref>b), with a width that increases as |E| departs from the gap edge at |E| = Δ, until they fade away for |E| → ∞. Similarly to the ABS energies, these resonances, which are highlighted as dashed lines in the Andreev spectra, depend on δ_L, thereby carrying a supercurrent.
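Returning briefly to the efficiency defined earlier in this section, η can be extracted directly from any sampled current-phase relation. The toy sketch below uses an invented anharmonic CPR with broken δ → -δ symmetry, not the model's computed CPR, purely to illustrate the extraction.

```python
import numpy as np

# Sketch: diode efficiency eta = (I_c+ - |I_c-|) / (I_c+ + |I_c-|) from a sampled CPR.
delta = np.linspace(-np.pi, np.pi, 4001)
I = np.sin(delta) + 0.35 * np.sin(2 * delta + 0.8)   # toy CPR with broken symmetry

Ic_plus, Ic_minus = I.max(), I.min()
eta = (Ic_plus - abs(Ic_minus)) / (Ic_plus + abs(Ic_minus))
print(f"I_c+ = {Ic_plus:.3f},  I_c- = {Ic_minus:.3f},  eta = {eta:.3f}")
```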
Strikingly, the leaky Andreev states appear precisely at phases δ_L where the ABS vanish in the continuum. Using this compact representation, we could thus make explicit the connection between ABS and leaky Andreev states. This explains why the continuum can carry significant supercurrent in Andreev molecules. Going further, we now study the Andreev spectra symmetry that determines the appearance of the JD effect. At δ_R =0 or π (left and right panels of Fig. <ref>a), the overall symmetry of the spectra with respect to δ_L indicates the absence of the JD effect. This is consistent with both our global symmetry analysis performed in Section <ref> and our results from Section <ref>. Interestingly, at δ_R =π, the continuum carries a significant supercurrent, opposite to the one carried by the ABS. This results in a largely reduced critical current, as can be seen in Fig. <ref>b. However, this does not lead to a critical current asymmetry since time-reversal symmetry is preserved. When δ_R 0 (mod π), the Andreev spectra are no longer symmetrical with respect to δ_L, as exemplified by the central panel of Fig. <ref>a computed at δ_R=3π/5. In that case, the positive and negative critical currents are such that |I_Lc^-|>I_Lc^+. These currents are reached at phases δ_L^+ and δ_L^-, where the Andreev spectrum is completely different, with two negative energy ABS at δ_L^+ and a single one at δ_L^-. This asymmetry takes root in the different microscopic mechanisms responsible for the ABS hybridization (see Fig. <ref>c). At δ_L^+, the dominant mechanism is the double crossed Andreev reflection (dCAR), while at δ_L^- it involves double elastic cotunneling (dEC) of Cooper pairs <cit.>. Indeed, at δ_L=δ_L^±, which is in the vicinity of ±δ_R, currents I_L and I_R are counter-propagating (resp. co-propagating) via dCAR (resp. dEC). These two mechanisms are not equally probable, hence the asymmetry, especially at large transmission. When τ∼ 1, dEC is very likely while the dCAR probability vanishes to zero due to spatial translational symmetry that results in conserved momenta <cit.>. The avoided crossing is therefore much larger in the vicinity of δ_L^-, where dEC is the dominant mechanism. It is so strong that one pair of ABS is expelled into the continuum and morphs into leaky Andreev states. Consequently, the positive and negative critical currents are carried by very different fermionic states. Indeed at δ_L=δ_L^-, both the ABS and the continuum contribute to I_Lc^-, in the same direction (red arrows corresponding to negative current in Fig. <ref>c, last column). In contrast at δ_L=δ_L^+, the continuum contribution is negligible and the current is carried by two different ABS, with opposite direction (blue and red arrows), resulting in a largely reduced critical current I_Lc^+. Interestingly, it seems that this JD effect can be related to the current flow in the right JJ (purple arrow in Fig. <ref>c, last column). We see that the critical current is smaller when I_L and I_R counter-propagate, as if the right JJ was inducing a friction-like effect, impeding the supercurrent flow through the left JJ. Finally, we have seen that the emergence of the JD effect is linked to a key contribution of the continuum to the supercurrent, though we are in the short junction limit. Such a phenomenon seems intimately related to symmetry breakings, as recently discussed in Ref. <cit.> in the context of finite Cooper pair momentum. Remarkably, we find this same notion in the Andreev molecule. 
§ CONCLUDING REMARKS

In conclusion, we have demonstrated the occurrence of the Josephson diode effect in Andreev molecules. This quantum phenomenon is a ground-state property and results from the nonlocality of the Andreev molecule wavefunction. Our findings reveal that the critical current of a junction can depend on the phase across a neighboring junction as well as on the relative current flow, unequivocally establishing the presence of the JD effect. The efficiency η of this effect increases with the transmission of the weak links and can reach 45%. Although disorder is likely to diminish η significantly, such a large effect should be readily accessible experimentally. By performing both a symmetry and a microscopic study, we could investigate the origin of the JD effect. In particular, by analyzing the Andreev spectra, we could demonstrate the key role played by the continuum composed of leaky Andreev states and their connection to the ABS. We hope that our work will motivate further exploration of the JD effect, both theoretically and experimentally, employing a similar microscopic approach by examining the spectrum of Andreev states and the role of the continuum. Such investigations could potentially lead to an improved understanding of the underlying mechanisms responsible for the JD effect in exotic quantum materials. On the other hand, it would be interesting to investigate this physics in the context of superconducting quantum dots, where Coulomb interaction plays a crucial role. Moreover, we anticipate that intricate architectures comprising chains of several junctions, i.e. Andreev polymers, could display both a long-range nonlocal Josephson effect and an enhanced Josephson diode effect. Finally, the asymmetric Josephson potential engineered in Andreev molecules could be harnessed to implement non-reciprocal quantum devices operating at microwave frequencies, such as circulators and quantum-limited amplifiers <cit.>. During the preparation of the manuscript, we became aware of a related experimental work <cit.>.

§.§ Acknowledgements
JDP acknowledges support of the Agence Nationale de la Recherche through grant ANR-20-CE47-0003. LB acknowledges support of the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 947707).

[1] T. A. Fulton, L. N. Dunkleberger, and R. C. Dynes, Phys. Rev. B 6, 855 (1972).
[2] Q. Le Masne, Asymmetric current fluctuations and Andreev states probed with a Josephson junction, Ph.D. thesis (2009).
[3] M. L. Della Rocca et al., Phys. Rev. Lett. 99, 127005 (2007).
[4] L. Bretheau, Localized Excitations in Superconducting Atomic Contacts: Probing the Andreev Doublet, Ph.D. thesis (2013).
[5] R. S. Souto, M. Leijnse, and C. Schrade, Phys. Rev. Lett. 129, 267702 (2022).
[6] F. Ando et al., Nature 584, 373 (2020).
[7] L. Bauriedl et al., Nat. Commun. 13 (2022).
[8] J. Shin et al., arXiv:2111.05627 (2021).
[9] E. Bocquillon et al., Nat. Nanotechnol. 12, 137 (2017).
[10] H. Wu et al., Nature 604, 653 (2022).
[11] C. Baumgartner et al., Nat. Nanotechnol. 17, 39 (2022).
[12] K.-R. Jeon et al., Nat. Mater. 21, 1008 (2022).
[13] B. Pal et al., Nat. Phys. 18, 1228 (2022).
[14] C. Baumgartner et al., J. Phys.: Condens. Matter 34 (2022).
[15] M. Trahms et al., Nature 615 (2023).
[16] A. Rasmussen et al., Phys. Rev. B 93, 155406 (2016).
[17] M. Gupta et al., arXiv:2206.08471 (2022).
[18] J. Chiles et al., arXiv:2210.02644 (2022).
[19] F. Zhang et al., arXiv:2301.05081 (2023).
[20] R. Mélin, arXiv:2103.03519 (2021).
[21] M. Davydova, S. Prembabu, and L. Fu, Sci. Adv. 8 (2022).
[22] J.-D. Pillet, V. Benzoni, J. Griesmar, J.-L. Smirr, and Ç. Ö. Girit, Nano Lett. 19, 7138 (2019).
[23] J.-D. Pillet, V. Benzoni, J. Griesmar, J.-L. Smirr, and Ç. Girit, SciPost Phys. Core 2, 009 (2020).
[24] V. Kornich, H. S. Barakov, and Y. V. Nazarov, Phys. Rev. Research 1, 033004 (2019).
[25] V. Kornich, H. S. Barakov, and Y. V. Nazarov, Phys. Rev. B 101, 195430 (2020).
[26] M. Kocsis, Z. Scherübl, G. Fülöp, P. Makk, and S. Csonka, arXiv:2303.14842 (2023).
[27] S. Matsuo et al., Commun. Phys. 5 (2022).
[28] D. Z. Haxell et al., arXiv:2306.00866 (2023).
[29] C. W. J. Beenakker and H. van Houten, Phys. Rev. Lett. 66, 3056 (1991).
[30] A. Freyn, B. Douçot, D. Feinberg, and R. Mélin, Phys. Rev. Lett. 106, 257005 (2011).
[31] D. Feinberg et al., Eur. Phys. J. B 88, 99 (2015).
[32] K. M. Sliwa et al., Phys. Rev. X 5, 041020 (2015).
[33] N. E. Frattini et al., Appl. Phys. Lett. 110, 222603 (2017).
[34] N. E. Frattini, V. V. Sivak, A. Lingenfelter, S. Shankar, and M. H. Devoret, Phys. Rev. Applied 10, 054020 (2018).
[35] S. Matsuo et al., arXiv:2305.07923 (2023).
http://arxiv.org/abs/2306.05726v1
20230609074624
In-Sample Policy Iteration for Offline Reinforcement Learning
[ "Xiaohan Hu", "Yi Ma", "Chenjun Xiao", "Yan Zheng", "Zhaopeng Meng" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Offline reinforcement learning (RL) seeks to derive an effective control policy from previously collected data. To circumvent errors due to inadequate data coverage, behavior-regularized methods optimize the control policy while concurrently minimizing deviation from the data collection policy. Nevertheless, these methods often exhibit subpar practical performance, particularly when the offline dataset is collected by sub-optimal policies. In this paper, we propose a novel algorithm employing in-sample policy iteration that substantially enhances behavior-regularized methods in offline RL. The core insight is that by continuously refining the policy used for behavior regularization, in-sample policy iteration gradually improves itself while implicitly avoiding querying out-of-sample actions, thereby averting catastrophic learning failures. Our theoretical analysis verifies its ability to learn the in-sample optimal policy, exclusively utilizing actions well-covered by the dataset. Moreover, we propose competitive policy improvement, a technique applying two competitive policies, both of which are trained by iteratively improving over the best competitor. We show that this simple yet potent technique significantly enhances learning efficiency when function approximation is applied. Lastly, experimental results on the D4RL benchmark indicate that our algorithm outperforms previous state-of-the-art methods in most tasks.
§ INTRODUCTION

Deep reinforcement learning (RL) has achieved considerable success in various decision-making fields such as video games <cit.>, recommendation and advertising systems <cit.>, logistics transportation <cit.>, and robotics <cit.>. However, a critical obstacle that hinders the application of RL is its trial-and-error learning paradigm. Exploratory actions may cause serious damage to agents <cit.>, making this learning paradigm daunting when extended to real-world applications related to life safety, such as autonomous driving <cit.> and medical care <cit.>. Offline RL aims to solve this problem by learning policies from data previously collected by unknown behavior policies, without interacting with the environment. Therefore, offline RL is expected to be a promising direction for applying RL to safety-critical applications <cit.>. Offline datasets often provide limited coverage of the state-action space. Directly applying traditional RL algorithms may therefore cause extrapolation errors, leading to severe overestimation of Q-values for out-of-distribution (OOD) state-action pairs. To address this, certain offline RL methods impose various constraints on the loss functions of common RL algorithms, promoting pessimism towards accessing OOD state-action pairs. Some approaches constrain the learned policy to remain close to the behavior policy by minimizing their KL-divergence <cit.>. However, these methods require the dataset to be generated by an expert or near-optimal policy to obtain a reliable behavior policy for regularization using techniques such as behavioral cloning (BC) <cit.>. When applied to datasets originating from more suboptimal policies, these methods exhibit subpar performance. Recent studies suggest refining traditional behavioral cloning by filtering the data to clone <cit.>, or by incorporating future states or rewards as conditions using generative models <cit.> or more sophisticated diffusion models <cit.>, to enhance the performance of behavior-regularized methods.

[Figure: Performance of using the top, middle, and bottom 5%BC policies as regularization in TD3+5%BC across five random seeds. The shaded region represents the standard deviation.]

We also empirically demonstrate the impact of regularization using distinct policies on the 'hopper-medium-replay' and 'hopper-medium-expert' datasets in D4RL <cit.>. We leverage Percentile Behavior Cloning (%BC) to generate policies of varied performance <cit.>. Specifically, we filter the trajectories with the top 5%, median 5%, and bottom 5% returns for behavioral cloning. Following this, we modify TD3+BC to develop the TD3+5%BC algorithm by replacing the behavior regularization with the 5%BC policy, and subsequently train TD3+5%BC on the original dataset. As illustrated in Figure <ref>, a significant discrepancy in the performance of 5%BC (denoted by dashed lines) derived from different 5% data subsets is observed. TD3+5%BC, which relies on regularization toward the 5%BC policy, also exhibits considerable variation (indicated by solid lines). These findings affirm that a refined regularization term assists in learning a superior policy, motivating us to devise advanced policies for regularization. Recently, a more appealing approach has been proposed that constrains the set of actions used for bootstrapping to the support of the dataset, enabling in-sample maximum calculation. BCQ <cit.> first proposes using a generative model to approximate the support of the dataset and selecting the best action among the generated candidates.
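The candidate-action maximum used by such BCQ-style methods can be sketched in a few lines. Both `generative_model` and `q_value` below are illustrative stand-ins rather than the actual trained networks of any specific algorithm.

```python
import numpy as np

# Sketch: pick the best action among candidates sampled from a model of the dataset support.
rng = np.random.default_rng(0)

def generative_model(state, n_candidates=10):
    # stand-in for a VAE/behavior model fit to the offline dataset
    return rng.normal(loc=0.0, scale=0.3, size=(n_candidates, 2))

def q_value(state, actions):
    # stand-in critic: prefers actions close to (0.2, -0.1)
    target = np.array([0.2, -0.1])
    return -np.sum((actions - target) ** 2, axis=1)

state = np.zeros(4)
candidates = generative_model(state)
best_action = candidates[np.argmax(q_value(state, candidates))]
print("selected in-sample action:", best_action)
```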
IQL <cit.> proposes to use expectile regression as an approximation of the in-sample maximum. <cit.> observe that IQL's performance deteriorates when the data distribution is biased towards suboptimal actions, as the expectile regression targets are dragged down. Consequently, <cit.> approximate the in-sample max objective with an in-sample softmax and develop InAC, a methodology that updates an in-sample soft greedy policy solely using actions sampled from the dataset. Building on the in-sample max concept, this paper contributes a novel algorithm, In-Sample Policy Iteration (ISPI), which substantially enhances behavior-regularized methods in offline RL. The central idea lies in the continuous refinement of the policy used for behavior regularization, allowing ISPI to implicitly circumvent the invocation of out-of-sample actions and thus avert catastrophic learning failures. Our theoretical analysis confirms that ISPI is able to learn an in-sample optimal policy. Empirical evidence in the tabular setting also corroborates its capability to identify the optimal solution when the data is gathered using a highly sub-optimal behavior policy. Moreover, we propose competitive policy improvement, an innovative technique employing two policies, which are iteratively trained to surpass the performance of the best competitor between them. We demonstrate that incorporating this straightforward yet potent technique in the function-approximated ISPI implementation yields superior performance. Experimental results on the widely used D4RL benchmarks show that ISPI in offline deep RL settings outperforms previous state-of-the-art methods on the majority of tasks, with both rapid training speed and reduced computational overhead.

§ BACKGROUND
§.§ Markov Decision Process

We consider a Markov Decision Process (MDP) determined by M = {𝒮, 𝒜, P, r, γ} <cit.>, where 𝒮 and 𝒜 represent the state and action spaces. The discount factor is given by γ ∈ [0, 1), r: 𝒮×𝒜 → ℝ denotes the reward function, and P: 𝒮×𝒜 → Δ(𝒮) encapsulates the transition dynamics [We use Δ(𝒳) to denote the set of probability distributions over a finite set 𝒳.]. Given a policy π: 𝒮 → Δ(𝒜), we use 𝔼^π to denote the expectation under the distribution induced by the interconnection of π and the environment. The value function specifies the future discounted total reward obtained by following policy π, v^π(s) = 𝔼^π[ ∑_t=0^∞ γ^t r(s_t, a_t) | s_0 = s ]. The state-action value function is defined as q^π(s,a) = r(s,a) + γ 𝔼_s'∼P(·|s,a)[ v^π(s') ]. There exists an optimal policy π^* that maximizes values for all states s ∈ 𝒮. The optimal value functions, v^* and q^*, satisfy the Bellman optimality equations, v^*(s) = max_a r(s,a) + γ 𝔼_s'[ v^*(s') ] and q^*(s,a) = r(s,a) + γ 𝔼_s'∼P(·|s,a)[ max_a' q^*(s', a') ].

§.§ Offline Reinforcement Learning

In this work, we consider the problem of learning an optimal decision-making policy from a previously collected offline dataset, denoted as 𝒟 = {s_i, a_i, r_i, s_i'}_i=0^n-1. The data is generated as follows: s_i ∼ ρ, a_i ∼ π_𝒟, s'_i ∼ P(·|s_i,a_i), r_i = r(s_i, a_i), where ρ represents an unknown probability distribution over states, and π_𝒟 signifies an unknown behavior policy. In offline RL, the learning algorithm can only take samples from 𝒟, without collecting new data through interactions with the environment. Due to the limited coverage of the state-action space, directly applying online RL algorithms may encounter extrapolation errors triggered by bootstrapping from OOD actions.
To overcome this issue, behavior-regularized methods impose a constraint on the policy to emulate π_𝒟 during the optimization by integrating a KL-divergence term <cit.>. In particular, let τ > 0 and consider the following objective: max_π 𝔼_a∼π[ q(s, a) ] - τ KL( π(s) ‖ π_𝒟(s) ). Here, q is some value estimate. Typical choices encompass the value of π_𝒟 <cit.>, or the value of π <cit.>. The challenge of behavior regularization is that it relies on the dataset being generated by an expert or near-optimal π_𝒟. When used on datasets derived from more suboptimal policies, typical of those prevalent in real-world applications, these methods do not yield satisfactory results. Besides behavior regularization, another strategy is to find the in-sample optimal policy, which directly avoids selecting out-of-distribution actions. In contrast to eq:br, it considers max_{π≼π_𝒟} 𝔼_a∼π[ q(s,a) ], where π ≼ π_𝒟 denotes that the support of π is a subset of the support of π_𝒟. Though a simple idea, designing a computationally efficient algorithm to solve this objective is not trivial. <cit.> propose to learn a generative model π' ≈ π_𝒟. The in-sample maximum is approximated by taking the maximum over a few candidate actions sampled from π'. Unfortunately, this cannot avoid querying OOD actions, which results in poor performance. IQL instead uses expectile regression to approximate the in-sample max <cit.>. The core idea is to learn a value function that predicts upper expectiles, which are a (close) lower bound to the true maximum. The method nicely avoids querying OOD actions and empirically performs well. <cit.> observe that IQL performs poorly when the data-collecting policy is suboptimal, since this pulls down the expectile regression targets. They instead approximate eq:in-sample-max with an in-sample softmax for τ > 0: max_{π≼π_𝒟} 𝔼_a∼π[ q(s,a) ] ≈ τ log ∑_{a: π_𝒟(a|s)>0} exp( q(s,a)/τ ). They show that it is more straightforward to approximate this using only actions in the dataset. Based on this idea, an actor-critic algorithm is developed with provable convergence in the tabular case. This algorithm also exhibits superior empirical performance compared to IQL, particularly when the offline data is procured from a suboptimal policy. However, one limitation of <cit.> is the necessity to fine-tune the parameter τ for each domain. A smaller τ results in a smaller approximation error, yet it may also present challenges for optimization <cit.>. This paper contributes in-sample policy iteration, a new offline RL algorithm that harnesses the benefits of both behavior-regularized and in-sample algorithms. Our primary insight lies in the continuous refinement of the policy applied for behavior regularization. This iterative refinement process enables ISPI to gradually improve itself within the support of the behavior policy and to provably converge to the in-sample optimal policy. We also design scalable implementations of ISPI with function approximation that can be easily optimized by sampling from offline data.

§ IN-SAMPLE POLICY ITERATION FOR OFFLINE REINFORCEMENT LEARNING

This section introduces in-sample policy iteration, a simple yet potent algorithm for achieving in-sample optimality in offline RL. Formally, consider the in-sample Bellman optimality equation <cit.>: q_{π_𝒟}^*(s,a) = r(s,a) + γ 𝔼_s'∼P(·|s,a)[ max_{a': π_𝒟(a'|s')>0} q_{π_𝒟}^*(s', a') ]. In-sample optimality explicitly avoids bootstrapping from OOD actions while still guaranteeing optimality for transitions that are well-supported by the offline data.
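To make the in-sample maximum and its softmax relaxation concrete, the following single-state toy sketch (with invented numbers) contrasts them with the unrestricted maximum, which may pick an action outside the dataset support.

```python
import numpy as np

# Sketch: in-sample max vs. in-sample softmax at a single state, restricted to dataset actions.
q_row = np.array([1.0, 2.5, 4.0, 0.5])             # q(s, a) for 4 discrete actions
in_support = np.array([True, True, False, True])   # a = 2 never appears in the dataset

q_in = q_row[in_support]
tau = 0.5

in_sample_max = q_in.max()
in_sample_softmax = tau * np.log(np.sum(np.exp(q_in / tau)))

print("unrestricted max :", q_row.max())           # 4.0  (relies on an OOD action)
print("in-sample max    :", in_sample_max)         # 2.5
print("in-sample softmax:", in_sample_softmax)     # ~2.53, approaches 2.5 as tau -> 0
```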
As previously discussed, the fundamental challenge lies in the development of scalable algorithms for its resolution. We propose an in-sample policy iteration (ISPI) algorithm for solving eq:in-sample-bellman-optimality. Consider the following objective for τ>0 max_π_a∼π[ q^π (s, a) ] - τ( π( s) || π̅ ( s) ) , which generalizes the behavior regularization using an arbitrary reference policy. A few key observations. First, eq:md has a closed form solution (See <ref>) π̅^*(a|s) ∝π̅(a|s) exp( q^π(s,a)/τ) Accordingly, we have π̅^*(a|s)=0 as long as π̅(a|s)=0. Second, the optimal policy π̅^* is guaranteed to improve over the reference policy π̅. To verify this, _a∼π̅^*[ q^π̅^* (s,a) ] ≥_a∼π̅^*[ q^π̅^* (s,a) ]- τ( π̅^* (s) || π̅ (s) ) ≥_a∼π̅[ q^π̅ (s,a) ] , where the first inequality uses the non-negativity of KL divergence, the second inequality is due to the fact that π̅^* is the optimal policy of eq:md. In conclusion, eq:md implicitly guarantees policy improvement constrained on the support of π̅. The core insight of ISPI is to extend this key observation in an iteratively manner, which results in the following policy iteration algorithm for offline RL. Let π_0 be the initial policy. In each iteration t≥ 0, we first evaluate the current policy π_t, and update the policy by π_t+1(a | s) ∝π_t (a|s) exp( q^π_t (a|s) /τ) , where τ > 0 is a hyperparameter. As shown in eq:md-closed-form and eq:md-pi, this simple update guarantees for any t≥ 1, v^π_t≥ v^π_t-1 and π_t≼π_ , given that π_0 ≼π_. In practice, we assume that there exists a black box policy estimator that produces q_t ≈ q^π_t. Replace q^π_t-1 with q_t and apply telescoping, the policy update becomes π_t+1(a | s) ∝π_t (a|s) exp( q_t (a|s) /τ) = π_0(a|s) exp( ∑_i=0^t-1 q_i (a|s)/τ) . Thus, ISPI can be viewed as a softened or averaged version of policy iteration. Intuitively, averaging reduces noise of the value estimator and increases the robustness of policy update. We note that this algorithm is a special case of the mirror descent algorithm in online learning <cit.>. Similar idea has been previously investigated in online RL <cit.>. In particular, the Politex algorithm considers exactly the same update as eq:ispi for online RL <cit.>. Our key contribution is to show this simple yet powerful policy update rule also facilitates offline learning, as it guarantees policy improvement while implicitly avoid querying OOD actions. Finally, we define the optimal state value v_^*(s)=max_π≼π__a∼π[ q^*_π_(s,a)] for s∈. The next result characterizes the error bound of ISPI when compared to this in-sample optimal value. We initialize the initial policy π_0 of ISPI with the uniform policy that is on the support of π_. Let π_t be the policy of ISPI at iteration t, and ε_t = max_s,a |q̂(s,a) - q^π_t(s,a)| be error of value estimator when evaluating π_t. Then there exists a parameter τ such that for any s∈ v^*_(s) - v^π_t(s) ≤1/(1-γ)^2√(2log ||/t) + 2 max_0≤ i ≤ t-1ε_i/1-γ . We note that if policy evaluation can be solved exactly at each learning iteration, ISPI is guaranteed to find the in-sample optimal policy. § POLICY OPTIMIZATION USING IN-SAMPLE POLICY ITERATION This section introduces two practical implementations of ISPI. Similarly to <cit.> and <cit.>, both of these implementations do not need samples from a learned approximation of π_ as used in <cit.>. Throughout this section we generically develop algorithms for continuous actions. Extension to discrete action setting is straightforward. 
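Before turning to function approximation, the following is a minimal tabular sketch of the idealized update in eq:ispi with exact policy evaluation. The two-state MDP is our own toy construction (not the GridWorld used in the experiments) and serves only to illustrate that, under the multiplicative update, actions outside the behavior support never regain probability mass.

```python
# Tabular ISPI sketch: exact policy evaluation + pi_{t+1} proportional to pi_t * exp(q / tau),
# starting from a uniform policy over in-sample actions. Toy 2-state, 2-action MDP.
import numpy as np

nS, nA, gamma, tau = 2, 2, 0.9, 0.5
P = np.zeros((nS, nA, nS))
P[0, 0, 0] = P[0, 1, 1] = P[1, 0, 0] = P[1, 1, 1] = 1.0   # deterministic transitions
r = np.array([[0.0, 1.0], [0.0, 1.0]])                    # action 1 is rewarding
support = np.array([[1.0, 1.0], [1.0, 0.0]])              # action 1 unsupported in state 1

def q_of(pi):
    # Solve v = r_pi + gamma * P_pi v exactly, then back out q(s, a).
    r_pi = (pi * r).sum(axis=1)
    P_pi = np.einsum("sa,sat->st", pi, P)
    v = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    return r + gamma * P @ v

pi = support / support.sum(axis=1, keepdims=True)          # uniform on the support
for _ in range(200):
    pi = pi * np.exp(q_of(pi) / tau)                       # zero entries stay zero
    pi /= pi.sum(axis=1, keepdims=True)
print(pi.round(3))   # converges to the greedy in-sample policy
```

Because each step only reweights the previous policy, any action that starts with zero probability keeps zero probability, which is the in-sample property used in the analysis above.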
We build ISPI as an actor-critic algorithm based on TD3+BC <cit.>. We learn a policy (actor) π_ω with parameters ω, and an action-value function (critic) q_θ with parameters θ. The policy π_ω is parameterized as a Gaussian policy with a learnable mean <cit.>. We also normalize the features of every state in the offline dataset as discussed in <cit.>. We first describe a straightforward implementation of ISPI. At iteration t, we update the policy by solving eq:md using gradient descent. The reference policy π̅ is reinitialized using a copy of the current parameters ω_t and kept frozen during the update of the policy. Since the policy π_ω is parameterized with a Gaussian policy, optimizing eq:md is straightforward, as the KL divergence between two Gaussians can be expressed analytically[The KL divergence also has an analytical form with a parameterized covariance.]: max_ω E_s∼D[ q_θ (s, π_ω(s)) - τ (π_ω(s) - π̅ (s))^2 ] , where τ is a hyperparameter. We note that this update rule resembles that of TD3+BC, except that the reference policy is updated iteratively, while it is fixed as the behavior policy π_ in TD3+BC. In practice, eq:seq-ispi0 is optimized with a limited number of gradient updates. Thus the in-sample property, π_ω_t+1≼π_, cannot be guaranteed exactly. We therefore explicitly add a behavior regularization term to eq:seq-ispi0, which results in the following objective for the policy update max_ω E_s,a∼D[ q_θ (s, π_ω(s)) - τλ (π_ω(s) - π̅ (s))^2 - τ (1-λ) (π_ω(s) - a )^2 ] . The parameter λ controls the balance between in-sample policy improvement and behavior regularization. For λ=0, we exactly recover the TD3+BC policy update. The parameter τ is tuned using the same method described in <cit.>. Finally, the critic is updated by min_θ E_s,a,r,s'∼D[ 1/2( r + γ q_θ̅(s', π_ω(s')) - q_θ(s,a) )^2] , where θ̅ denotes the parameters of a target network. We also apply the double-Q trick to stabilize training <cit.>. This completes the description of our In-Sample Policy Iteration with Sequential reference policy update (ISPI-S) algorithm. Algorithm <ref> gives the pseudocode of ISPI-S. A limitation of ISPI-S lies in the potential for a negligible difference between the learning policy and the reference policy, owing to the limited gradient steps when optimizing eq:ispi-s. This minor discrepancy may restrict the policy's improvement over the reference policy. To improve the efficiency of in-sample policy improvement, we propose competitive policy improvement, which applies an ensemble of competitive policies as the reference policy. In particular, we apply two policies with independently initialized parameters ω^1 and ω^2. Let q_θ^1 and q_θ^2 be the value functions of these two policies respectively. When updating the parameters ω^i for i∈{1,2}, we choose the current best policy as the reference policy, where superiority is decided according to the current value estimate. In particular, for both ω^1 and ω^2 we consider max_ω E_s,a∼D[ q_θ (s, π_ω(s)) - τλ (π_ω(s) - π_ω^i^* (s))^2 - τ (1-λ) (π_ω(s) - a )^2 ] , where i^* = argmax_i∈{1, 2}{ q_θ^1 (s, π_ω^1(s) ), q_θ^2 (s, π_ω^2(s) ) } . In other words, we only allow a superior reference policy to pull up the performance of the learning policy, and avoid having the learning policy's performance pulled down by an inferior reference policy. Finally, both critic parameters θ^1 and θ^2 are learned as described in eq:critic-loss. Algorithm <ref> gives the pseudocode of In-Sample Policy Iteration with Competitive reference policy update (ISPI-C).
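To make the ISPI-C policy update concrete, here is a hedged PyTorch sketch of the competitive variant of eq:ispi-s. The module and argument names are our own, the actors are treated as deterministic TD3-style networks, and TD3+BC details such as Q-value rescaling, target policy smoothing, and delayed updates are omitted; this is an illustrative sketch, not the official implementation.

```python
# One ISPI-C policy loss (competitive variant of eq:ispi-s), minimized w.r.t. `actor`.
import torch

def ispi_c_actor_loss(actor, ref_actor1, ref_actor2, critic1, critic2,
                      states, dataset_actions, tau, lam):
    pi_a = actor(states)                                      # learning policy's actions
    with torch.no_grad():
        a1, a2 = ref_actor1(states), ref_actor2(states)
        q1, q2 = critic1(states, a1), critic2(states, a2)
        ref_a = torch.where(q1 >= q2, a1, a2)                 # per-state best competitor
    q = critic1(states, pi_a)
    reg_ref = ((pi_a - ref_a) ** 2).sum(-1).mean()            # pull toward the reference policy
    reg_bc = ((pi_a - dataset_actions) ** 2).sum(-1).mean()   # behavior-cloning term
    return -q.mean() + tau * (lam * reg_ref + (1 - lam) * reg_bc)  # maximize == minimize negation
```

Setting lam=0 recovers a TD3+BC-style actor loss, while replacing the competitive reference with a frozen snapshot of the actor itself gives the ISPI-S update.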
Test-time action selection. At test time, ISPI-C allows selection from the outputs of the two actors: at state s, we calculate two actions and assess them by invoking q_θ^1(s,π_ω^1(s)) and q_θ^2(s,π_ω^2(s)). The action with the higher value is employed to interact with the environment. We provide ablation studies on using competitive actor selection at both train and test time in the experiments. § EXPERIMENT We subject our algorithm to a series of rigorous experimental evaluations. We first present an empirical study in the tabular setting to exemplify ISPI's in-sample optimality. Then, we compare the practical implementations of ISPI, utilizing function approximation, against prior state-of-the-art algorithms on the D4RL benchmark tasks <cit.>, to highlight its superior performance. In addition, we present the resource consumption (GPU memory and training time) associated with different algorithms. Finally, a comprehensive analysis of various design choices and their impact on the algorithm's performance is provided. §.§ In-sample optimality in the tabular setting We first conduct an evaluation of ISPI within a GridWorld environment, where the agent is tasked with navigating from the bottom-left to the goal positioned in the upper-right. The environment's map comprises a 7×7 grid layout, with the agent having access to four movement actions. The reward is set to 0 for each movement, with a substantial reward of 1 upon reaching the goal; this incentivizes the agent to minimize the number of steps taken. Each episode is terminated after 100 steps, and γ is set to 0.9. We use an inferior behavior policy, whose action probabilities are fixed at every state, to collect the offline data. Although the behavior policy is suboptimal, the offline data has full coverage of the state-action space, in which case a capable algorithm should still be able to identify the optimal path. We consider two baseline algorithms: InAC <cit.>, a method that is guaranteed to find the in-sample softmax policy, and a method that employs policy iteration with behavior regularization (BR), which can be viewed as an extension of TD3+BC <cit.> to the discrete action setting. The policies derived from each method were evaluated using two distinct strategies: a greedy strategy, where the action with the highest probability is chosen in accordance with the policy, and a sampling strategy that selects actions based on the policy's probability distribution. As illustrated in <ref>, both ISPI and InAC converge to the oracle across various τ settings when the greedy strategy is implemented. In contrast, BR underperforms overall when a larger τ is applied, as behavior regularization with an inferior policy hampers its ability to identify the optimal path. When the sampling strategy is employed, ISPI uniquely pinpoints the optimal solution across different τ settings. InAC fails in this case as a larger τ value introduces more randomness in the learned policy. §.§ Experiment Results on Continuous Control Problems In this section we provide a suite of results using three continuous control tasks from D4RL <cit.>: Mujoco, Antmaze, and Adroit. Mujoco, a benchmark often used in previous studies, forms the basis of our experimental framework. Adroit is a high-dimensional robotic manipulation task with sparse rewards. Antmaze, with its sparse reward property, necessitates that the agent learns to discern segments within sub-optimal trajectories and to assemble them, thereby discovering the complete trajectory leading to a rewardable position.
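For reference, the sketch below shows how offline datasets for these domains are typically loaded and how raw returns are converted into the normalized scores reported later; the environment name is only an example, and we assume the standard d4rl package interface rather than any code specific to this paper.

```python
# Hedged sketch of the usual D4RL workflow; the dataset name is illustrative.
import gym
import d4rl  # importing registers the offline environments

env = gym.make("hopper-medium-v2")
data = d4rl.qlearning_dataset(env)   # observations, actions, rewards, next_observations, terminals
print({k: v.shape for k, v in data.items()})

# D4RL scores are normalized so that 0 matches a random policy and 100 an expert policy.
raw_return = 1600.0                  # illustrative episode return
print(100.0 * env.get_normalized_score(raw_return))
```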
Given that the majority of datasets for these tasks contain a substantial volume of sub-optimal or low-quality data, relying solely on behavior regularization may be detrimental to performance. We compare ISPI with several baselines, including DT <cit.>, TD3+BC <cit.>, CQL <cit.>, IQL <cit.>, POR <cit.>, EDAC <cit.>, Diffusion-QL <cit.>, and InAC <cit.>. The results of baseline methods are either reproduced by executing the official code or sourced directly from the original papers. Unless otherwise specified, all re-executed baselines and our algorithms are run with 5 random seeds. The average normalized results of the final 10 evaluations for Mujoco and Adroit, and the final 100 evaluations for Antmaze, are reported. More details of the experiments are provided in the Appendix. Our experimental results, summarized in <ref>, show that ISPI is highly competitive across all benchmarks. In most Mujoco tasks, ISPI surpasses the existing, widely-used algorithms, and it only slightly trails behind the state-of-the-art EDAC method. For Antmaze and Adroit, ISPI's performance is on par with the top-performing methods such as POR and Diffusion-QL. We also provide learning curves of ISPI on Antmaze in <ref>, which demonstrate that ISPI exhibits stable performance and rapid convergence. Learning curves on other domains are provided in the Appendix. §.§ Ablation Studies §.§.§ Effect of Competitive Action Selection We first conduct ablations on the two competitive policy selection mechanisms in ISPI-C: (1) competitive policy improvement in training, and (2) competitive policy selection at test time. We isolate these mechanisms from ISPI-C, leading to two variants, namely ISPI-C-E (ISPI-C that solely considers selecting the best policy during Evaluation, i.e., the competitor selection mechanism during training is suspended, and both reference policies use each other for regularization) and ISPI-C-T (ISPI-C that only considers selecting a superior policy for regularization during Training, i.e., the selection from the outputs of the two actors during evaluation is disabled). We provide a thorough evaluation of ISPI-C and its two variants on Mujoco and Antmaze. Table <ref> summarizes the results, which indicate that the policy selection mechanisms, employed during both the training and evaluation phases, contribute to ISPI-C's performance. §.§.§ Effect of Hyperparameters <ref> illustrates the effects of using different hyperparameters τ and λ on the medium datasets of Hopper and HalfCheetah, offering valuable insights for algorithm tuning. The weighting coefficient λ regulates the extent to which the behavior policy is integrated into training and affects the training process. As shown in <ref>a, when λ=0.1 (the blue line), the early-stage performance excels, as the behavior policy assists in locating appropriate actions in the dataset. However, this results in suboptimal final convergence performance, attributable to the excessive behavior policy constraint on performance improvement. For larger values, such as 0.9 (the purple line), the small weight of the behavior policy leads to a performance increase during training. Unfortunately, the final performance might be poor. This is because the policy does not have sufficient behavior cloning guidance, leading to a potentially significant distribution shift during the training process.
Consequently, in our experiments, we predominantly select a λ value of 0.5 or 0.7 to strike a balance between the reference policy regularization and the behavior cloning regularization. The regularization parameter τ plays a crucial role in determining the weight of the joint regularization relative to the Q-value component. We find that (<ref>b) the τ assigned to datasets of higher quality and lower diversity (e.g., expert datasets) ought to be larger than that associated with datasets of lower quality and higher diversity (e.g., medium datasets). §.§ Resource Consumption and Convergence Speed (Table: Computational costs. Runtime (s/epoch) / GPU Mem. (GB): TD3+BC 7.4 / 1.4; EDAC 19.6 / 1.9; Diffusion-QL 39.8 / 1.5; ISPI-S 8.5 / 1.4; ISPI-C 19.1 / 1.4.) We evaluate the resource consumption of different algorithms from two aspects: (1) runtime per training epoch (1000 gradient steps); (2) GPU memory consumption. Table <ref> presents runtime and GPU memory usage on hopper-medium. The results show that, in addition to its strong performance, our method requires fewer resources, which could be beneficial for practitioners. Figure <ref> illustrates the convergence of various algorithms after one million training steps on the hopper datasets across three seeds. ISPI's convergence speed is comparable to that of TD3+BC, which is recognized for its minimal computational demands. ISPI generally converges more rapidly than the top-performing baselines EDAC and Diffusion-QL, which typically require three million and two million total training steps for convergence, respectively. (Figure: Learning curves for 1M steps.) § CONCLUSION In this paper, we propose a novel algorithm termed in-sample policy iteration (ISPI), which combines the benefits of both behavior-regularized and in-sample algorithms. By iteratively refining the policy used for behavior regularization, ISPI progressively improves itself within the behavior policy's support and provably converges to the in-sample optimal policy. We then propose two practical implementations of ISPI for tackling continuous control tasks, integrating an innovative competitive policy improvement technique. Experimental results on the D4RL benchmark show that the two practical implementations of ISPI surpass previous cutting-edge methods on a majority of tasks across various domains, offering both expedited training speed and diminished computational overhead. Nonetheless, our study is not devoid of limitations. For instance, our method's performance with function approximation is contingent upon the selection of two hyperparameters, which may necessitate tuning for optimal results. Future research may include exploring ISPI's potential in resolving offline-to-online tasks by properly relaxing the support constraint during online fine-tuning. § PROOFS We first introduce some technical lemmas that will be used in the proof. We consider a k-armed one-step decision making problem. Let Δ be the k-dimensional simplex and q=(q(1),…,q(k)) ∈ℝ^k be the reward vector. Maximum entropy optimization considers max_π∈Δ π·q + τ H(π) , where H(π) denotes the entropy of π. The next result characterizes the solution of this problem (Lemma 4 of <cit.>). For τ > 0, let F_τ(q) = τlog∑_a e^q(a)/τ , f_τ(q) = e^q/τ/∑_a e^q(a)/τ = e^(q - F_τ(q))/τ . Then it holds that F_τ(q) = max_π∈Δ π·q + τ H(π) = f_τ(q)·q + τ H(f_τ(q)) . The second result gives the error decomposition of applying the Politex algorithm to compute an optimal policy. This result is adopted from <cit.>.
Let π_0 be the uniform policy and consider running the following iterative algorithm on a MDP for t≥ 0, π_t+1(a | s) ∝π_t (a|s) exp( q̂_t (a|s) /τ) , where q̂_t is an estimate of q^π_t. Let ε_t = max_s,a |q^π_t(s,a) - q̂_t (s,a) | be the error of policy evaluation at iteration t. Then v^*(s) - v^π_t(s) ≤1/(1-γ)^2√(2log ||/t) + 2 max_0≤ i ≤ t-1ε_i/1-γ . We use vector and matrix operations to simply the proof. In particular, we use v^π∈^|| and q^π∈^||×||. Let P∈^||||× || be the transition matrix, and P^π∈^||×|| be the transition matrix between states when applying the policy π. We first apply the following error decomposition v^π^* - v^π_k-1 = v^π^* - 1/k∑_i=0^k-1 v^π_i + 1/k∑_i=0^k-1 v^π_i - v^π_k-1 . For the first part, v^π^* - 1/k∑_i=0^k-1 v^π_i = 1/k∑_i=0^k-1 (I - γ P^π^*)^-1 (T^π^* v^π_i - v^π_i) = 1/k∑_i=0^k-1 (I - γ P^π^*)^-1 (M^π^* - M^π_i) q^π_i = 1/k∑_i=0^k-1 (I - γ P^π^*)^-1 (M^π^* - M^π_i) q̂_i + 1/k∑_i=0^k-1 (I - γ P^π^*)^-1 (M^π^* - M^π_i) (q^π_i - q̂_i) ≤1/(1-γ)^2√(2log ||/k) + 1/1-γmax_i‖ε_i ‖_∞ , where <ref> follows by the value difference lemma, <ref> follows by applying the regret bound of mirror descent algorithm for policy optimization. For the second part, 1/k∑_i=0^k-1 v^π_i - v^π_k-1 = 1/k∑_i=0^k-1 (I - γ P^π_k-1)^-1 (v^π_i - T^π_k-1 v^π_i) = 1/k∑_i=0^k-1 (I - γ P^π_k-1)^-1 (M^π_i - M^π_k-1) q^π_i = 1/k∑_i=0^k-1 (I - γ P^π_k-1)^-1 (M^π_i - M^π_k-1) q̂_i + 1/k∑_i=0^k-1 (I - γ P^π_k-1)^-1 (M^π_i - M^π_k-1) ε_i ≤1/1-γmax_i‖ε_i ‖_∞ , where for the last step we use that for any s∈, ∑_i=0^k-1 (π_i - π_k-1)q̂_i ≤ 0. This follows by ∑_i=0^k-1π_k-1q̂_i = ∑_i=0^k-2π_k-1q̂_i + π_k-1q̂_k-1 + τ(π_k-1) - τ(π_k-1) ≥ ∑_i=0^k-2π_k-2q̂_i + π_k-1q̂_k-1 + τ(π_k-2) - τ(π_k-1) ≥ ⋯⋯ ≥ ∑_i=0^k-1π_i q̂_i + τ(π_0) - τ(π_k-1) ≥ ∑_i=0^k-1π_i q̂_i , where <ref> follows by applying <ref> and the definition of π_k-1, <ref> follows by the definition of π_0. Combine the above together finishes the proof. First recall the in-sample optimality equation q_π_^*(s,a) = r(s,a) + γ_s'∼ P(·|s,a)[ max_a': π_(a'|s') > 0 q_π_^*(s', a') ] , which could be viewed as the optimal value of a MDP M_ covered by the behavior policy π_, where M_ only contains transitions starting with (s,a)∈× such that π_(a|s) > 0. Then the result can be proved by two steps. First, note that the ISPI algorithm will never consider actions such that π_(a|s) = 0. This is directly implied by <ref>. Second, we apply <ref> to show the error bound of using ISPI on M_. This finishes the proof. § DETAILED EXPERIMENTAL SETTINGS In this section we provide the complete details of the experiments in our paper. §.§ The effect of different regularizations r6.0cm 5% BC Hyperparameters 0.9 Hyperparameter Value Hidden layers 3 Hidden dim 256 Activation function ReLU Mini-batch size 256 Optimizer Adam Dropout 0.1 Learning rate 3e-4 We analyze the impact on the algorithm when different policies are used as regularization terms. The most straightforward way to obtain policies with different performances is the baseline method X%BC mentioned in DT <cit.>. We set x to 5 in order to make the difference between policies more significant and let the 5%BC policy be the policy used for constraint. In detail, we selectively choose different 5% data for behavioral cloning so that we can get various policies with different performances. First, we sort the trajectories in the dataset by their return (accumulated rewards) and select three different levels of data: top (highest return), middle or bottom (lowest return). 
Each level of data is sampled at 5% of the total data volume. Then we can train 5%BC using the MLP network via BC and get three 5%BC policies with different performances. We can then use each 5%BC policy as the regularization term of TD3 to implement the TD3+5%BC method. Besides, we also normalize the states and set the regularization parameter α to 2.5. The only difference between the TD3+5%BC and TD3+BC is the action obtained in the regularization term. In TD3+BC, the action used for constraint is directly obtained from the dataset according to the corresponding state in the dataset, while in TD3+5%BC, the action for constraint is sampled using 5%BC. We set the training steps for 5% BC to 5e5 and the training steps for TD3+5%BC to 1M. Table <ref> concludes the hyperparameters of 5% BC. The hyperparameters of TD3+5%BC are the same as those of TD3+BC <cit.>. §.§ Baselines We conduct experiments on the benchmark of D4RL and use Gym-MuJoCo datasets of version v2, Antmaze datasets of version v0, and Adroit datasets of version v1. We compare ISPI with BC, DT <cit.>, TD3+BC <cit.>, CQL <cit.>, IQL <cit.>, POR <cit.>, EDAC <cit.>, Diffusion-QL <cit.> and InAC <cit.>. In Gym-Mujoco tasks, our experimental results are preferentially selected from EDAC, Diffusion-QL papers, or their original papers. If corresponding results are unavailable from these sources, we rerun the code provided by the authors. Specifically, we run it on the expert dataset for POR[<https://github.com/ryanxhr/POR>]. For DT[<https://github.com/kzl/decision-transformer>] and IQL[<https://github.com/ikostrikov/implicit_policy_improvement>], we follow the hyperparameters given by the authors to run on random and expert datasets. For Diffusion-QL[<https://github.com/Zhendong-Wang/Diffusion-Policies-for-Offline-RL>], we set the hyperparameters for the random dataset to be the same as on the medium-replay dataset and the hyperparameters for the expert dataset to be the same as on the medium-expert dataset according to the similarity between these datasets. For InAC[<https://github.com/hwang-ua/inac_pytorch>], we run it on the random dataset. In Antmaze tasks, our experimental results are taken from the Diffusion-QL paper, except for EDAC[<https://github.com/snu-mllab/EDAC>] and POR. The results of EDAC are obtained by running the authors' provided code and setting the Q ensemble number to 10 and η=1.0. Even when we transformed the rewards according to <cit.>, the performance of EDAC on Antmaze still did not perform well, which matches the report in the offline reinforcement learning library CORL <cit.>. As the results in the POR paper are under Antmaze v2, we rerun them under Antmaze v0. In the Adroit task, we rerun the experiment for TD3+BC, DT (with return-to-go set to 3200), POR (with the same parameters as the antmaze tasks), and InAC(with tau set to 0.7). §.§ ISPI To implement our idea, we made slight modifications to TD3+BC[<https://github.com/sfujim/TD3_BC>] to obtain ISPI. For ISPI-S, we select a historical snapshot policy as the reference policy. Specifically, the policy snapshot π^k-2, which is two gradient steps before the current step, is chosen as the reference policy for the current learning policy π^k. ISPI-S is trained similarly to TD3. For ISPI-C, the complete network contains two identical policy networks with different initial parameters, so that two policies with distinct performances can be obtained. 
The two policy networks in ISPI-C are updated via cross-update to fully utilize the information from both value networks. During training, the value network evaluates the actions induced by the two policy networks, and only the higher-value action is used to pull up the performance of the learning policy. During evaluation, the two policy networks are also used to select high-value actions to interact with the environment. ISPI only has one more actor compared to TD3+BC and therefore requires less computational overhead than other state-of-the-art offline RL algorithms with complex algorithmic architectures, such as EDAC (an ensemble of Q functions) and Diffusion-QL (a diffusion model). Algorithms <ref> and <ref> give the pseudocode for ISPI-S and ISPI-C. (Algorithm: ISPI-S.) (Algorithm: ISPI-C.) The loss function of ISPI per actor network has three terms, as shown in <ref> and <ref>. The action of the BC term comes directly from the dataset. <ref> shows the network architecture of ISPI-S and ISPI-C and their individual loss terms. Following TD3+BC, we normalize the states of the MuJoCo tasks and use the original rewards in the dataset. For the Antmaze datasets, we do not use state normalization and transform the rewards in the dataset according to <cit.>. For the Adroit datasets, we also do not use state normalization and standardize the rewards according to Diffusion-QL <cit.>. To get the reported results, we average the returns of 10 trajectories with five random seeds, evaluated every 5e3 steps, for MuJoCo and Adroit, and of 100 trajectories with five random seeds, evaluated every 5e4 steps, for Antmaze. In addition, we evaluate the runtime and the memory consumption of different algorithms to train an epoch (1000 gradient update steps). All experiments are run on a GeForce RTX 2080 Ti GPU. The most critical hyperparameters in ISPI are the weight coefficient λ and the regularization parameter τ. On all datasets, the choice of λ=0.5 or λ=0.7 is the most appropriate, so that the two actions (from the behavioral policy β and the reference policy π̅) can be appropriately weighted in the learning process. As mentioned in the main text, the choice of τ depends heavily on the characteristics of the dataset. For a high-quality dataset, τ should be larger so that learning stays close to imitation, and for a high-diversity dataset, τ should be smaller to make the whole learning process closer to pure RL. We find that training is more stable and better performance can be achieved on Antmaze when the critic learning rate is set to 1e-3. Also, since Antmaze is a sparse-reward domain, we set the discount factor to 0.995. Table <ref> gives our selection of hyperparameters τ and λ on different datasets. Other settings are given in Table <ref>. § COMPARISON OF TD3+BC AND ISPI According to the ablation study and the analysis in <cit.>, α is an important parameter for controlling the constraint strength. TD3+BC sets α to a constant value of 2.5 for each dataset, whereas ISPI chooses an appropriate τ from a set of candidate values. We note that the hyperparameter that regulates the balance between Q and the regularization in ISPI is τ, which can essentially be understood as the reciprocal of α in TD3+BC. Therefore, for the convenience of comparison, we treat the reciprocal of τ as the parameter α.
In this section, we set the α of TD3+BC to be consistent with that of ISPI in order to show that the performance improvement of ISPI mainly comes from combining the benefits of both behavior-regularized and in-sample algorithms. Further, we also compare ISPI with TD3+BC with a dynamically changing α <cit.>, which improves TD3+BC by a large margin, to show the superiority of ISPI. The selection of parameters is shown in Table <ref>. The results for TD3+BC (vanilla), TD3+BC (same α as ISPI), TD3+BC with dynamically changing α, and ISPI are shown in Table <ref>. Comparing the variants of TD3+BC with different α choices, it can be seen that changing α can indeed improve the performance of TD3+BC. However, compared with TD3+BC (same α), the performance of ISPI is significantly better, which demonstrates the effectiveness of ISPI's mechanism of iteratively refining the policy used for behavior regularization. § MORE EXPERIMENTAL RESULTS §.§ Effect of different actor numbers We also present the effect of using different numbers of actors in ISPI-C. We can conclude from Figure <ref> that increasing the actor number from 1 to 2 (i.e., introducing the reference policy) significantly improves the performance of the learning policy. Since further introducing reference policies derived from more actors does not bring significant benefits and entails significant resource consumption, we set the actor number in our method to two. §.§ Learning curves of ISPI The learning curves of ISPI on MuJoCo tasks are shown in Figure <ref>.
http://arxiv.org/abs/2306.08872v1
20230615060650
Neural models for Factual Inconsistency Classification with Explanations
[ "Tathagata Raha", "Mukund Choudhary", "Abhinav Menon", "Harshit Gupta", "KV Aditya Srivatsa", "Manish Gupta", "Vasudeva Varma" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Neural models for Factual Inconsistency Classification with Explanations Tathagata Raha1, Mukund Choudhary1, Abhinav Menon1, Harshit Gupta1, K V Aditya Srivatsa1, Manish Gupta1,2, Vasudeva Varma1 1IIIT-Hyderabad, India; 2Microsoft, Hyderabad, India {tathagata.raha,mukund.choudhary,abhinav.m,harshit.g,k.v.aditya}@research.iiit.ac.in [email protected], [email protected] Factual consistency is one of the most important requirements when editing high-quality documents. It is extremely important for automatic text generation systems like summarization, question answering, dialog modeling, and language modeling. Still, automated factual inconsistency detection is rather under-studied. Existing work has focused on (a) finding fake news keeping a knowledge base in context, or (b) detecting broad contradiction (as part of the natural language inference literature). However, there has been no work on detecting and explaining types of factual inconsistencies in text, without any knowledge base in context. In this paper, we leverage existing work in linguistics to formally define five types of factual inconsistencies. Based on this categorization, we contribute a novel dataset, FICLE (Factual Inconsistency CLassification with Explanation), with ∼8K samples where each sample consists of two sentences (claim and context) annotated with type and span of inconsistency. When the inconsistency relates to an entity type, it is labeled as well at two levels (coarse and fine-grained). Further, we leverage this dataset to train a pipeline of four neural models to predict inconsistency type with explanations, given a (claim, context) sentence pair. Explanations include the inconsistent claim fact triple, inconsistent context span, inconsistent claim component, and coarse and fine-grained inconsistent entity types. The proposed system first predicts inconsistent spans from claim and context, and then uses them to predict inconsistency types and inconsistent entity types (when the inconsistency is due to entities). We experiment with multiple Transformer-based natural language classification as well as generative models, and find that DeBERTa performs the best. Our proposed methods provide a weighted F1 of ∼87% for inconsistency type classification across the five classes. We make the code and dataset publicly available[<https://github.com/blitzprecision/FICLE>]. § INTRODUCTION Although Transformer-based natural language generation models have been shown to be state-of-the-art for several applications like summarization, dialogue generation, question answering, table-to-text, and machine translation, they suffer from several drawbacks, of which hallucinatory and inconsistent generation is the most critical <cit.>. Factual inconsistencies in generated text can lead to confusion and a lack of clarity, make the text appear unreliable and untrustworthy, and can create a sense of mistrust among readers. They can lead to inaccurate conclusions and interpretations, and diminish the overall quality of the text. One approach to tackle this problem is to train robust neural language generation models which produce text with high fidelity and fewer hallucinations <cit.>.
Another approach is to have human annotators post-check the generated text for inconsistencies. Checking all generated output manually is not scalable. Hence, automated factual inconsistency detection and explanations become crucial. Accordingly, there have been several studies in the past which focus on detection of false or fake content. Fake content detection studies <cit.> typically verify facts in claims with respect to an existing knowledge base. However, keeping the knowledge base up-to-date (freshness and completeness) is difficult. Accordingly, there have been other studies in the natural language inference (or textual entailment) community <cit.> where the broad goal is to predict entailment, contradiction or neither. More than a decade back, De Marneffe et al. <cit.> proposed the problem of fine-grained contradiction detection, but (1) they proposed a tiny dataset with 131 examples, (2) they did not propose any learning method, and (3) they did not attempt explanations like localization of inconsistency spans in claim and context. Hence, in this paper, we propose the novel problem of factual inconsistency classification with explanations (). Given a (claim, context) sentence pair, our goal is to predict inconsistency type and explanation (inconsistent claim fact triple, inconsistent context span, inconsistent claim component, coarse and fine-grained inconsistent entity types). Fig. <ref> shows an example of the task. Two recent studies are close to our work: e-SNLI <cit.> and TaxiNLI <cit.>. Unlike detailed structured explanation (including inconsistency localization spans in both claim and context) from our proposed system, e-SNLI <cit.> contains only an unstructured short sentence as an explanation. Unlike five types of inconsistencies detected along with explanations by our proposed system, TaxiNLI <cit.> provides a two-level categorization for the NLI task. Thus, TaxiNLI focuses on NLI and not on inconsistencies specifically. Table <ref> shows a comparison of our dataset with other closely related datasets. In this work, based on linguistic theories, we carefully devise a taxonomic categorization with five inconsistency types: simple, gradable, set-based, negation, taxonomic relations. First, we obtain English (claim, context) sentence pairs from the FEVER dataset <cit.> which have been labeled as contradiction. We get them manually labeled with inconsistency types and other explanations (as shown in Fig. <ref> by four annotators. Overall, the dataset contains 8055 samples labeled with five inconsistency types, 20 coarse inconsistent entity types and 60 fine-grained inconsistent entity types, whenever applicable. We leverage the contributed dataset to train a pipeline of four neural models to predict inconsistency type with explanations: M_1, M_2, M_3 and M_4. Given a (claim, context) sentence pair, M_1 predicts the inconsistent subject-relation-target fact triple ⟨ S,R,T⟩ in the claim and also the inconsistent span in the context. M_2 uses M_1's outputs to predict the inconsistency type and the inconsistent component (subject, relation or target) from the claim. M_3 uses the inconsistent context-span and inconsistent claim component to predict a coarse inconsistent entity type. M_4 leverages both M_3's inputs and outputs to predict fine-grained inconsistent entity type. 
Overall, the intuition behind this pipeline design is to first predict inconsistent spans from claim and context; and then use them to predict inconsistency types and inconsistent entity types (when inconsistency is due to entities). Fig. <ref> shows the overall system architecture for . We investigate effectiveness of multiple standard Transformer <cit.>-based natural language understanding (NLU) as well as natural language generation (NLG) models as architectures for models M_1, M_2, M_3 and M_4. Specifically, we experiment with models like BERT <cit.>, RoBERTa <cit.> and DeBERTa <cit.> which are popular for NLU tasks. We also experiment with T5 <cit.> and BART <cit.> which are popular in the NLG community. DeBERTa seemed to outperform other models for most of the sub-tasks. Our results show that while inconsistency type classification is relatively easy, accurately detecting context span is still challenging. Overall, in this work, we make the following main contributions. (1) We propose a novel problem of factual inconsistency detection with explanations given a (claim, context) sentence pair. (2) We contribute a novel dataset, , manually annotated with inconsistency type and five other forms of explanations. We make the dataset publicly availabledatafootnote. (3) We experiment with standard Transformer-based NLU and NLG models and propose a baseline pipeline for the task. (4) Our proposed pipeline provides a weighted F1 of ∼87% for inconsistency type classification; weighted F1 of ∼86% and ∼76% for coarse (20-class) and fine-grained (60-class) inconsistent entity-type prediction respectively; and an IoU of ∼94% and ∼65% for claim and context span detection respectively. § RELATED WORK Factual Inconsistency in Natural Language Generations: Popular natural language generation models have been found to generate hallucinatory and inconsistent text <cit.>. Krysinski et al. <cit.> and Cao et al. <cit.> found that around 30% of the summaries generated by state-of-the-art abstractive models were factually inconsistent. There are other summarization studies also which report factual inconsistency of generated summaries <cit.>. Similarly, several studies have pointed out semantic inaccuracy as a major problem with current natural language generation models for free-form text generation <cit.>, data-to-text <cit.>, question-answering <cit.>, dialogue modeling <cit.>, machine translation <cit.>, and news generation <cit.>. Several statistical (like PARENT) and model-based metrics have been proposed to quantify the level of hallucination. Multiple data-related methods and modeling and inference methods have been proposed for mitigating hallucination <cit.>, but their effectiveness is still limited. Hence, automated factual inconsistency detection is critical. Natural Language Inference: Natural language inference (NLI) is the task of determining whether a hypothesis is true (entailment), false (contradiction), or undetermined (neutral) given a premise. NLI is a fundamental problem in natural language understanding and has many applications such as question answering, information extraction, and text summarization. Approaches used for NLI include earlier symbolic and statistical approaches to more recent deep learning approaches <cit.>. There are several datasets and benchmarks for evaluating NLI models, such as the Stanford Natural Language Inference (SNLI) Corpus <cit.>, the Multi-Genre Natural Language Inference (MultiNLI) Corpus <cit.> and Adversarial NLI <cit.>. 
FEVER <cit.> is another dataset on a related problem of fact verification. Recently there has been work on providing explanations along with the classification label for NLI. e-SNLI <cit.> provides a one-sentence explanation aiming to answer the question: “Why is a pair of sentences in a relation of entailment, neutrality, or contradiction?” Annotators were also asked to highlight the words that they considered essential for the label. NILE <cit.> is a two stage model built on e-SNLI which first generates candidate explanations and then processes explanations to infer the task label. Thorne et al. <cit.> evaluate LIME <cit.> and Anchor explanations <cit.> to predict token annotations that explain the entailment relation in e-SNLI. LIAR-PLUS <cit.> contains political statements labeled as pants-fire, false, mostly-false, half-true, mostly-true, and true. The context and explanation is combined into a “extracted justification” paragraph in this dataset. Atanasova et al. <cit.> experiment with LIAR-PLUS dataset and find that jointly generating justification and predicting the class label together leads to best results. There has also been work on detailed categorization beyond just the two classes: contradiction and entailment. Contradiction <cit.> is a tiny dataset with only 131 examples that provides a taxonomy of 10 contradiction types. Recently, TaxiNLI <cit.> dataset has been proposed with 15 classes for detailed categorization with the entailment and not the contradiction category. Continuing this line of work, in this paper, we contribute a new dataset, , which associates every (claim, context) sentence pair with (1) an inconsistency type (out of five) and (2) detailed explanations (inconsistent span in claim and context, inconsistent claim component, coarse and fine-grained inconsistent entity types). § INCONSISTENCY TYPE CLASSIFICATION Factual inconsistencies in text can occur because of a number of different sentence constructions, some overt and others that are complex to discover even manually. We design a taxonomy of five inconsistency types following non-synonymous lexical relations classified by Saeed <cit.>. The book mentions the following kinds of antonyms: simple, gradable, reverses, converses and taxonomic sisters. To this taxonomy, we added two extra categories, negation and set-based, to capture the 's complexity. Also, we expanded the definition of taxonomic sisters to more relations, and hence rename it to taxonomic relations. Further, since we did not find many examples of reverses and converses in our dataset, we merged them with the simple inconsistency category. Overall, our dataset contains these five different inconsistency types. * Simple: A simple contradiction is a direct contradiction, where the negative of one implies the positive of the other in a pair like pass vs. fail. This also includes actions/ processes that can be reversed or have a reverse direction, like come vs. go and fill vs. empty. Pairs with alternate viewpoints like employer vs. employee and above vs. below are also included in this category. * Gradable: Gradable contradictions include adjectival and relative contradictions, where the positive of one, does not imply the negative of other in a pair like hot vs. cold, least vs. most, or periods of time etc. * Taxonomic relations: We include three kinds of relations in this type: (a) Pairs at the same taxonomic level in the language like red vs. blue which are placed parallel to each other under the English color adjectives hierarchy. 
(b) When a pair has a more general word (hypernym) and another more specific word which includes the meaning of the first word in the pair (hyponym) like giraffe (hypo) vs. animal (hyper). (c) Pairs with a part-whole relation like nose vs. face and button vs shirt. * Negation: This includes inconsistencies arising out of presence of explicit negation morphemes (e.g. not, except) or a finite verb negating an action (e.g. fail to do X, incapable of X-ing) etc. * Set-based: This includes inconsistent examples where an object contrasts with a list that it is not a part of (e.g. cat vs. bee, ant, wasp). § THE DATASET §.§ Dataset Curation and Pre-processing Our dataset is derived from the FEVER dataset <cit.> using the following processing steps. FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. Every sample in the FEVER dataset contains the claim sentence, evidence (or context) sentence from a Wikipedia URL, a type label (`supports', `refutes' or `not enough info'). Out of these, we leverage only the samples with `refutes' label to build our dataset. We propose a linguistically enriched dataset to help detect inconsistencies and explain them. To this end, the broad requirements are to locate where an inconsistency is present between a claim and a context, and to have a classification scheme for better explainability. §.§ Annotation Details To support detailed inconsistency explanations, we perform comprehensive annotations for each sample in the dataset. The annotations were done in two iterations. The first iteration focused on “syntactic oriented” annotations while the second iteration focused on “semantic oriented” annotations. The annotations were performed using the Label Studio annotation tool[<https://labelstud.io/>] by a group of four annotators (two of which are also authors). The annotators are well versed in English and are Computer Science Bachelors students with a specialization in computational linguistics, in the age group of 20–22 years. Detailed annotation guidelines are in annotationGuidelines.pdf heredatafootnote. Syntactic Oriented Annotations: In this annotation stage, the judges labeled the following syntactic fields per sample. Table <ref> shows examples of each of these fields. (1) Inconsistent Claim Fact Triple: A claim can contain multiple facts. The annotators identified the fact that is inconsistent with the context. Further, the annotators labeled the span of source (S), relation (R) and target (T) within the claim fact. Sometimes, e.g., in case of an intransitive verb, the target was empty. Further, for each of the S, R and T, the annotators also labeled head and modifier separately. The head indicates the main noun (for S and T) or the verb phrase (for R) while the modifier is phrase that serves to modify the meaning of the noun or the verb. (2) Inconsistent Context Span: A span marked in the context sentence which is inconsistent with the claim. (3) Inconsistent Claim Component: This can take six possible values depending on the part of the claim fact triple that is inconsistent with the context: Subject-Head, Subject-Modifier, Relation-Head, Relation-Modifier, Target-Head, Target-Modifier. Semantic Oriented Annotations: In this annotation stage, the annotators labeled the following semantic fields per sample. Table <ref> shows examples of each of these fields. 
(1) Inconsistency Type: Each sample is annotated with one of the five inconsistency types as discussed in Section <ref>. (2) Coarse Inconsistent Entity Type: When the inconsistency is because of an entity, the annotator also labeled one of the 20 coarse types for the entity causing the inconsistency. The types are action, animal, entertainment, gender, geography, identity, material, name, nationality, organization, others, politics, profession, quantity, reality, relationship, sentiment, sport, technology and time. (3) Fine-grained Inconsistent Entity Type: Further, when the inconsistency is because of an entity, the annotator also labeled one of the 60 fine-grained types for the entity causing the inconsistency. For inconsistency entity type detection, the annotations were performed in two iterations. In the first iteration, the annotators were allowed to annotate the categories (both at coarse and fine-grained level) freely without any limited category set. This was performed on 500 samples. The annotators then discussed and de-duplicated the category names. Some rare categories were merged with frequent ones. This led to a list of 20 coarse and 60 fine-grained entity types (including “others”). In the second iteration, annotators were asked to choose one of these categories. We measured inter-annotator agreement on 500 samples. For source, relation, target and inconsistent context spans, the intersection over union (IoU) was found to be 0.91, 0.83, 0.85 and 0.76 respectively. Further, the Kappa score was found to be 0.78, 0.71 and 0.67 for the inconsistency type, coarse inconsistent entity type and fine-grained inconsistent entity type respectively. §.§ Dataset Statistics The dataset consists of 8055 samples in English with five inconsistency types. The distribution across the five types is as follows: Taxonomic Relations (4842), Negation (1630), Set Based (642), Gradable (526) and Simple (415). There are six possible inconsistent claim components with distribution as follows: Target-Head (3960), Target-Modifier (1529), Relation-Head (951), Relation-Modifier (1534), Source-Head (45), Source-Modifier (36). The dataset contains 20 coarse inconsistent entity types as shown in Fig. <ref>. Further, these are sub-divided into 60 fine-grained entity types. Table <ref> shows average sizes of various fields averaged across samples in the dataset. The dataset was divided into train, valid and test splits in the ratio of 80:10:10. § NEURAL METHODS FOR FACTUAL INCONSISTENCY CLASSIFICATION WITH EXPLANATIONS We leverage the dataset to train models for factual inconsistency classification with explanations. Specifically, given the claim and context sentence, our system does predictions in the following stages: (A) Predict Inconsistent Claim Fact Triple (S,R,T) and Inconsistent Context Span, (B) Predict Inconsistency Type and Inconsistent Claim Component, (C) Predict Coarse and Fine-grained Inconsistent Entity Type. Overall, the system architecture consists of a pipeline of four neural models to predict inconsistency type with explanations: M_1, M_2, M_3 and M_4, and is illustrated in Fig. <ref>. We discuss details of the three stages and the pipeline in this section. Model Architectures We experiment with five pretrained models of which two are natural language generation (NLG) models. Specifically, we finetune Transformer <cit.> encoder based models like BERT <cit.>, RoBERTa <cit.> and DeBERTa <cit.>. We also use two NLG models: BART <cit.> and T5 <cit.> which are popular in the NLG community. 
BERT (Bidirectional Encoder Representations from Transformers) <cit.> essentially is a transformer encoder with 12 layers, 12 attention heads and 768 dimensions. We used the pre-trained model which has been trained on Books Corpus and Wikipedia using the MLM (masked language model) and the next sentence prediction (NSP) loss functions. RoBERTa <cit.> is a robustly optimized method for pretraining natural language processing (NLP) systems that improves on BERT. RoBERTa was trained with 160GB of text, trained for larger number of iterations up to 500K with batch sizes of 8K and a larger byte-pair encoding (BPE) vocabulary of 50K subword units, without NSP loss. DeBERTa <cit.> is trained using a special attention mechanism where content and position embeddings are disentangled. It also has an enhanced mask decoder which leverages absolute word positions effectively. BART <cit.> is a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. T5 <cit.> is also a Transformer encoder-decoder model pretrained on Colossal Clean Crawled Corpus, and models all NLP tasks in generative form. When encoding input or output for these models, we prepend various semantic units using special tokens like ⟨claim⟩, ⟨context⟩, ⟨source⟩, ⟨relation⟩, ⟨target⟩, ⟨contextSpan⟩, ⟨claimComponent⟩, ⟨type⟩, ⟨coarseEntityType⟩ and ⟨fineEntityType⟩. NLG models (BART and T5) generate the inconsistency type and all explanations, and are trained using cross entropy loss. For NLU models (BERT, RoBERTa, DeBERTa), we prepend input with a [CLS] token and use its semantic representation from the last layer with a dense layer to predict inconsistency type, inconsistent claim component, and entity types with categorical cross entropy loss. With NLU models, source, relation, target, and context span are predicted using start and end token classifiers (using cross entropy loss) as usually done in the question answering literature <cit.>. Stage A: Predict Inconsistent Spans In this stage, we first train models to predict source, relation and target by passing the claim sentence as input to the models. Further, to predict inconsistent context span, we experiment with four different methods as follows. (1) Structure-ignorant: The input is claim and context sentence. The aim is to directly predict inconsistent context span ignoring the “source, relation, target” structure of the claim. (2) Two-step: First step takes claim and context sentences as input, and predicts source, relation and target (SRT). Second step augments source, relation and target to the input along with claim and context, and predicts the inconsistent context span. (3) Multi-task: The input is claim and context sentence. The goal is to jointly predict source, relation, target and inconsistent context span. (4) Oracle-structure: The input is claim and context sentence, and ground truth (source, relation and target). These are all used together to predict inconsistent context span. Stage B: Predict Inconsistency Type and Claim Component This stage assumes that (1) SRT from claim and (2) inconsistent context span have already been predicted. Thus, in this stage, the input is claim, context, predicted SRT and predicted inconsistent context span. Using these inputs, to predict inconsistency type and inconsistent claim component, we experiment with three different methods as follows. 
(1) Individual: Predict inconsistency type and inconsistent claim component separately. (2) Two-step: First step predicts inconsistent claim component. Second step augments the predicted inconsistent claim component to the input, and predicts inconsistency type. (3) Multi-task: Jointly predict inconsistency type and inconsistent claim component in a multi-task learning setup. Stage C: Predict Inconsistent Entity Types To find inconsistent entity types, we build several models each of which take two main inputs: inconsistent context span and the span from the claim corresponding to the inconsistent claim component. We experiment with the following different models. (1) Individual: Predict coarse and fine-grained inconsistent entity type separately. (2) Two-step: First step predicts coarse inconsistent entity type. Second step augments the predicted coarse inconsistent entity type to the input, and predicts fine-grained type. Further, we also attempt to leverage semantics from entity class names. Hence, we use the NLU models (BERT, RoBERTa, DeBERTa) to obtain embeddings for entity class names, and train NLU models to predict the class name which is most similar to semantic representation (of the [CLS] token) of the input. We use cosine embedding loss to train these models. Specifically, using class (i.e., entity type) embeddings, we train the following models. Note that we cannot train NLG models using class embeddings; thus we perform this experiment using NLU models only. (1) Individual Embedding: Predict coarse and fine-grained inconsistent entity type separately using entity type embeddings. (2) Two-step Embedding: First step predicts coarse inconsistent entity type using class embeddings. Second step augments the predicted coarse inconsistent entity type to the input, and predicts fine-grained type using class embeddings. (3) Two-step Mix: First step predicts coarse inconsistent entity type using class embeddings. Second step augments the predicted coarse inconsistent entity type to the input, and predicts fine-grained type using typical multi-class classification without class embeddings. After experimenting with various model choices for the three stages described in this section, we find that the configuration described in Fig. <ref> provides best results. We also attempted other designs like (1) predicting all outputs (inconsistency type and all explanations) jointly as a 6-task setting using just claim and context as input, (2) identifying claim component only as S, R or T rather than heads versus modifiers. However, these alternate designs did not lead to better results. § EXPERIMENTS AND RESULTS For prediction of spans like source, relation, target, and inconsistent context span, we use exact match (EM) and intersection over union (IoU) metrics. EM is a number from 0 to 1 that specifies the amount of overlap between the predicted and ground truth span in terms of tokens. If the characters of the model's prediction exactly match the characters of ground truth span, EM = 1, otherwise EM = 0. Similarly, IoU measures intersection over union in terms of tokens. For classification tasks like inconsistency type prediction as well as coarse and fine-grained inconsistent entity type prediction, we use metrics like accuracy and weighted F1. Since factual inconsistency classification is a novel task, there are no existing baseline methods to compare with. 
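As a concrete reference for the two span metrics just described, the sketch below computes token-level exact match and IoU; whitespace tokenization is our simplifying assumption, since the exact tokenizer used for scoring is not specified here.

```python
# Token-level exact match (EM) and intersection-over-union (IoU) for predicted spans.
def exact_match(pred_span: str, gold_span: str) -> int:
    return int(pred_span.strip().split() == gold_span.strip().split())

def iou(pred_span: str, gold_span: str) -> float:
    pred, gold = set(pred_span.strip().split()), set(gold_span.strip().split())
    if not pred and not gold:
        return 1.0
    return len(pred & gold) / len(pred | gold)

print(exact_match("in the United States", "in the United States"))      # 1
print(round(iou("born in the United States", "the United States"), 2))  # 0.6
```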
Source, Relation, Target and Inconsistent Context Span Prediction: Table <ref> shows results for source, relation and target prediction from claim sentences. The table shows that T5 works best except for prediction of relation and target using the exact match metric. Further, Table <ref> shows that surprisingly structure ignorant method is slightly better than the two-step method. Oracle method with DeBERTa expectedly is the best. NLG models (BART and T5) perform much worse compared to NLU models for context span prediction. Lastly, we show results of jointly predicting source, relation, target and inconsistent context span in Table <ref>. The table shows while T5 and BART are better at predicting source, relation and target, DeBERTa is a clear winner in predicting the inconsistent context span. Inconsistency Type and Inconsistent Claim Component Prediction: Tables <ref> and <ref> show the results for the inconsistency type and inconsistent claim component prediction. Note that the two problems are 5-class and 6-class classification respectively. We observe that joint multi-task model outperforms the other two methods. Also, DeBERTa is the best model across all settings. For this best model, the F1 scores for the inconsistency types are as follows: Taxonomic Relations (0.92), Negation (0.86), Set Based (0.65), Gradable (0.78) and Simple (0.81). Inconsistent Entity Type Prediction: Tables <ref> and <ref> show accuracy and weighted F1 for coarse and fine-grained inconsistent entity type prediction respectively. We make the following observations from these tables: (1) DeBERTa outperforms all other models for both the predictions. (2) For coarse inconsistent entity type prediction, the embedding based approach works better than the typical classification approach. This is because there are rich semantics in the entity class names that are effectively leveraged by the embedding based approach. (3) For fine-grained inconsistent entity type prediction, two-step method is better than individual method both with and without embeddings. (4) The two-step mix method where we use embeddings based method to predict coarse inconsistent entity type and then usual 60-class classification for fine-grained types performs the best. Qualitative Analysis To further understand where our model goes wrong, we show the confusion matrix for inconsistency type prediction for our best model in Table <ref>. We observe that the model labels many set-based examples as `taxonomic relations' leading to poor F1 for the set-based class. In general most of the confusion is between `taxonomic relations' and other classes. Amongst the coarse entity types, we found the F1 to be highest for time, action, quantity, nationality and geography entity types, and lowest for animal, relationship, gender, sentiment and technology entity types. Further, for inconsistency spans in the context, we observe that the average length of accurate predictions (3.16) is much smaller than inaccurate predictions (8.54), comparing the lengths of ground truth spans. Further, for inaccurate predictions, we observe that as the length of the inconsistency span increases, the coverage of ground truth tokens by the predicted tokens, decreases on an average. Further, we categorized inaccurate span predictions into 4 buckets (additive, reordered, changed and subtractive). 
Additive implies the prediction contains more terms than the ground truth, reordered means the same terms but reordered, changed means the model generated some new terms, and subtractive means the prediction misses terms present in the ground truth. We found that ∼91% were of the subtractive type, indicating that our inconsistency span predictor model is too terse and can be improved by reducing the sampling probability of the end-of-sequence token. Hyper-parameters for Reproducibility: The experiments were run on a machine with four GEFORCE RTX 2080 Ti GPUs. We used a batch size of 16 and the AdamW optimizer <cit.>, and trained for 5 epochs for all models. We used the following models: bert-base-uncased, roberta-base, microsoft/deberta-base, facebook/bart-base, and t5-small. The learning rate was set to 1e-4 for BART and T5, and to 1e-5 for the other models. More details are available in the accompanying code and data. § CONCLUSION AND FUTURE WORK In this paper, we investigated the problem of detecting and explaining types of factual inconsistencies in text. We contributed a new dataset, FICLE, with ∼8K samples carrying detailed inconsistency labels for (claim, context) pairs. We experimented with multiple natural language understanding and generation models for the problem. We found that a pipeline of four models, which predicts inconsistency spans in the claim and context, followed by inconsistency type prediction and finally inconsistent entity type prediction, works best. We also observed that DeBERTa led to the best results. In the future, we plan to extend this work to multi-lingual scenarios. We also plan to extend this work to perform inconsistency detection and localization across multiple sentences given a paragraph. § ETHICAL STATEMENT In this work, we derived a dataset from the FEVER dataset[<https://fever.ai/dataset/fever.html>]. Data annotations in FEVER incorporate material from Wikipedia, which is licensed pursuant to the Wikipedia Copyright Policy. These annotations are made available under the license terms described on the applicable Wikipedia article pages, or, where Wikipedia license terms are unavailable, under the Creative Commons Attribution-ShareAlike License (version 3.0), available at <http://creativecommons.org/licenses/by-sa/3.0/>. Thus, we made use of the dataset in accordance with its appropriate usage terms. The dataset does not contain any personally identifiable information. Details of the manual annotations are explained in Section <ref> as well as in annotationGuidelines.pdf at <https://github.com/blitzprecision/FICLE>.
http://arxiv.org/abs/2306.01448v1
20230602111056
Deterministic Approximation of a Stochastic Imitation Dynamics with Memory
[ "Ozgur Aydogmus", "Yun Kang" ]
math.DS
[ "math.DS", "math.PR" ]
http://arxiv.org/abs/2306.09890v2
20230616145917
Studying Generalization on Memory-Based Methods in Continual Learning
[ "Felipe del Rio", "Julio Hurtado", "Cristian Buc", "Alvaro Soto", "Vincenzo Lomonaco" ]
cs.LG
[ "cs.LG" ]
[ Studying Generalization on Memory-Based Methods in Continual Learning equal* Felipe del Riopuc Julio Hurtadounipi,cenia Cristian Buccenia Alvaro Sotopuc,cenia Vincenzo Lomonacounipi pucDepartment of Computer Science, Pontificia Universidad Católica de Chile, Santiago, Chile unipiUniversità di Pisa, Pisa, Italy ceniaCentro Nacional de Inteligencia Artificial (CENIA), Santiago, Chile Felipe del [email protected] Machine Learning, ICML 0.3in ] One of the objectives of Continual Learning is to learn new concepts continually over a stream of experiences and at the same time avoid catastrophic forgetting. To mitigate complete knowledge overwriting, memory-based methods store a percentage of previous data distributions to be used during training. Although these methods produce good results, few studies have tested their out-of-distribution generalization properties, as well as whether these methods overfit the replay memory. In this work, we show that although these methods can help in traditional in-distribution generalization, they can strongly impair out-of-distribution generalization by learning spurious features and correlations. Using a controlled environment, the Synbol benchmark generator <cit.>, we demonstrate that this lack of out-of-distribution generalization mainly occurs in the linear classifier. § INTRODUCTION Continual Learning (CL) aims to develop models and training procedures capable of learning continuously through a stream of data <cit.>. As opposed to the well-studied static setting of feeding the model with independent and identically distributed (IID) data, in CL, each experience has its distribution with a possible drift among tasks. Given this distribution drift, one of the main challenges of CL is catastrophic forgetting <cit.>. The latter refers to the process by which a model forgets to solve previously learned tasks when new experiences come in. In this context, replay-based methods provide a powerful and straightforward tool to counter catastrophic forgetting by storing and revisiting a subset of samples from previously learned tasks. These methods have achieved state-of-the-art results in a wide array of continual learning scenarios and benchmarks <cit.>. Despite successful results, previous works have argued that memory-based methods are prone to overfitting <cit.>. By only storing a subset of previous distributions, the model only reinforces concepts and ideas that are present in the buffer, depending on how much previous distributions are represented. To reinforce useful concepts, the buffer should accurately represent the whole training distribution. However, if the buffer represents only a small percentage of the training distribution, it will start learning spurious correlations and will lose its generalization capabilities. We argue that compositionality is a critical factor for CL. However, spurious correlations in the data can lead the model to learn incorrect compositions of specific concepts, thus impairing generalization. In this paper, we show that, even if a model can learn to identify useful concepts to make a proper classification, the classifier will learn shortcuts that hurt out-of-distribution generalization (OOD); shortcuts that help increase performance in the IID dataset and are amplified by memory-based methods. However, as a result, they increase the generalization gap between IID and OOD examples. In this paper, we develop a controlled setting that tests out-of-distribution generalization beyond the training distribution. 
We evaluate a basic CNN model on a set of examples that depart from the training distribution by including unseen combinations of latent and target variables. And we show that replay falters in this setting, giving further evidence that replay-based methods have a toll on generalization capabilities not seen on traditional machine learning benchmarks that test only the IID test set. Here we propose an approach to test how OOD and spurious correlations affect memory-based methods and hope our results influence future studies to focus on improving the performance of CL in OOD data. § RELATED WORK Memory-based methods address catastrophic forgetting by incorporating data from previous tasks into the training process <cit.>. Most methods save samples from previous experiences to be used in the current task <cit.>, hoping these can be a good representation of past distributions and maintain performance. Previous research suggests that rehearsal would result in over-fitting, affecting generalization <cit.>. However, other researchers suggest the opposite <cit.>, making this issue still an open question <cit.>. One way to tackle the problem of generalization in machine learning is compositionality <cit.>. Compositional representations refer to decomposing concepts in their sub-parts. Learning these representations is useful, as these can be recombined to create novel concepts or make sense of new experiences <cit.>. Previous works have used compositionality to this effect <cit.>. In CL, uses explicit compositions of neural modules as a way to reuse knowledge from previous tasks to solve new ones and show this approach increases the model's generalization capabilities. § EXPERIMENTAL SETTING The common practice in CL dictates that, for each experience, the distribution of training and test sets follows a similar distribution, and changes in the distributions take place with new experiences. However, in an uncontrolled setting, changes in the distribution of a known experience can occur as it is almost impossible to completely represent the full distribution with a subset of samples, e.g., sample selection bias <cit.>. To tackle this issue, we aim for testing the generalization capabilities of a model in OOD set together with the IID test set. This new set should present similar characteristics to the training set, but with a systematically different distribution, e.g., leaving out some combination of concepts. In CL, we assume that each experience t of the sequence follows a distribution P_t(y_i | x_i, z_i), where y_i is the label, x_i is the input, and z_i are a group of characteristics presented in the input. In a classification task, the objective is to minimize the function ℒ_t (f_Θ(x_i^t), y_i^t) for every experience of the sequence, meaning that the model f must learn parameters Θ that find relevant characteristics from z_i that generalize to solve the current and future task using only information from the input. However, as it has been shown in previous studies <cit.>, it is common that the model uses shortcuts and learns spurious features that help the model solve the task without generalizing to OOD samples. To test this, we follow a strategy used in systematic and compositional generalization research <cit.>, and propose the creation of an OOD test set to quantify the ability of the model to generalize to examples that drift from the training distribution. To test OOD generalization, we must know which attributes z are useful for solving a particular task. 
Ideally, a model should be able to correctly identify relevant features z_g and irrelevant features z_b. In this paper, we will assume that a model with proper OOD generalization properties can correctly identify those relevant features and combine them to solve the task. In contrast, a model relying on spurious correlations is one that can correctly encode relevant features but incorrectly extracts this information or uses irrelevant features for solving the task. We create a dataset where we control every characteristic z that generates an image. We identify one of these characteristics as the label y, a group of relevant features to solve the task z_g, and the rest as irrelevant features. In order for the model to be effective, it must identify features z_g and combine them to correctly identify y. We expect that even with missing combinations of y, z_g from the training set, a robust model that is able to identify y and z_g independently can extrapolate its knowledge correctly to solve the task despite the absent combinations. For the CL scenario, we create the sequence following a domain incremental setup <cit.>, each experience contains every class and a different domain distribution. Each experience will have a disjoint group of features z_g present. For every experience t, t_1, t_2 ∈{1,2,...,E}, z_t={ẑ_0,ẑ_1,...,ẑ_|z_t|}, where z_t_1∩ z_t_2 = ∅, ∀t_1≠t_2. We create a test set that follows the same distribution P_t(y_i | x_i, z_i) of each experience. In addition to the test set, we built a second set, which we call a generalization set, to test the OOD generalization capabilities of the model. This is achieved by holding out a number of combinations of features-label tuples (z_j,y_j) ∈ G_t from being given to the model during training P_t(z_j,y_j) = 0, ∀ t ∈ E. An example of the training and testing scenario generated can be seen in Fig. <ref>. § EXPERIMENTAL SET-UP §.§ Benchmark We leverage the Synbol benchmark framework <cit.> to quickly create a synthetic dataset composed of images of different characters with various sizes, positions, fonts, colors, and backgrounds. Figure <ref> shows examples of the characters generated. This benchmark allows us to access the relevant features or latent variables z used to generate each image. Thus, allowing us the flexibility to create a task like the one described in the previous section. We use the font prediction task for our experiments, i.e., the font is the task-relevant factor. For the creation of experiences, we use English non-diacritic characters as the feature z to partition the dataset, this choice reduces the similarity between domains. We sample 10,000 images in total, made from 10 fonts and 10 characters. These images are used to create the splits sets, namely train and, IID and OOD test. We create five different CL scenarios by dividing the dataset into different numbers of tasks T, namely T ∈{1, 2, 4, 5, 10}. When T=1, the scenario consists of the static setting and we use it for comparison. Each task t is created by assigning a number of characters to it and assigning all training examples with that character to that task. For example, when T=5, since we only have 10 different characters, each experience contains 2 unique characters. The OOD test set is produced by a distribution of samples that is not seen during training. For this, we set aside one font per character for our generalization test set, producing 10 font/character combinations. 
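As a minimal sketch of this split (the per-character hold-out rule, placeholder labels, and random seed below are illustrative assumptions, not the exact sampling procedure), the following Python snippet routes every image whose (character, font) combination is held out into the OOD generalization set:

import random

fonts = [f"font_{i}" for i in range(10)]   # 10 fonts
chars = list("abcdefghij")                 # 10 English non-diacritic characters (placeholder labels)

random.seed(0)
# One held-out font per character -> 10 unseen (character, font) combinations.
held_out = {c: random.choice(fonts) for c in chars}

def route(example):
    """example = (char, font, image); send held-out combinations to the OOD set."""
    char, font, _ = example
    return "ood_test" if held_out[char] == font else "train_or_iid_test"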
This held-out split helps us understand how well a model that composes information about the character and the font extrapolates to unseen combinations. §.§ Memory Replay For the CL experiments, we use the Avalanche library <cit.> and the Experience Replay <cit.> method. Because we want to test generalization capabilities, we only use this simple method with a reservoir sampling memory buffer <cit.>. To compare IID versus OOD generalization, we vary the memory size over 50, 100, 250, 500, and 1000 samples. To run the experiments, we use a 6-layer CNN, which we optimize using the Adam optimization algorithm <cit.> with a learning rate of 4 · 10^-4 and a cross-entropy classification loss. We performed a hyper-parameter search such that we were able to replicate results from the original paper <cit.>. § RESULTS §.§ Continual Learning Generalization We first assess how generalization is affected when we use a memory-based method in different CL scenarios. As a baseline, we train the model using the entire training set, which we call static training, and we observe a gap between the accuracy achieved in the static setting and at different memory sizes in Figure <ref>. As expected, a proper representation of the training distribution is needed to achieve similar results; otherwise the model overfits to a subset of the training set. If we look at the gap between the IID and OOD test sets, we see that as the memory size increases, the IID performance increases until it matches static training. However, OOD performance remains lower, displaying a generalization gap between the two, as shown in Figure <ref> (right). Also, longer training sequences need larger memories to match the performance of static training. This suggests that memory replay lacks the means to generalize to OOD data and concentrates its performance on known distributions. §.§ Representation Inspection To understand where the spurious correlations are being learned, we test the representations learned for features known to be relevant to the task. We use a linear probe, keeping the model fixed and training the probe to detect these features (a minimal probing sketch is given at the end of this subsection). We show that the learned representations carry the information necessary to solve the main task, since a probe trained with both IID and OOD data obtains good performance on both the IID and OOD test sets. This is shown in Figure <ref> in light blue and blue, respectively. This suggests that it is the classifier that is unable to make correct use of this information to solve the task. However, when training only with IID data, there is still a large gap between the IID and OOD test sets, as shown in Figure <ref>. This gap is similar to the one presented in the previous section, and the fact that the replay memory size seems not to affect it suggests that replay produces its effect by reducing overfitting to the IID set in the classification layer, rather than in the representation layers of the model. Similar to the previous experiments, we can test how much information about the characters the representation encodes, shown in Figure <ref> for the IID (light green) and OOD (green) test sets. The behavior is similar to the font probe: there is a gap between IID and OOD performance when training with IID data only. Although smaller, a clear generalization gap exists. This suggests the model encodes information from the domain, but does not spuriously conflate these two features.
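The probing procedure can be summarized in a few lines. The sketch below freezes the encoder and trains a small probe on its features to predict a latent factor (font or character); the probe loosely follows the appendix description (two-layer perceptron, SGD with momentum), while the ReLU nonlinearity, hidden size, learning rate, and epoch count shown here are illustrative assumptions.

import torch
import torch.nn as nn

def train_probe(encoder, loader, feat_dim, n_classes, hidden=128, lr=1e-2, epochs=10):
    encoder.eval()                                   # representations stay fixed
    probe = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, n_classes))
    opt = torch.optim.SGD(probe.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = encoder(x)                   # frozen features
            loss = loss_fn(probe(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return probe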
§.§ Testing for Flat Modeling Looking at the previous results, we can see that the model produces features that could support compositional generalization, but it does not exploit them. Our hypothesis is that the classifier is responsible, learning spurious relationships between character and font in such a way that compositionality between features is not captured. To test this, we flatten the problem and treat each font-character combination as an independent class. In this way, it is possible to verify whether the model is actually learning to represent the combination, or whether it represents each concept separately so that the classifier can then combine them. In Figures <ref> and <ref> we observe that the model is able to solve the flattened task, but only when training with the corresponding combinations. When training only with the original training set, the accuracy on the OOD test set is zero. However, when training with both sets, IID and OOD, the accuracy on the OOD test set is almost 100% (light red and red, respectively). § CONCLUSION AND FUTURE WORK Memory-based methods have shown high performance in various CL scenarios. However, the generalizability of these models is rarely tested. In this work, we show that these methods generalize only within the limited context provided by the buffer, and only with a sufficiently large memory; for OOD elements the performance is low. We believe it is essential to expand these types of studies to better comprehend these techniques and then propose alternatives that can generalize out-of-distribution. In future work, we seek to propose new methods that make better use of learned representations to increase their ability to generalize. § ACKNOWLEDGEMENT Research partly funded by National Center for Artificial Intelligence CENIA, FB210017, BASAL, ANID, and by PNRR - M4C2 - Investimento 1.3, Partenariato Esteso PE00000013 - "FAIR - Future Artificial Intelligence Research" - Spoke 1 "Human-centered AI", funded by the European Commission under the NextGeneration EU programme. § APPENDIX §.§ Character representation In the results, we show that the model obtains good performance when training a linear probe to identify the character. This shows that the model has the information within its representation to solve the task. However, since we work in a continual setting, it is important to know when the model obtains the relevant information to solve the task. In Figure <ref>, we can observe that the model is capable of accumulating knowledge. Even though the performance without memory is worse, it is important to note that the model achieves good performance on both the IID and OOD tests. §.§ Probing Methodology Using the representations at various stages of the model, we trained a linear probe to test the information stored in them. We used a linear two-layer perceptron and the SGD with momentum training algorithm. We used a grid search over the hidden layer size, { 16, 32, 64, 128, 256 }, and the learning rate, { 10^-1, 5·10^-2, 10^-2, 5·10^-3}, for model selection. We use the probe to understand how much information exists in the model representation when trained in different scenarios. Figures <ref> and <ref> show a summary of the results; below we report each result in more detail. Figures <ref> and <ref> show the performance of applying the probe for the font task with the training distribution, IID, and the complete distribution, IID + OOD, respectively.
Figures <ref> and <ref> show the performance of applying the probe for the character task with the training distribution, IID, and the complete distribution, IID + OOD, respectively.
http://arxiv.org/abs/2306.04492v1
20230607150210
A Mirror Descent Perspective on Classical and Quantum Blahut-Arimoto Algorithms
[ "Kerry He", "James Saunderson", "Hamza Fawzi" ]
cs.IT
[ "cs.IT", "math.IT", "math.OC", "quant-ph" ]
A Mirror Descent Perspective on Classical and Quantum Blahut-Arimoto Algorithms Kerry HemonashDepartment of Electrical and Computer System Engineering, Monash University, Clayton VIC 3800, Australia. <kerry.he1, [email protected]> James Saundersonmonash Hamza FawzicambridgeDepartment of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB3 0WA, United Kingdom. <[email protected]> ====================================================================================================================================================================================================================================================================================================================================================================== The Blahut-Arimoto algorithm is a well known method to compute classical channel capacities and rate-distortion functions. Recent works have extended this algorithm to compute various quantum analogs of these quantities. In this paper, we show how these Blahut-Arimoto algorithms are special instances of mirror descent, which is a well-studied generalization of gradient descent for constrained convex optimization. Using new convex analysis tools, we show how relative smoothness and strong convexity analysis recovers known sublinear and linear convergence rates for Blahut-Arimoto algorithms. This mirror descent viewpoint allows us to derive related algorithms with similar convergence guarantees to solve problems in information theory for which Blahut-Arimoto-type algorithms are not directly applicable. We apply this framework to compute energy-constrained classical and quantum channel capacities, classical and quantum rate-distortion functions, and approximations of the relative entropy of entanglement, all with provable convergence guarantees. § INTRODUCTION Channel capacities, which quantify the maximum rate at which information can be transmitted through a noisy channel, are typically defined using variational expressions that do not admit closed-form solutions. Many important capacities also have the property of being defined as the solutions of convex optimization problems, and therefore convex optimization techniques can be used to efficiently compute these quantities. One of the most well-known algorithms for estimating classical channel capacities is the Blahut-Arimoto algorithm <cit.>, which is derived from an alternating optimization approach. More recently, several works have extended this method to compute various quantum channel capacities <cit.>. Apart from these Blahut-Arimoto-style approaches, other algorithms proposed to solve for quantum channel capacities include <cit.>. Alternatively, it was shown in <cit.> how certain quantum channel capacities could be formulated as instances of the more general class of quantum relative entropy programs, however there are currently no practical algorithms able to solve large-scale problem instances of this class. Being tailored towards specific applications in information theory and being first devised when modern first-order optimization theory was still in its infancy, it is not obvious how Blahut-Arimoto algorithms are related to more standard optimization algorithms. In <cit.>, it was shown how the Blahut-Arimoto algorithm for classical channel capacities can be interpreted as mirror descent, which is a well-studied type of first-order Bregman proximal method <cit.>, and often regarded as a generalization of projected gradient descent. 
However, at the time, the generalized convergence analysis associated with mirror descent was unable to be applied to the classical channel capacity problem. Traditionally, convergence analyses for first-order methods require the objective function to have a Lipschitz-continuous gradient. However, many information measures do not satisfy these assumptions. To illustrate this, consider the univariate negative entropy function f(x)=xlog(x), which is often used in the construction of information measures. Clearly, the magnitude of both the first and second derivatives are unbounded as x→0^+, and therefore neither f nor its gradient are Lipschitz-continuous. Recently, notions of relative smoothness and strong convexity have been proposed by <cit.> which extends these convexity properties to be measured with respect to another arbitrary convex function. These assumptions can be applied to a wider class of functions, and have been used to derive generalized convergence rates for Bregman proximal methods including mirror descent. Notably, we will show how this relative convexity analysis can be conveniently applied to several important problems in both classical and quantum information theory. Several works have applied mirror descent to solve other problems in quantum information theory, including quantum state tomography <cit.> and minimization of quantum Rényi divergences <cit.>. However, neither of these works establish convergence rates. This mirror descent interpretation is powerful not only from a theoretical point of view, but also practically as it allows us to connect to many existing tools in first-order optimization literature. A limitation with current Blahut-Arimoto algorithms is that they are unable to effectively deal with general linear constraints, which is required to compute energy-constrained channel capacities or rate-distortion functions. Existing methods typically dualize the constraints by minimizing the Lagrangian and treating the Lagrange multipliers as constants <cit.>, however it is not clear how to choose the Lagrange multipliers to solve for a particular set of constraints. Other works require repeatedly solving a sequence of inner or outer subproblems <cit.>, which can be computationally inefficient. Instead, a natural extension of mirror descent to account for general linear constraints is to use a primal-dual hybrid gradient (PDHG) method <cit.>. This algorithm is a variation of mirror descent which solves saddle-point problems by using alternating mirror descent and ascent steps, and can be applied to linearly constrained convex optimization problems by simultaneously solving for the optimal primal and dual variables of a Lagrangian function. §.§ Main results In this paper, we show how classical <cit.> and quantum <cit.> Blahut-Arimoto algorithms are a specific instance of mirror descent using an entropy kernel function, applied to solve a particular class of optimization problems. These connections between these two algorithms are formally presented in Theorem <ref>. This improves upon previous works <cit.> twofold. First, we extend the connections between the classical Blahut-Arimoto algorithm and mirror descent to the broader class of quantum Blahut-Arimoto algorithms. Second, we augment the interpretation by making connections between known convergence rates of Blahut-Arimoto algorithms with the new relative smoothness and strong convexity analysis <cit.>. 
Using this mirror descent interpretation of Blahut-Arimoto algorithms, we take advantage of this more general framework to derive algorithms to solve additional problems in information theory. We show how PDHG allows us to solve problems with general linear constraints by performing an additional mirror ascent step to update the Lagrange multipliers every iteration. Additionally, we show how ergodic sublinear convergence of PDHG can be established using relative smoothness analysis. Finally, by using PDHG with different kernel functions, we derive new methods to solve problems which Blahut-Arimoto algorithms have previously not been used to solve, or were unable to efficiently solve. These problems include energy-constrained classical and quantum channel capacities, classical and quantum rate-distortion functions, and the relative entropy of resource. § PRELIMINARIES §.§ Notation Let ℝ̅ℝ∪{+∞} be the extended real numbers, ℕ be the set of positive integers (excluding zero), 𝕍 represent a finite-dimensional inner product space, ℍ^n represent the set of n× n Hermitian matrices with trace inner product XY = [X^† Y] where X^† is the adjoint of X, and ≼ denote the semidefinite order for Hermitian matrices. Also, let ℝ^n_+ denote the non-negative orthant, ℍ^n_+ the set of positive semidefinite Hermitian matrices, and ℝ^n_++ and ℍ^n_++ denote the respective interiors. We denote the n-dimensional probability simplex as Δ_n {p ∈ℝ^n_+ : ∑_i=1^n p_i = 1 }, and set of n× m column stochastic matrices as 𝒬_n,m{ Q∈ℝ^n× m_+ : ∑_i=1^n Q_ij = 1, ∀ j=1,…,m }. We use and to denote the relative interior (see e.g. <cit.>) of a set, and domain of a function respectively. On an n-dimensional Hilbert space ℋ≅ℂ^n, we denote the set of bounded self-adjoint linear operators as ℬ(ℋ) ≅ℍ^n, the set of bounded positive self-adjoint linear operators as ℬ(ℋ)_+ ≅ℍ^n_+, and the set of density matrices as 𝒟(ℋ) {ρ∈ℬ(ℋ)_+ : [ρ] = 1}. Where required, we will use subscripts to clarify that a quantum state exists in a particular system. For instance, we write ρ_A to indicate the density matrix is in system ℋ_A. For a bipartite system ρ_AB∈𝒟(ℋ_A⊗ℋ_B), the partial trace over ℋ_A (see e.g., <cit.>) is denoted _A(ρ_AB). The set of completely positive trace preserving (CPTP) quantum channels (see e.g. <cit.>) with input system ℋ_A and output system ℋ_B is denoted as Φ(ℋ_A, ℋ_B). For a quantum channel 𝒩∈Φ(ℋ_A, ℋ_B), the Stinespring representation is 𝒩(ρ)=_E(Uρ U^†) for some isometry U: ℋ_A →ℋ_B ⊗ℋ_E, where ℋ_E denotes an auxiliary environment system. The corresponding complementary channel 𝒩_c∈Φ(ℋ_A, ℋ_E) is defined as 𝒩_c(ρ) = _B(Uρ U^†). For any linear operator 𝒜:𝕍→𝕍' between inner product spaces, the corresponding adjoint operator 𝒜^†:𝕍'→𝕍 is defined to satisfy y𝒜(x) = 𝒜^†(y)x for all x∈𝕍 and y∈𝕍'. The norm induced by the inner product of a vector space 𝕍 is denoted as _2 √() (i.e., Euclidean norm for ℝ^n, Frobenius norm for ℍ^n). The identity matrix is denoted as 𝕀, and the all ones vector is denoted 1. The largest and smallest eigenvalues of A∈ℍ^n are denoted as λ_max(A) and λ_min(A) respectively. We denote the j-th column of a matrix A∈ℝ^n× m as A_j. §.§ Classical and Quantum Entropies The Shannon entropy for a random variable X with probability distribution x∈Δ_n is H(x)-∑_i=1^n x_i log(x_i), and the Kullback-Leibler (KL) divergence between random variables X and Y with probability distributions x,y∈Δ_n is Hxy∑_i=1^n x_i log(x_i / y_i). 
For a joint distribution P∈Δ_n× m on random variables X and Y and marginal distribution p∈Δ_m on Y where p_j=∑_i=1^nP_ij for all j, the classical conditional entropy is HXY_P H(P) - H(p). Similarly, the von Neumann entropy for density matrix ρ∈𝒟(ℋ) is S(ρ)-[ρlog(ρ)], and the quantum relative entropy between quantum states ρ,σ∈𝒟(ℋ) is Sρσ[ρ (log(ρ) - log(σ))]. For a bipartite density matrix ρ∈𝒟(ℋ_A⊗ℋ_B), the quantum conditional entropy is SAB_ρ S(ρ) - S(_A(ρ)). These entropies and divergences are defined using the convention that 0log(0)=0 and -xlog(0)=+∞ for all x>0. For pairs of quantum states, this implies that Sρσ=+∞ when ρ⊈σ <cit.>, where denotes the support of a matrix. It is well known that Shannon and von Neumann entropy are concave functions. The KL divergence and quantum relative entropy are jointly convex on their two respective arguments <cit.>. Additionally, classical and quantum conditional entropies are concave with respect to P and ρ respectively <cit.>. § MIRROR DESCENT AND RELATIVE SMOOTHNESS Consider the constrained convex optimization problem _x ∈𝒞 f(x), where f:𝕍→ℝ̅ is a differentiable convex function and 𝒞⊆𝕍 is a convex subset of the domain of f. There are many optimization algorithms available to solve convex optimization problems of this form. However, there are a number of factors one should consider in selecting one for problems in quantum information theory. First, the dimension of problems in quantum information theory scale exponentially with the number of qubits. Therefore, first-order methods, which take steps that are computationally inexpensive, are preferred over second-order methods when low- to medium-accuracy solutions are required. A naïve approach would therefore be to employ projected gradient descent to solve these problems, which alternates between taking a gradient descent step and performing a Euclidean projection onto the feasible set. However, projected gradient descent typically produces iterates on the boundary of the domain, where gradients of information measures are not well-defined, as identified in <cit.>. Moreover, gradient descent methods inherently impose a Euclidean structure on a problem by measuring distances between points using the Euclidean norm. However, probability distributions and density matrices are more naturally compared using relative entropy. A natural question is therefore whether modifying gradient descent to use these specialized divergences will result in an algorithm with faster convergence. Mirror descent resolves these issues by replacing the Euclidean norm with a Bregman divergence, which adapts it to geometries more suitable for given objectives f and sets 𝒞. The Bregman divergence is defined using a special class of kernel functions, which we introduce below. A function φ: 𝕍→ℝ̅ is Legendre if it is proper, lower semicontinuous, strictly convex, and essentially smooth. See e.g., <cit.> for precise definitions of these terms. Using these Legendre functions, we define the Bregman divergence as follows. Consider a Legendre function φ: 𝕍→ℝ̅. The Bregman divergence D_φ : φ×φ→ℝ associated with the kernel function φ is D_φxyφ(x) - (φ(y) + ∇φ(y)x - y). The Bregman divergence is not a metric as it is not necessarily symmetric in its two arguments, nor does it satisfy the triangle inequality. However, it satisfies certain desirable properties. From convexity of φ, it follows that Bregman divergences are non-negative. From strict convexity of φ, it follows that D_φxy=0 if and only if x=y. 
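As a quick numerical sketch of this definition (the kernel and test points below are placeholders chosen only for illustration):

import numpy as np

def bregman(phi, grad_phi, x, y):
    # D_phi(x || y) = phi(x) - phi(y) - <grad phi(y), x - y>
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# With the negative Shannon entropy kernel, and x, y on the probability simplex,
# this recovers the KL divergence appearing in the examples below.
neg_entropy = lambda x: float(np.sum(x * np.log(x)))
grad_neg_entropy = lambda x: np.log(x) + 1.0

x = np.array([0.7, 0.2, 0.1])
y = np.array([0.4, 0.4, 0.2])
print(bregman(neg_entropy, grad_neg_entropy, x, y))   # = sum_i x_i log(x_i / y_i) here
print(bregman(neg_entropy, grad_neg_entropy, x, x))   # = 0, consistent with strict convexity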
We introduce a few important Bregman divergences below. The Bregman divergence associated with the energy function φ(x)=x_2^2/2 with domain φ=ℝ^n is the squared Euclidean norm D_φxy = x - y_2^2/2. The Bregman divergence associated with negative Shannon entropy φ(x)=-H(x) with domain φ=ℝ^n_+ is the normalized KL divergence D_φxy= Hxy - ∑_i=1^n (x_i - y_i). The Bregman divergence associated with negative von Neumann entropy φ(ρ)=-S(ρ) with domain φ=ℍ^n_+ is the normalized quantum relative entropy D_φρσ= Sρσ - [ρ - σ]. The Bregman divergence associated with the negative Burg entropy φ(x)=-∑_i=1^n log(x_i) with domain φ=ℝ^n_++ is the Itakura-Saito distance D_φxy = ∑_i=1^n (x_i/y_i - log(x_i/y_i) - 1). The Bregman divergence associated with the negative log determinant function φ(ρ)=-log((ρ)) with domain φ=ℍ^n_++ is the log determinant divergence D_φρσ = [ρσ^-1] - log((ρσ^-1)) - n. Using the Bregman divergence, the mirror descent algorithm is outlined in Algorithm <ref>. This algorithm is practical when the mirror descent update (<ref>) can be efficiently computed. This is the case for certain choices of φ and 𝒞. When φ(x)=x_2^2/2, this recovers projected gradient descent x^k+1 = _𝒞(x^k - t_k ∇ f(x^k)) where _𝒞(y) =_x ∈𝒞x - y_2^2, which can be efficiently computed for certain sets 𝒞. When φ(x)=-H(x) and 𝒞=Δ_n, then x^k+1_i = x^k_i exp(-t_k ∂_i f(x))/∑_j=1^n x^k_j exp(-t_k ∂_j f(x)) for i=1,…,n, where ∂_i f is the partial derivative of f with respect to the i-th coordinate. Similarly, when φ(ρ)=-S(ρ) and 𝒞=𝒟(ℋ), then ρ^k+1 = exp(log(ρ^k) - t_k∇ f(ρ^k))/[exp(log(ρ^k) - t_k∇ f(ρ^k))]. These previous two iterates (<ref>) and (<ref>) are sometimes known as exponentiated or entropic gradient descent in literature. Essential smoothness of φ allows it to act as a kind of barrier to its own domain, and implies that (<ref>) is minimized on the interior of the domain of φ <cit.>. Therefore, certain constraints such as non-negativity can be elegantly encoded into the mirror descent algorithm by using suitable kernel functions such as negative entropies. This also allows mirror descent to avoid the issues identified in <cit.> where gradients are not well-defined on the boundary of the domain. §.§ Relative Smoothness and Strong Convexity Convergence analysis for first-order algorithms has typically been based on the assumption that the objective function f is smooth with respect to a norm ∇ f(x) - ∇ f(y)_* ≤ L x - y, ∀ x,y∈𝒞, for some L>0, where _* is the dual norm, and is strongly convex f(x) ≥ f(y) + ∇ f(y)x - y + 1/μx - y^2, ∀ x,y∈𝒞, for some μ>0. When the objective is smooth, first-order methods are typically able to converge monotonically at a sublinear O(1/k) rate. When the objective is additionally strongly convex, convergence is improved to a linear O(α^k) rate for some α∈(0, 1). However, a drawback with convergence analysis using these assumptions is many objective functions in information theory do not satisfy these properties. Recently, these convexity properties have been extended to relative smoothness and strong convexity <cit.>, which measure convexity relative to any Legendre function. A function f is L-smooth relative to a Legendre function φ on 𝒞 if there exists an L>0 such that Lφ - f is convex on 𝒞. A function f is μ-strongly convex relative to a Legendre function φ on 𝒞 if there exists a μ>0 such that f - μφ is convex on 𝒞. When φ(x)=x^2_2/2, we recover the standard definitions for smoothness and strong convexity with respect to the Euclidean norm. 
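To make the entropic update (<ref>) concrete, the sketch below implements mirror descent on the probability simplex with the negative Shannon entropy kernel; the iterates remain strictly positive and normalized, so no explicit projection step is needed. The linear objective at the end is only a toy example.

import numpy as np

def entropic_md_step(x, grad, t):
    # x_i^{k+1} proportional to x_i^k * exp(-t * grad_i), as in (<ref>).
    logits = np.log(x) - t * grad
    logits -= logits.max()            # stabilize the exponential
    w = np.exp(logits)
    return w / w.sum()

def mirror_descent(grad_f, x0, t, iters):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = entropic_md_step(x, grad_f(x), t)
    return x

# Toy example: minimize f(x) = <c, x> over the simplex; mass concentrates on argmin_i c_i.
c = np.array([0.3, 0.1, 0.5])
print(mirror_descent(lambda x: c, np.ones(3) / 3, t=1.0, iters=200))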
There are many equivalent ways to express relative smoothness and strong convexity, as summarized in <cit.>. Specifically, the following equivalent expressions will be useful, which all arise from equivalent conditions for convexity. Consider functions f and φ which are differentiable on a convex set 𝒞, where φ is Legendre. The following conditions are equivalent: * Lφ - f is convex on 𝒞, * ∇ f(x) - ∇ f(y)x - y≤ L (D_φxy + D_φyx) for all x, y ∈𝒞, * f(x) ≤ f(y) + ∇ f(y)x - y + LD_φxy for all x, y ∈𝒞 Similarly, the following conditions are also equivalent: * f - μφ is convex on 𝒞, * ∇ f(x) - ∇ f(y)x - y≥μ (D_φxy + D_φyx) for all x, y ∈𝒞, * f(x) ≥ f(y) + ∇ f(y)x - y + μ D_φxy for all x, y ∈𝒞 These properties are useful as they can be used to establish convergence rates for Bregman proximal methods. In particular, we can obtain the following global convergence rates for mirror descent. Consider Algorithm <ref> to solve the convex optimization problem (<ref>). Let f^* represent the optimal value of this problem, and x^* be any corresponding optimal point. If f is L-smooth relative to φ and t_k=1/L for all k, then the sequence {f(x^k)} decreases monotonically, and satisfies f(x^k) - f^* ≤L/kD_φx^*x^0, ∀ k∈ℕ. If f is additionally μ-strongly convex relative to φ, then the sequence satisfies f(x^k) - f^* ≤(1 - μ/L)^k LD_φx^*x^0, ∀ k∈ℕ. These results show that just like gradient descent, mirror descent achieves sublinear convergence when the objective is relatively smooth, and linear convergence when the objective is additionally relatively strongly convex. Sometimes when analyzing Bregman proximal methods, it can also be useful to assume the kernel function φ is strongly convex with respect to a norm (we will use this to analyze the PDHG algorithm in Section <ref>). We can show that all kernel functions introduced in Section <ref> are strongly convex over either the probability simplex or set of density matrices. Negative Shannon and von Neumann entropy are 1-strongly convex with respect to the 1- and trace-norm over Δ_m and 𝒟(ℋ) respectively. These are straightforward consequences of the classical and quantum Pinsker inequalities <cit.>. Negative Burg entropy and the negative log determinant function are 1-strongly convex with respect to the Euclidean and Frobenius norm over Δ_m and 𝒟(ℋ) respectively. See Appendix <ref>. § QUANTUM BLAHUT-ARIMOTO ALGORITHMS Blahut-Arimoto algorithms are a type of alternating optimization method which are tailored towards computing channel capacities and related quantities. We define this class of algorithms below. Consider the constrained convex optimization problem (<ref>). A Blahut-Arimoto algorithm solves this problem by defining a bivariate extension g:𝕍×𝕍'→ℝ of the objective function f, which when minimized over the second variable recovers the original function as follows f(x) = min_y∈𝒞' g(x,y), where 𝒞'⊆𝕍' is a convex set. Blahut-Arimoto algorithms solve the equivalent optimization problem min_x∈𝒞 f(x) = min_x∈𝒞min_y∈𝒞' g(x,y) by using the alternating iterates y^k+1 = _y∈𝒞' g(x^k,y), x^k+1 = _x∈𝒞 g(x,y^k+1). Blahut-Arimoto algorithms are practical when minimizing over x and y independently as in (<ref>) is more efficient than solving the original problem (<ref>) directly. We now introduce the specific implementation of Blahut-Arimoto algorithms by <cit.>, which is designed to solve for quantum channel capacities. 
The quantum Blahut-Arimoto algorithm is a specific instance of the Blahut-Arimoto algorithm where 𝕍=𝕍'=ℬ(ℋ), 𝒞=𝒞'=𝒟(ℋ), and g(x,y) = xℱ(y) + γ Sxy, for some constant γ>0 and continuous function ℱ:𝒟(ℋ)→ℬ(ℋ) which satisfies 0 ≤xℱ(x) - ℱ(y)≤γ Sxy, for all x,y∈𝒟(ℋ). Consider a quantum Blahut-Arimoto algorithm parameterized by constant γ>0 and continuous function ℱ:𝒟(ℋ)→ℬ(ℋ). This algorithm solves constrained convex optimization problems (<ref>) where f(x)=xℱ(x) and 𝒞=𝒟(ℋ), and has iterates (<ref>) of the form y^k+1 = x^k x^k+1 = exp(log(y^k+1) - ℱ(y^k+1)/γ)/[exp(log(y^k+1) - ℱ(y^k+1)/γ)]. These iterates produce a sequence of function values { g(x^k, y^k) } which converge sublinearly to the global optimum. If ℱ additionally satisfies xℱ(x) - ℱ(y)≥ a Sxy, for some a>0, then the sequence of function values converge linearly. The expression (<ref>) is derived without requiring any assumptions on ℱ. Only the second inequality in (<ref>) is required to establish f(x)=xℱ(x) and (<ref>). This particular definition of the quantum Blahut-Arimoto algorithm generalizes all other quantum Blahut-Arimoto algorithms <cit.>. It can similarly be shown to generalize the original classical Blahut-Arimoto algorithms <cit.> by considering diagonal systems, or by comparing to <cit.>. §.§ Equivalence with Mirror Descent The astute reader may already recognize many similarities between the quantum Blahut-Arimoto algorithm and mirror descent. However, it is not immediately clear if the two algorithms are identical, or if one algorithm is a generalization of the other. In the following, we will show that the quantum Blahut-Arimoto framework is in fact a special instance of mirror descent. We will do this by first showing how the class of problems that the quantum Blahut-Arimoto algorithm can solve is restricted by the assumptions on ℱ specified in Definition <ref>. For a convex set 𝒞⊆𝕍, consider a function f:𝒞→ℝ which can be expressed as f(x) xℱ(x), for ℱ:𝒞→𝕍. If ℱ satisfies xℱ(x)-ℱ(y)≥ 0, for all x,y∈𝒞, then f is convex and ℱ(x)∈∂ f(x) for all x∈𝒞, where ∂ f(x){ g∈𝕍 : f(x) ≥ f(y) + gx-y, ∀ y∈𝒞} denotes the subgradient of f at x. Using (<ref>) and (<ref>), it is straightfoward to show that for all x,y∈𝒞 f(x) = xℱ(x) ≥xℱ(y) = yℱ(y) + x - yℱ(y) = f(y) + x - yℱ(y). This is precisely the definition for ℱ(y) to be a subgradient of f at y. Convexity of f follows from the existence of a subgradient for all x∈𝒞. To show this, let z=λ x + (1-λ) y for some x,y∈𝒞 and λ∈[0,1]. Then using (<ref>) we can show that f(x)≥ f(z) + (1-λ) ℱ(z)x-y, and f(y)≥ f(z) + λℱ(z)y-x. Combining λ times the first inequality and (1-λ) times the second inequality gives λ f(x) + (1-λ)f(y) ≥ f(z), which concludes the proof. For a convex set 𝒞, consider a function f:𝒞→ℝ and operator ℱ:𝒞→𝕍 such that ℱ(x)∈∂ f(x) for all x∈𝒞. If ℱ is continuous, then f is continuously differentiable with ∇ f(x) = ℱ(x) for all x∈𝒞. We will prove the result by contradiction. Assume that there exists x∈𝒞 such that ∂ f(x) is set-valued. Then there exists a g∈∂ f(x) such that g-ℱ(x)ε>0. As ℱ is continuous, there exists a t>0 and z=x+t(g-ℱ(x)) such that z∈𝒞 and ℱ(z) - ℱ(x) < ε. We can show that ℱ(z)-ℱ(x) ≥1/z-xz-xℱ(z)-ℱ(x) ≥1/z-xz-xg-ℱ(x) = g-ℱ(x) =ε. The first inequality uses the Cauchy–Schwarz inequality, the second inequality uses monotonicity of subdifferentials (see, e.g., <cit.>), and the equality uses the fact that the two vectors are positive scalar multiples of each other. However, this contradicts our assumption that z was chosen to satisfy ℱ(z) - ℱ(x) < ε. 
Therefore, the subdifferential ∂ f must be single-valued for all x∈𝒞, from which the desired result follows from <cit.>. Therefore, for the quantum Blahut-Arimoto algorithm to have meaningful convergence results, the previous two propositions show that ℱ=∇ f must be satisfied, and therefore the objective must satisfy the identity f(x) = x∇ f(x), ∀ x∈𝒞. From this, we can also recognize that the bivariate function (<ref>) is precisely the function being minimized in the mirror descent update (<ref>) for kernel function φ()=-S(). Similarly, the Blahut-Arimoto updates (<ref>) are also identical to entropic gradient descent (<ref>). A well known result is that if a function is positively homogeneous of degree 1, then it satisfies the property (<ref>). This identity is known as Euler's identity <cit.>. The converse is not necessarily true unless the function is defined on a cone <cit.>. We now show that the assumptions (<ref>) and (<ref>) are identical to relative smoothness and strong convexity whenever the objective function satisfies (<ref>). For a convex set 𝒞, let f:𝒞→ℝ be a function which satisfies (<ref>), and φ be a Legendre function. Then f is L-smooth relative to φ on 𝒞 if and only if x∇ f(x) - ∇ f(y)≤ L D_φxy, for all x,y∈𝒞. Similarly, f is μ-strongly convex relative to φ on 𝒞 if and only if x∇ f(x) - ∇ f(y)≥μ D_φxy, for all x,y∈𝒞. First, assume f is L-smooth relative to φ. We can show that for all x,y∈𝒞 x∇ f(x) - ∇ f(y) = f(x) - f(y) - ∇ f(y)x - y ≤ L( φ(x) - φ(y) - ∇φ(y)x - y) = LD_φxy, where the first equality uses (<ref>), the inequality uses the first-order condition for convexity of Lφ - f, and the last equality uses the definition of a Bregman divergence. This recovers (<ref>) as desired. To show the converse, we can simply add together a copy of (<ref>) with another copy of itself with the roles of x and y exchanged to obtain x - y∇ f(x) - ∇ f(y)≤ L (D_φxy + D_φxy), for any x,y∈𝒞 This is precisely the same expression as Proposition <ref>(a-ii), and therefore implies relative smoothness of f, as desired. The proof for relative strong convexity follows from exactly the same arguments. The proof for the direction showing how (<ref>) and (<ref>) imply relative smoothness and strong convexity respectively do not require the function to satisfy (<ref>), and is true for any function. Therefore, (<ref>) and (<ref>) are in general stronger assumptions than relative smoothness and strong convexity. For example, (<ref>) does not hold when f(x)=φ(x)=x_2^2/2 and L=1. We summarize these results in the following theorem. Consider a quantum Blahut-Arimoto algorithm parameterized by constant γ>0 and continuous function ℱ:𝒟(ℋ)→ℬ(ℋ). The quantum Blahut-Arimoto iterates (<ref>) are equivalent to mirror descent iterates (<ref>) applied to solving the constrained convex optimization problem (<ref>) where * f(x)=x∇ f(x) and ∇ f(x) = ℱ(x) for all x∈𝒞, * f is γ-smooth relative to -S on 𝒞, * 𝒞=𝒟(ℋ), φ(x)=-S(x), and t_k=1/γ for all k. If additionally (<ref>) holds, then f is also a-strongly convex relative to -S. This follows from Propositions <ref> and <ref>, and Corollary <ref>. The sublinear and linear convergence rates from Propositions <ref> and <ref> for mirror descent and the quantum Blahut-Arimoto algorithm follow from equivalent assumptions made on f and ℱ respectively. Therefore, we have shown that Blahut-Arimoto algorithms are a special case of mirror descent in two regards. 
First, they specifically use Shannon or von Neumann entropy as the kernel function φ, and optimize over the set of probability distributions or density matrices. Second, the objective functions all satisfy (<ref>). In Section <ref>, we introduce the relative entropy of resource, whose objective does not satisfy (<ref>), nor is it smooth relative to von Neumann entropy. Therefore, existing Blahut-Arimoto frameworks are not directly applicable to this problem. Instead, we will show how the mirror descent framework using a different kernel function φ provides an implementable method to compute the relative entropy of resource. A converse implication to Theorem <ref> is that, by generalizing φ and 𝒞 to arbitrary Legendre functions and convex sets respectively, mirror descent on relatively smooth objective functions which satisfy (<ref>) admit an interpretation as a Blahut-Arimoto algorithm. However, it is unclear whether this provides any additional insight in solving certain problems over the more powerful mirror descent generalization. § PRIMAL-DUAL HYBRID GRADIENT When computing energy-constrained channel capacities or related quantities such as rate-distortion functions, we are required to optimize over the intersection of the probability simplex (density matrices) together with additional linear constraints. These problems can be expressed as the following variation of the constrained convex optimization problem (<ref>) _x∈𝒞 f(x) b_1 - 𝒜_1(x) ∈𝒦 b_2 - 𝒜_2(x) = 0, where 𝒦⊆𝕍_1 is a proper cone (see e.g., <cit.>), b_1∈𝕍_1, b_2∈𝕍_2, and 𝒜_1:𝕍→𝕍_1 and 𝒜_2:𝕍→𝕍_2 are linear operators. Blahut-Arimoto algorithms have traditionally been unable to effectively handle these additional constraints, and typically dualize the constraints to avoid the issue. One possible method of solving (<ref>) is to use mirror descent. However, recall that mirror descent is only practical when the updates (<ref>) can be efficiently computed. For example, for the kernel function φ=-H, the mirror descent step no longer has the analytic expression (<ref>) for general linear constraints, and must be computed numerically. Alternatively, instead of a primal first-order method, we can consider using a primal-dual first-order method instead. These methods find the primal-dual pair (x^*, z^*) which solves the saddle-point problem _x∈𝒞sup_z∈𝒵 ℒ(x, z) f(x) + z𝒜(x) - b. where ℒ is the Lagrangian and z is the dual variable of (<ref>), 𝒵={(λ, ν)∈𝒦^*×𝕍_2} where 𝒦^* is the dual cone of 𝒦, and we have defined the concatenated operator 𝒜=(𝒜_1, 𝒜_2):𝕍→𝕍_1 ×𝕍_2 and vector b=(b_1, b_2)∈𝕍_1 ×𝕍_2 to simplify notation. Solving the saddle-point problem (<ref>) is equivalent to solving the original problem (<ref>) in the sense that ℒ(x^*, z^*)=f(x^*)=f^*, where f^* is the optimal value of (<ref>). An extension of Bregman proximal methods to solve these types of saddle-point problems is PDHG <cit.>. There are several variations of PDHG. In Algorithm <ref>, we introduce a slight modification of the main algorithms presented in <cit.>. We make a few remarks about the PDHG algorithm. First, note that the primal update is precisely the mirror descent update (<ref>) performed on the Lagrangian, and the dual update is a mirror ascent update on the Lagrangian using the energy function as the kernel function, which recovers projected gradient descent. This dual step is efficient whenever Euclidean projection onto the dual cone 𝒦^* can be done efficiently. 
This is the case for the non-negative orthant 𝒦=𝒦^*ℝ_+^n which yields _ℝ_+^n(x) = x_+ max(x, 0), or the positive semidefinite cone 𝒦=𝒦^*ℍ_+^n _ℍ_+^n(X) = X_+ Umax(Λ, 0)U^†, where X∈ℍ^n has diagonalization X=UΛ U^†, and we have defined max(, ) to be taken elementwise. Second, constraints are handled in two different ways. Constraints which can be efficiently minimized over in the mirror descent update are encoded in 𝒞, whereas all other constraints are numerically handled through the dual updates. Like mirror descent, relative smoothness is a core assumption which we use to establish convergence of Algorithm <ref>. Using this, we are able to establish the following ergodic sublinear convergence rate, which is the standard basic convergence result for PDHG algorithms without further assumptions <cit.>. Consider Algorithm <ref> to solve the convex optimization problem (<ref>). If f is L-smooth relative to φ, θ_k=1 for all k, and τ_k=τ and γ_k=γ are chosen such that ( 1/τ - L ) D_φxx' + 1/2γz - z'_2^2 ≥z - z'𝒜(x - x'), for all x,x'∈𝒞 and z,z'∈𝒵, then the iterates (x^k, z^k) satisfy ℒ(x_avg^k, z) - ℒ(x, z_avg^k) ≤1/k( 1/τ D_φxx^0 + 1/2γz - z^0^2_2 ), for all x∈𝒞, z∈𝒵, and k∈ℕ, where x_avg^k = 1/k∑_i=0^k-1 x^i and z_avg^k = 1/k∑_i=0^k-1z^i. We refer to the proof for Proposition <ref>, which provides convergence rates for a generalized algorithm. Consider choosing any optimal primal-dual solution (x, z)=(x^*, z^*) for (<ref>). As (x^*, z^*) is a saddle-point, we know that the left hand side of (<ref>) is non-negative and therefore converges to zero sublinearly. Additionally, this expression equals zero if and only if (x_avg^k, z_avg^k) is itself a saddle-point. We refer the reader to e.g., <cit.> for additional convergence properties of PDHG under suitable assumptions. The condition (<ref>) is equivalent to ( 1/τ - L ) 1/γ D_φxx'≥1/2𝒜(x-x')_2^2. for all x,x'∈𝒞. In general, it is not clear whether there exist step sizes that satisfy this condition. If however φ is 1-strongly convex with respect to some norm , then a sufficient condition to satisfy (<ref>) is for the step sizes τ and γ to satisfy ( 1/τ - L ) 1/γ≥𝒜^2, where 𝒜 = sup{z𝒜(x) : x≤1, z_2≤1 }. This inequality can always be achieved by choosing τ and γ to be sufficiently small. Therefore, this inequality implies the existence of a suitable step size and provides a way to choose a suitable step size, whenever φ is strongly convex with respect to a norm. There are a few limitations with the PDHG algorithm. We do not address all of these limitations in this paper, and leave some of them for future work. First, as a primal-dual method, not all primal iterates generated by PDHG are guaranteed to be feasible. Depending on how difficult it is to perform projections into the feasible set, this may make computing an explicit optimality gap difficult. Second, the primal-dual step size ratio has a significant impact on the empirical convergence rates of PDHG. Some works <cit.> suggest backtracking methods to adaptively select this ratio, however it is not straightforward how to generalize these heuristics in a way which maintains convergence guarantees. Third, choosing step sizes which satisfy the assumption (<ref>) or (<ref>) can be nontrivial. We detail a backtracking procedure which adaptively chooses these step sizes, and provide convergence results, in Appendix <ref>. § APPLICATIONS In this section, we will show how the mirror descent framework can be used to solve various problems in information theory. 
We focus on problems which Blahut-Arimoto algorithms have not been applied to, or are unable to solve efficiently. For each of these applications, we will prove that the objective functions are smooth relative to some suitable kernel function, which in turn allows us to guarantee convergence rates when applying mirror descent or PDHG with suitable step sizes. All experiments were run on MATLAB using an AMD Ryzen 5 3600 CPU with 16GB of RAM. We use the backtracking variation of PDHG introduced in Appendix <ref>, with backtracking parameters α=0.75 and θ=1.01. The ratio of primal and dual step sizes κ = τ_k/γ_k was tuned for each problem, and is specified together with the results for each problem. The PDHG algorithm is terminated when the progress made by the primal and dual iterates slows down below a chosen threshold, as measured by D_φx^kx^k-1/τ_k max{1, x^k_∞} + z^k - z^k-1_2^2/2σ_kmax{1, z^k_∞}≤ 10^-7, where _∞ represents the maximum absolute value of the elements of the vector or matrix. This heuristic is based on the form (<ref>), combined with normalizing terms which measure the divergences relative to the magnitudes of the primal and dual variables. We compare the computation times and absolute tolerances of PDHG to off-the-shelf convex solvers, including YALMIP <cit.> with the ECOS <cit.> solver for classical problems, and CVXQUAD <cit.> with the SDPT3 <cit.> solver for quantum problems. Reported computation times for the off-the-shelf solvers include both the modelling time by YALMIP or CVX as well as the solver time of ECOS or SDPT3. Reported tolerances are computed either using information from the off-the-shelf solver, or by running PDHG for a sufficiently high number of iterations if the off-the-shelf solver fails to solve or cannot solve to high accuracy. As PDHG is a first-order method, we are primarily interested in how quickly we can obtain low- to medium-accuracy solutions for large-scale problems. Therefore, we present results for a range of problem dimensions, and choose a stopping criterion (<ref>) which achieves a suitable tradeoff between accuracy and computation time. We note that, when comparing the computation times of PDHG and the generalized solvers, ECOS and SDPT3 are both second-order interior-point methods which are known to rapidly converge to high-accuracy solutions when sufficiently close to the optimum. Therefore, these methods take roughly the same amount of time to reach medium-accuracy solutions as they do to reach the high-accuracy solutions we report in the results. Throughout this section, random probability distributions and classical channels were sampled using a uniform distribution and then normalized as required. Unless otherwise stated, random density matrices and quantum channels were generated using the MATLAB package QETLAB <cit.>. 
The energy-constrained classical channel capacity is given by _p ∈Δ_m I_c(p)∑_j=1^m∑_i=1^n p_jQ_ijlog(Q_ij/∑_k=1^m p_kQ_ik) Ap ≤ b where I_c(p) is the classical mutual information and has partial derivatives ∂ I_c/∂ p_j = HQ_jQp - 1. This function can be shown to be concave by expressing it as I_c(p) = H(Qp) - ∑_j=1^m p_j H(Q_j), then noting that Shannon entropy is concave, and the second term is linear in p. As Δ_m is a convex set and all energy constraints are linear, we conclude that (<ref>) is a convex optimization problem. Again consider the discrete input alphabet 𝒳 with input distribution p and energy constraints represented by A and b, however now we are interested in communicating the input signal through a quantum channel 𝒩∈Φ(ℋ_A, ℋ_B). To do this, each letter is represented by a quantum state i ↦|i⟩⟨i| for an orthonormal basis {|i⟩} defined on ℋ_X. An encoder ℰ∈Φ(ℋ_X, ℋ_A) is used to map these classical states to a fixed quantum state alphabet |i⟩⟨i|↦ρ_i. A decoder, which can perform joint output measurements, is then used to recover the original message. The maximum information that can be transmitted through this system is known as the energy-constrained classical-quantum (cq) channel capacity <cit.>, and is given by _p ∈Δ_m χ(p) S(𝒩(∑_j=1^m p_jρ_j)) - ∑_j=1^m p_jS(𝒩(ρ_j)) Ap ≤ b where χ(p) is the Holevo information. To find the partial derivative of χ with respect to p_j, we note that from linearity of the quantum channel 𝒩, the derivative of the first term can be found from a straightforward application of Lemma <ref>(b) to find 𝖣S(𝒩(ρ))[𝒩(ρ_j)], where ρ=∑_j=1^mp_jρ_j. The second term is linear in p_j. Therefore, ∂χ/∂ p_j = -[𝒩(ρ_j)(log(𝒩(ρ)) + 𝕀)] - S(𝒩(ρ_j)) = [𝒩(ρ_j)(log(𝒩(ρ_j)-log(𝒩(ρ)) - 𝕀)] = S𝒩(ρ_j)𝒩(ρ) - 1, where the last equality uses the fact that 𝒩 is trace preserving and ρ_j is a density matrix with unit trace. Similar to classical mutual information, Holevo information can be shown to be concave by noting that von Neumann entropy is concave and the second term is linear. As Δ_m is a convex set and all energy constraints are linear, (<ref>) is a convex optimization problem. Now consider the case where the quantum state alphabet is not fixed, input states can be entangled, and the sender and receiver share an unlimited number of entangled states prior to sending messages through the channel. Energy constraints are now represented by l positive observables A_i∈ℬ(ℋ_A)_+ and upper bounds b_i≥0. The maximum information that can be transmitted through this system is the energy-constrained entanglement-assisted (ea) channel capacity <cit.>, and is given by _ρ ∈𝒟(ℋ_A) I_q(ρ) S(ρ) + S(𝒩(ρ)) - S(𝒩_c(ρ)) [A_i ρ] ≤ b_i, i=1,…,l where I_q(ρ) is the quantum mutual information. To find the gradient of quantum mutual information, we again recognize that 𝒩 and 𝒩_c are linear operators, and apply Corollaries <ref> and <ref> to find ∇ I_q(ρ) = -log(ρ) - 𝕀 -𝒩^†(log(𝒩(ρ)) + 𝕀) + 𝒩_c^†(log(𝒩_c(ρ)) + 𝕀) = -log(ρ) - 𝒩^†(log(𝒩(ρ))) + 𝒩_c^†(log(𝒩_c(ρ))) - 𝕀. where the second equality uses the fact that the adjoint of trace preserving linear operators are unital <cit.>. It can be shown that quantum mutual information is concave by expressing it as I_q(ρ) = SBE_Uρ U^† + S(𝒩(ρ)). where U is the isometry corresponding to the Stinespring representation 𝒩, and noting that both quantum conditional entropy and von Neumann entropy are concave in ρ. As 𝒟(ℋ_A) is a convex set and all energy constraints are linear, it follows that (<ref>) is also a convex optimization problem. 
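For concreteness, the classical objective and its gradient above can be evaluated with a few lines of NumPy; the sketch below assumes the channel is stored as an n × m array Q (rows indexing outputs, columns indexing inputs), and the small eps guard is an illustrative numerical safeguard rather than part of the formulation.

```python
import numpy as np

def classical_mutual_information(Q, p, eps=1e-12):
    """Value and gradient of I_c(p) for a channel Q (n outputs x m inputs).
    The gradient entries are the relative entropies of the columns Q_j to Qp,
    minus one, matching the partial derivatives quoted above."""
    Qp = Q @ p                                        # output distribution induced by p
    ratio = np.log(Q + eps) - np.log(Qp + eps)[:, None]
    div = (Q * ratio).sum(axis=0)                     # D(Q_j || Qp) for each input letter j
    return float(p @ div), div - 1.0
```

This gradient is the only problem-specific ingredient required by the multiplicative primal updates derived below.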
We will now show that negative classical and quantum mutual information and Holevo information are all relatively smooth with respect to Shannon and von Neumann entropy. We first recall the following channel parameters as introduced in <cit.>. For a classical channel Q ∈𝒬_n,m, the contraction coefficient ζ_Q is defined as ζ_Q = sup{HQpQq/Hpq : p,q ∈Δ_m, p ≠ q }, and the expansion coefficient η_Q is defined as η_Q = inf{HQpQq/Hpq : p,q ∈Δ_m, p ≠ q }. Similarly, for a quantum channel 𝒩∈Φ(ℋ_A, ℋ_B), the contraction coefficient ζ_𝒩 is defined as ζ_𝒩 = sup{S𝒩(ρ)𝒩(σ)/Sρσ : ρ, σ∈𝒟(ℋ_A), ρ≠σ}, and the expansion coefficient η_𝒩 is defined as η_𝒩 = inf{S𝒩(ρ)𝒩(σ)/Sρσ : ρ, σ∈𝒟(ℋ_A), ρ≠σ}. We now show how these channel coefficients are related to relative smoothness and strong convexity. Consider a classical channel Q ∈𝒬_m,n, a quantum channel 𝒩∈Φ(ℋ_A, ℋ_B), and an encoding channel ℰ∈Φ(ℋ_X, ℋ_A). * Negative classical mutual information -I_c is ζ_Q-smooth and η_Q-strongly convex relative to negative Shannon entropy -H() on Δ_m. * Negative Holevo information -χ is ζ_(𝒩∘ℰ)-smooth and η_(𝒩∘ℰ)-strongly convex relative to negative Shannon entropy -H() on Δ_m. * Negative quantum mutual information -I_q is (1+ζ_𝒩-η_𝒩_c)-smooth and (1+η_𝒩-ζ_𝒩_c)-strongly convex relative to negative von Neumann entropy -S() on 𝒟(ℋ_A). To show part (a), using the expression for the gradient of classical mutual information (<ref>), it is possible to show that for any probability distributions p,q∈Δ_m that p∇ I_c(q) - ∇ I_c(p) = HQpQq, and therefore using the definition of channel coefficients η_Q Hpq≤p∇ I_c(q) - ∇ I_c(p)≤ζ_Q Hpq. The desired result then follows from Remark <ref>. Parts (b) and (c) can be established using essentially the same proof. Alternatively, we can use the results from <cit.> combined with Remark <ref>. Due to monotonicity <cit.> and non-negativity of classical and quantum relative entropy, the channel coefficients always satisfy the bounds 0≤η_()≤ζ_()≤1. Therefore, negative classical mutual information and Holevo information are always at least 1-smooth relative to negative Shannon entropy, and negative quantum mutual information is always at least 2-smooth relative to negative von Neumann entropy. Using these relative smoothness properties combined with strong convexity of Shannon entropy and von Neumann entropy from Proposition <ref> to ensure the existence of suitable step sizes, we can therefore apply PDHG (with or without backtracking) to solve for each energy-constrained channel capacity while achieving the ergodic sublinear convergence guarantees provided by Proposition <ref>. To implement PDHG for the energy-constrained classical and cq channel capacity, we use the kernel function φ()=-H(), primal domain 𝒞=Δ_m, and all energy-constraints are dualized. This results in the following primal-dual iterates (p^k, λ^k) λ̅^k+1 = λ^k + θ_k (λ^k - λ^k-1) p^k+1_j = p̅_j^k+1/∑_j=1^m p̅_j^k+1, j=1,…,m λ^k+1 = (λ^k + γ_k (Ap^k+1 - b))_+, where p̅_j^k+1 = p^k_j exp(τ_k (∂_j f(p^k) - A_jλ̅^k+1), λ∈ℝ^l_+ is the dual variable corresponding to the energy constraints, ∂_j f is the partial derivative of f with respect to the j-th coordinate, A_j is the j-th column of A, and we define either f I_c for the classical channel capacity or fχ for the cq channel capacity. Similarly, for the energy-constrained ea channel capacity, we use kernel function φ=-S, primal domain 𝒞=𝒟(ℋ_A), and all energy constraints are dualized. 
The resulting PDHG iterates (ρ^k, λ^k) are of the form λ̅^k+1 = λ^k + θ_k (λ^k - λ^k-1) ρ^k+1 = ρ̅^k+1/[ρ̅^k+1] λ^k+1_i = (λ^k_i + γ_k ([A_iρ] - b_i))_+, i=1…,l, where ρ̅^k+1 = exp(log(ρ^k) + τ_k (∇ I_q(ρ^k) - ∑_i=1^l λ̅_i^k+1A_i ) ) λ∈ℝ^l_+ is the dual variable corresponding to the energy constraints. We remark that the primal iterates (<ref>) are identical to the Blahut-Arimoto methods for unconstrained channel capacities <cit.>, and PDHG essentially augments these existing Blahut-Arimoto methods for constrained channel capacities with a single additional dual ascent step which adaptively solves for the Lagrange multipliers for specific energy bounds b. We show experimental results for the computation of energy-constrained variants of the classical channel capacity in Table <ref>, the cq channel capacity in Table <ref>, and the ea channel capacity in Table <ref>. All classical and quantum channels were randomly generated. For the ea channel capacity, quantum channels are generated using a random Stinespring representation where the auxiliary environment system ℋ_E is half the dimension of the output system ℋ_B, by sampling random isometries from a normal distribution then normalizing as required. For classical and cq channel capacities, the energy constraint matrix A was randomly generated using a uniform random distribution between 0 and 1. For the ea channel capacity, observables A_i are generated by scaling random density matrices so that their trace is equal to the dimension of the channel. Energy bounds b are uniformly random generated between 0 and 1. If an infeasible set of energy constraints was sampled or if none of the constraints were active at the optimal solution, we rejected it and resampled the energy constraints. Results show that PDHG is able to solve to medium-accuracy solutions up to 500 times faster than generalized solvers. Additionally, PDHG is able to successfully solve large-scale problem instances that the off-the-shelf solvers were unable to solve due to lack of memory. §.§ Rate-Distortion Functions Consider the classical channel setup that was defined in Section <ref> for the classical channel capacity problem (<ref>). Let us define a distortion matrix δ∈ℝ^n× m_+ where δ_ij represents the distortion of producing output i from input j, and P∈Δ_n× m be the joint distribution where P_ij is the probability of obtaining output i from input j. The classical rate-distortion function R_c(D) for an input probability distribution p∈Δ_m quantifies how much a signal can be compressed by a lossy channel while remaining under a maximum allowable distortion D≥0 of the signal, and is given by <cit.> _P∈Δ_n× m I_c(P) ∑_j=1^m ∑_i=1^n P_ijlog(P_ij/∑_k=1^m p_jP_ik) ∑_i=1^m P_ij = p_j, j=1,…,m, ∑_j=1^m ∑_i=1^n P_ijδ_ij≤ D, where with slight abuse of notation, I_c(P) is the same classical mutual information function introduced in (<ref>), but we now treat the input distribution p as fixed problem data, and the joint distribution P as the variable. Note that (<ref>) is expressed differently but is equivalent to the original problem defined in <cit.>. The partial derivatives of classical mutual information with respect to the joint distribution is ∂ I_c/∂ P_ij = log(P_ij) - log( ∑_k=1^m p_jP_ik). Now consider the ea quantum channel setup that was defined in Section <ref> for the ea channel capacity problem. Given a rank-n input state ρ_A=∑_i=1^n λ_i |a_i⟩⟨a_i| where λ_i > 0 for all i=1,…,n, define the purified state |ψ⟩∈ℋ_A ⊗ℋ_R as |ψ⟩ = ∑_i=1^n √(λ_i)|a_i⟩⊗|r_i⟩. 
Here, ℋ_R is a reference system, ℋ_R≥ n, and {|r_i⟩} is some orthonormal basis on ℋ_R. Let Δ∈ℬ(ℋ_B⊗ℋ_R)_+ be a positive semidefinite distortion observable. The ea rate distortion function R_q(D) for maximum allowable distortion D≥0 and input state ρ_A∈𝒟(ℋ_A) is <cit.> _ρ_BR∈𝒟(ℋ_B⊗ℋ_R) I_q(ρ_BR) S(ρ_A) + S(_R(ρ_BR)) - S(ρ_BR) _B(ρ_BR) = ρ_R, Δρ_BR≤ D. where ρ_R_A(|ψ⟩⟨ψ|). Again with a slight abuse of notation, I_q(𝒩) is the quantum mutual information function introduced in (<ref>), but we now treat the input state ρ_A as fixed problem data, and the bipartite output state ρ_BR as a variable. Using Corollaries <ref> and <ref>, and recognizing that the adjoint of the partial trace operator is (_1)^†(X_2) = 𝕀_1 ⊗ X_2, the gradient of quantum mutual information with respect to ρ_BR is ∇ I_q(ρ_BR) = log(ρ_BR) - log(_R(ρ_BR)) ⊗𝕀_R. Note that classical and quantum mutual information can be expressed as I_c(P) = H(p) - HXY_P, I_q(ρ_BR) = S(ρ_R) - SRB_ρ_BR. As classical and quantum conditional entropy are concave in P and ρ_BR respectively, both classical and quantum mutual information are convex functions in P and ρ_BR, respectively. Given that all constraints are linear and that Δ_n,m and 𝒟(ℋ_B⊗ℋ_R) are convex sets, both (<ref>) and (<ref>) are convex optimization problems. We now show that the rate-distortion problems share similar relative smoothness properties to their channel capacity counterparts. Consider any input distribution p∈Δ_m and input state ρ_A∈𝒟(ℋ_A). The following relative smoothness properties hold: * Classical mutual information I_c is 1-smooth relative to negative Shannon entropy -H() on Δ_n× m. * Quantum mutual information I_q is 1-smooth relative to negative von Neumann entropy -S() on 𝒟(ℋ_B⊗ℋ_R). We will only show the proof for (b), as (a) can be established using essentially the same method. Using the fact that partial trace and tensor product operators are adjoint to each other, we can show ∇ I_q(ρ_BR) - ∇ I_q(σ_BR)ρ_BR = Sρ_BRσ_BR - S_R(ρ_BR)_R(σ_BR) ≤ Sρ_BRσ_BR. Using Remark <ref> gives the desired result. We can therefore use PDHG to solve for the rate-distortion functions while guaranteeing ergodic sublinear convergence from Proposition <ref>, where strong convexity of Shannon and von Neumann entropy guarantees the existence of suitable step sizes to achieve this rate. To implement PDHG for the classical rate distortion function, we use kernel function φ()=-H(), primal domain 𝒞={ P∈Δ_m,n : ∑_i=1^m P_ij = p_j }, and dualize the distortion constraint (<ref>). This results in the primal-dual iterates (P^k, λ^k) generated by λ̅^k+1 = λ^k + θ_k (λ^k - λ^k-1) P^k+1_ij = p_jP̅_ij^k+1/∑_i=1^m P̅_ij^k+1, [ i=1,…,n,; j=1,…,m. ] λ^k+1 = (λ^k + γ_k (∑_j=1^m ∑_i=1^n P_ijδ_ij - D ))_+, where P̅_ij^k+1 = P^k_ijexp(-τ_k (∂_ij I_c(P^k) + λδ_ij), λ∈ℝ_+ is the dual variable corresponding to the distortion constraint and ∂_ij I_c is the partial derivative of I_c with respect to the (i,j)-th coordinate. If τ_k=1, then the primal iterate (<ref>) is identical to the original step proposed by Blahut <cit.> up to a change of variables. This implies that mirror descent applied to the Lagrangian for a fixed Lagrange multiplier λ recovers the original Blahut-Arimoto algorithm for classical rate-distortion functions. Moreover, like for the energy-constrained channel capacities, PDHG has a natural interpretation of including an additional dual ascent step to adaptively find the Lagrange multiplier which solves for a given distortion D. 
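As an illustration, a minimal NumPy version of these iterates (taking θ_k = 1, constant step sizes, and using the extrapolated multiplier λ̄ in the exponent) is sketched below; the initialization, step sizes, and iteration count are illustrative choices rather than tuned or theoretically prescribed values.

```python
import numpy as np

def rate_distortion_pdhg(p, delta, D, tau=1.0, gamma=0.1, iters=2000, eps=1e-12):
    """Primal-dual sketch for the classical rate-distortion function:
    a Blahut-style multiplicative update on the joint distribution P together
    with a projected dual ascent step on the distortion multiplier lam.
    p: input distribution of shape (m,); delta: distortion matrix of shape (n, m)."""
    n, m = delta.shape
    P = np.outer(np.full(n, 1.0 / n), p)              # feasible start: columns sum to p_j
    lam_prev = lam = 0.0
    for _ in range(iters):
        lam_bar = lam + (lam - lam_prev)              # dual extrapolation (theta_k = 1)
        q = P.sum(axis=1)                             # output marginal
        grad = np.log(P + eps) - np.log(np.outer(q, p) + eps)
        P_bar = P * np.exp(-tau * (grad + lam_bar * delta))
        P = P_bar * (p / (P_bar.sum(axis=0) + eps))   # rescale each column to sum to p_j
        lam_prev, lam = lam, max(lam + gamma * ((P * delta).sum() - D), 0.0)
    q = P.sum(axis=1)
    rate = float((P * (np.log(P + eps) - np.log(np.outer(q, p) + eps))).sum())
    return rate, P, lam
```

With tau equal to one and the multiplier held fixed, the primal step reduces to Blahut's original update, consistent with the remark above.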
To implement PDHG for the quantum rate-distortion function, we use kernel function φ()=-S(), primal domain 𝒞=𝒟(ℋ_B⊗ℋ_R), and dualize both the partial trace (<ref>) and distortion constraint (<ref>). This gives us the primal-dual iterates (ρ_BR^k, ν^k, λ^k) generated by ν̅^k+1 = ν^k + θ_k (ν^k - ν^k-1) λ̅^k+1 = λ^k + θ_k (λ^k - λ^k-1) ρ^k+1_BR = ρ̅_BR^k+1/[ρ̅_BR^k+1] ν^k+1 = ν^k + γ_k (_B(ρ_BR^k+1) - ρ_R) λ^k+1 = (λ^k + γ_k (Δρ_BR^k+1 - D))_+, where ρ̅_BR^k+1 = exp(log(ρ^k_BR) - τ_k(∇ I_q(ρ^k_BR) + 𝕀⊗ν̅^k+1 + λ̅^k+1Δ)), ν∈ℬ(ℋ_R) is the dual variable corresponding to the partial trace constraint, and λ∈ℝ_+ is the dual variable corresponding to the distortion constraint. We show experimental results for the computation of the classical rate-distortion with Hamming distortion δ=11^⊤-𝕀 in Table <ref>, and the quantum rate-distortion with entanglement fidelity distortion Δ=𝕀-|ψ⟩⟨ψ| in Table <ref>. All experiments use D=0.5, and all input states are randomly generated. Similar to the channel capacity experiments, the results show that PDHG solves up to 1000 times faster than generalized methods to low- to medium-accuracy solutions, and are able to effectively scale to large scale problem dimensions. §.§ Relative Entropy of Resource Quantum resource theories <cit.> aim to categorize quantum resources that can be generated using a set of permissible physical operations. The set of states which can be generated are called the free states 𝒮(ℋ) associated with a Hilbert space. An important measure used to quantify a quantum resource is the relative entropy of resource of a state ρ∈𝒟(ℋ), which is defined as min_σ∈𝒮(ℋ) Sρσ. Note that as ρ is fixed problem data, this is equivalent to S(ρ) + min_σ∈𝒮(ℋ) g(σ). where g(σ) -[ρlog(σ)]. To find the gradient of quantum relative entropy with respect to the second argument, first note that due to linearity of the trace operator, the directional derivative along V∈ℍ^n can be found using Lemma <ref>(a) as 𝖣g(σ)[V] = -ρU [f^[1](Λ) ⊙ (U^† V U)] U^†. = -VU [f^[1](Λ) ⊙ (U^†ρ U)] U^† where σ has diagonalization σ=UΛ U^† and f^[1](Λ) is the first divided difference matrix associated with f(x)=log(x). For the second equality, we recognize that XY⊙ Z =Y⊙ XZ for X,Z∈ℍ^n and Y∈ℝ^n× n. It follows that the gradient of quantum relative entropy with respect to σ is _σ Sρσ = ∇ g(σ) = -U [f^[1](Λ) ⊙ (U^†ρ U)] U^†. As quantum relative entropy is jointly convex, if the set of free states 𝒮(ℋ) is convex, then the relative entropy of resource is a convex optimization problem. Before deriving a suitable algorithm for the relative entropy of resource, we will first show that the von Neumann entropy is not a suitable kernel to compute for this quantity. Quantum relative entropy Sρ with fixed first argument ρ∈𝒟(ℋ) is not smooth relative to von Neumann entropy. It suffices to study the univariate scenario where f(x)=-alog(x), φ(x)=xlog(x), and a,x≥0. The rest of the proof follows from <cit.>. Therefore, to apply mirror descent or PDHG, we need to find a different suitable kernel function. To do this, we will present a generalized result which can be applied to problems with similar objectives. Consider a function f:ℝ_++→ℝ which is operator convex, i.e., for all n∈ℕ, X,Y∈ℍ^n_++ and λ∈[0,1], f(λ X + (1-λ)Y) ≼λ f(X) + (1 - λ) f(Y). Then for any A≽0, the function g(X) = [Af(X)] is λ_max(A)-smooth and λ_min(A)-strongly convex relative [f()] on ℍ^n_++. 
We first recognize that as the trace of the product of two positive semidefinite matrices is always non-negative, it follows from operator convexity of f that g is convex for all A≽0. Therefore, as λ_max(A)𝕀-A≽0, it follows that [(λ_max(A)𝕀-A)f(X)] = λ_max(A)[f(X)] - [Af(X)], must also be convex, which by definition recovers the result for relative smoothness. Similarly, noting that A-λ_min(A)𝕀≽0 and applying a similar argument recovers the relative strong convexity result. Trace functions are convenient to use as kernel functions as finding an expression for the mirror descent iterates (<ref>) reduces to solving a set of univariate equations. Specifically, assuming φ(X)=[h(X)] for some h:ℝ→ℝ then mirror descent iterates are of the form X^k+1 = (h')^-1[h'(X^k) - t_k∇ f(X^k)] This is similar to the case when φ is separable, i.e., of the form φ(x)=∑_i=1^nφ_i(x_i), as discussed in <cit.>. Here, we apply essentially the same principle but to the eigenvalues of the variable X. We now show how these results can be applied specifically to the relative entropy of resource problem. Quantum relative entropy Sρ with fixed first argument ρ∈𝒟(ℋ) is λ_max(ρ)-smooth and λ_min(ρ)-strongly convex relative to the negative log determinant -log(()) on 𝒟(ℋ). This follows from recognizing f(x)=log(x) is operator concave <cit.>, the identity log((σ))=[log(σ)], and Theorem <ref>. We expect that if 𝒮(ℋ) can be characterized using constraints of the form (<ref>) and (<ref>), we can use Algorithm <ref> to solve for the relative entropy of resource. One important example is when we consider 𝒮(ℋ) to be the set of all separable states, in which case (<ref>) recovers the relative entropy of entanglement. Although the set of separable states is convex, it is well-known that it is NP-hard to determine if a general quantum state belongs to the set <cit.>. Instead, a relaxation that is commonly used is the positive partial transpose (PPT) criterion <cit.> 𝖯𝖯𝖳(ℋ_A⊗ℋ_B) = {ρ∈𝒟(ℋ_A⊗ℋ_B) : ρ^T_B≽ 0 }, where ρ^T_B denotes the partial transpose with respect to system ℋ_B. Membership of 𝖯𝖯𝖳 is a necessary condition to be a separable state. For 2×2 and 2×3 Hilbert systems, this is also a sufficient condition <cit.>. Importantly, the partial transpose is a linear operation, and therefore Algorithm <ref> can be used to solve for the relative entropy of entanglement. To implement this, we use kernel function φ()=-log(()), primal domain 𝒞=𝒟(ℋ_A⊗ℋ_B), and the PPT criterion is dualized. Before we introduce the full set of PDHG iterates, we make a few comments on computing the primal step, which can be expressed as σ^k+1 = _σ∈𝒟(ℋ_A⊗ℋ_B){∇ g(σ^k) - (Z̅^k+1)^T_Bσ - 1/τ_kD_φσσ^k}, where φ()=-log(()). By solving for the KKT conditions and using (<ref>) for h(x)=-log(x), we obtain the following expression σ^k+1 = [ [σ^k]^-1 + τ_k( ∇ g(σ^k) - (Z̅^k+1)^T_B ) + ν^k+1𝕀 ]^-1, where ν^k+1∈ℝ is a Lagrange multiplier chosen such that σ^k+1∈𝒟(ℋ_A⊗ℋ_B) is satisfied. Note that unlike the case when φ()=-S(), here we need to numerically solve for this ν^k+1. To satisfy the unit trace constraint, ν^k+1 must satisfy ∑_i=1^nm1/λ_i^k+1 + ν^k+1 = 1, where λ_i^k+1 are the eigenvalues of σ̅^k+1 = [σ^k]^-1 + τ_k(∇ g(σ^k) - (Z̅^k+1)^T_B) To satisfy the positivity constraint, ν^k+1 must also satisfy ν^k+1 > -min_i{λ_i^k+1}. Over this interval, the function on the left-hand side of (<ref>) is monotonically decreasing, and takes all values in the open interval (0, +∞). 
Therefore, there will always be a unique solution to the equation, which we can efficiently solve for using algorithms such as Newton's method or the bisection method. More details about solving this univariate equation using Newton's method can be found in <cit.>. Therefore, we obtain the PDHG iterates (σ^k, Z^k) of the form Z̅^k+1 = Z^k + θ_k (Z^k - Z^k-1) σ^k+1 = [σ̅^k+1 + ν^k+1𝕀 ]^-1 Z^k+1 = (Z^k - γ_k (Z^k+1)^T_B)_+, where σ̅^k+1 is given by (<ref>), ν^k+1∈ℝ is the largest root of (<ref>), and Z∈ℬ(ℋ_A⊗ℋ_B)_+ is the dual variable corresponding to the PPT constraint. It is also possible to implement PDHG by splitting the constraints so that we have a primal domain 𝒞=ℬ(ℋ_A⊗ℋ_B)_+, and dualizing both the PPT and unit trace constraint. This would allow us to directly compute the primal step without having to numerically compute for the Lagrange multiplier ν^k+1. However, the negative log determinant would no longer be strongly-convex on 𝒞, and therefore it is not clear whether there exists suitable step sizes which satisfy (<ref>) which is required to guarantee convergence rates. We show experimental results for the computation of the approximate relative entropy of entanglement over PPT states in Table <ref>. The equation (<ref>) is solved using the Newton's method. All states ρ are randomly generated. Results show PDHG solves up to 90 times faster to medium-accuracy solutions than generalized methods. We remark that the convergence rate of PDHG appears to be related to how well conditioned the state ρ is, and that randomly generated states at higher dimensions are more likely to be ill-conditioned. Propositions <ref> and Corollary <ref> tell us mirror descent applied to the REE problem will achieve faster linear convergence the more well-conditioned ρ is, and empirically it seems that PDHG shares similar convergence properties. § CONCLUDING REMARKS In this work, we have shown that classical and quantum Blahut-Arimoto algorithms can be interpreted as a special application of the mirror descent algorithm, and that existing convergence results are recovered under relative smoothness and strong convexity analysis. This interpretation allows us to extend these algorithms to other applications in information theory, either by using different kernel functions or algorithmic variations such as PDHG which allow us to solve problems with arbitrary linear constraints. As mirror descent is a very general framework, we believe that it can be applied to many other convex optimization problems in information theory. We also believe that compared to the alternating optimization interpretation traditionally used to derive Blahut-Arimoto algorithms <cit.>, the mirror descent interpretation allows for a more straightforward implementation and generalization to other problems, as all it requires is the computation of the objective function's gradient, rather than trying to find a suitable bivariate extension function of which we are not aware of any straightforward way of doing for general problems. The main difficulty in implementing mirror descent (and, similarly, Blahut-Aimoro algorithms following a framework similar to <cit.>) on new problems is that identifying problems which are relatively smooth with respect to a suitable kernel function, or, conversely, determining a suitable kernel function with respect to which a problem is relatively smooth, is a non-trivial task in general. 
Some works <cit.> have established weak convergence results of mirror descent without requiring Lipschitz gradients or relative smoothness properties of the objective function. However, if guarantees on convergence rates to the global optimum are desired, then these additional assumptions on the objective may be required. Every convex function is clearly 1-smooth and 1-strongly convex relative to itself. However, this does not lead to a practical algorithm, as solving the mirror descent iterates becomes identical to solving the original problem. Recently, <cit.> proposed an online mirror descent method with a negative log-determinant kernel function for the maximum-likelihood quantum state tomography problem. However, it is not immediately obvious if the objective function is relatively smooth. We note that in <cit.>, it was shown that, for the classical analog, the maximum likelihood function is relatively smooth with respect to the negative Burg entropy. As a possible avenue to find problems for which we can implement mirror descent, we recall that Theorem <ref> introduced a generalized method of establishing relative smoothness and strong convexity of a class of functions X↦[Af(X)] for operator convex functions f. In Remark <ref> we also show how mirror descent iterates can be efficiently computed for these functions. We are therefore interested in whether there exist other applications which can utilize these tools. For example, the standard divergences introduced in <cit.> are defined using operator functions, and can all be analyzed using Theorem <ref> by noting that f(x)=x^α is operator convex for α∈[-1,0]∪[1,2] and operator concave for α∈[0,1] <cit.>. All of the problems studied in this paper were convex optimization problems. However, there are several important related problems which are posed as non-convex problems. For example, there is interest in computing the cq channel capacity over both the input probability distribution and the quantum state alphabet. However, Holevo information is convex in the quantum state alphabet, making the objective function non-convex. Other similar problems include the classical <cit.> and quantum <cit.> information bottleneck functions, which involve non-convex inequality constraints. One common way to find local solutions to general non-convex problems is through a convex-concave decomposition, of which there exist variations which utilize proximal gradient iterations <cit.>. Therefore, it will be interesting to see if existing algorithms which solve for these quantities <cit.> share the same interpretation or can be improved by a Bregman proximal convex-concave decomposition or similar method. § BACKTRACKING PRIMAL-DUAL HYBRID GRADIENT An implementation issue with PDHG is that it can be difficult to determine suitable step sizes τ and γ which satisfy (<ref>). If the kernel function φ is strongly convex, then the simplified condition (<ref>) provides an easier way to choose these step sizes. However, obtaining a tight bound on the relative smoothness parameter L or computing 𝒜 may be non-trivial tasks. Even when these constants can be easily computed, step sizes obtained from (<ref>) may be too conservative. To resolve this issue, we introduce a backtracking method summarized in Algorithm <ref> which adaptively chooses step sizes to satisfy the conditions required for convergence. Notably, the condition does not require us to know L or 𝒜, and is therefore easily computable. 
This algorithm is based on the backtracking method introduced in <cit.> which accounts for unknown 𝒜. Our algorithm also accounts for unknown L by using a similar approach as <cit.>. Note that the backtracking exit criterion <ref> can be interpreted as a combination of condition (<ref>) and relative smoothness as characterized by Proposition <ref>(a-iii). Backtracking primal-dual hybrid gradient We now present the following convergence result for the backtracking algorithm. Consider Algorithm <ref> to solve the convex optimization problem (<ref>). If f is L-smooth relative to φ and φ is strongly convex with respect to some norm, then the step sizes (τ_k, γ_k) are bounded below by τ_k ≥τ_minmin{τ_-1, α( √(L^2κ^2/4𝒜^4 + κ/𝒜^2) - Lκ/2𝒜^2) }, γ_k ≥γ_minτ_min / κ, where κτ_-1 / γ_-1. It also follows that the backtracking procedure will always terminate. As the backtracking exit condition (<ref>) is the sum of (<ref>) and the relative smoothness condition in Proposition <ref>(a-iii), a sufficient condition for the exit condition to hold is for these two conditions to hold independently. As f is L-smooth relative to φ, a sufficient condition is therefore for (<ref>) to hold. The remaining proof follows from a similar argument as <cit.>. Consider Algorithm <ref> to solve the convex optimization problem (<ref>). Let f^* represent the optimal value of this problem and (x^*, z^*) be any corresponding optimal primal-dual solution. If f is L-smooth relative to φ, then the iterates (x^k, z^k) satisfy ℒ(x_avg^k, z) - ℒ(x, z_avg^k) ≤1/kτ_min( D_φxx^0 + 1/2κz - z^0^2_2 ), for all x∈𝒞, z∈𝒵, and k∈ℕ, where τ_min=min_i{τ_i}, x_avg^k = 1/∑_i=0^k-1τ_i-1∑_i=0^k-1τ_i-1 x^i, and z_avg^k = 1/∑_i=0^k-1τ_i-1∑_i=0^k-1τ_i-1z^i. We will only sketch this proof, as it follows mostly from the same arguments as <cit.>. Using the Bregman proximal inequality <cit.>, we establish that ∇ f(x^k) + 𝒜^†(z^k+1)x^k+1 - x≤1/τ_k (D_φxx^k - D_φx^k+1x^k - D_φxx^k+1), for all x∈𝒞, and z - z^k+1𝒜(x^k+1) - b≤1/γ_kz^k+1 - z^kz - z^k+1 , for all z∈𝒵. Using this second result, we can use a similar argument as <cit.> to show that z^k+1 - z^k+1𝒜(x^k) - b≤1/2γ_k (z^k+1 - z^k^2_2 - z^k+1 - z^k+1^2_2 - z^k+1 - z^k^2_2). Now, subtracting f(x) for an arbitrary x∈𝒞 from both sides of the exit condition (<ref>), using convexity of f, and substituting in (<ref>) gives f(x^k+1) - f(x) + z^k+1𝒜(x^k+1 - x) ≤1/τ_k (D_φxx^k - D_φxx^k+1) + 1/2γ_kz^k+1 - z^k+1_2^2 - z^k+1 - z^k+1𝒜(x^k+1 - x^k). By combining (<ref>)–(<ref>) and using some algebraic manipulation, we can obtain ℒ(x^k+1, z) - ℒ(x, z^k+1) ≤1/τ_k (D_φxx^k - D_φxx^k+1) + 1/2γ_k (z - z^k^2_2 - z - z^k+1^2_2). The rest of the proof follows from <cit.>. § MATRIX-VALUED GRADIENTS To use mirror descent to solve problems in quantum information theory, we require expressions for the gradients of functions with matrix-valued inputs. We introduce the tools from matrix analysis which allow us to compute these. Let f: ℝ→ℝ̅ be a continuous extended-real-valued function, and X∈ℍ^n be a Hermitian matrix with spectral decomposition X=∑_iλ_iv_iv_i^†. The function f can be extended to matrices X as follows f(X) ∑_i=1^nf(λ_i)v_iv_i^†. The matrix logarithms used to define von Neumann entropy and quantum relative entropy use precisely this definition of primary matrix functions. In particular, von Neumann entropy can be defined as S(X)=[f(X)] where f(x)=-xlog(x). Let f: ℝ→ℝ̅ be a continuously differentiable scalar-valued function with derivative f' defined on an open real interval (a, b). 
Consider X, V ∈ℍ^n where X has diagonalization X=UΛ U^† with Λ=(λ_1,…,λ_n), and λ_i∈(a, b) for all i=1,…,n. * The directional derivative of f(X) along V is 𝖣f(X)[V] = U [ f^[1](Λ) ⊙ (U^† V U) ] U^†, where ⊙ represents the Hadamard or element-wise product and f^[1](Λ) is the first-divided difference matrix whose (i, j)-th entry is given by f^[1](λ_i, λ_j) where f^[1](λ, μ) = f(λ) - f(μ)/λ - μ, if λ≠μ, f^[1](λ, λ) = f'(λ). * The directional derivative of the trace functional g(X)= [f(X)] along V is 𝖣g(X)[V] = [f'(X) V]. The following corollaries are straightforward consequences of the gradient being defined by 𝖣g(X)[V] = ∇ g(X)V for all V∈ℍ^n. For X∈ f, the gradient of the trace functional g(X)= [f(X)] is ∇ g(X) = f'(X). Let 𝒜:ℍ^n→ℍ^m be an affine operator such that 𝒜(X) = ℒ(X) + C where ℒ:ℍ^n→ℍ^m is a linear operator and C∈ℍ^m is a constant matrix. For X∈ h, the gradient of h(X) = [f(𝒜(X))] is ∇ h(X) = ℒ^†(f'(𝒜(X))). § PROOF FOR PROPOSITION <REF> Now to prove the desired result, we will only show the proof for the negative log determinant function φ(X)=[f(X)] for f(x)=-log(x), as it generalizes the Burg entropy. Using the second derivative of the log determinant <cit.>, φ is μ-strongly convex with respect to the Frobenius norm only if 𝖣^2 φ(X)[V, V] = [X^-1VX^-1V] ≥μ, for all X∈𝒟(ℋ) and V ∈{ V∈ℬ(ℋ) : V_2 = 1 }. Note that for all density matrices, 0≼ X≼𝕀, and therefore X^-1≽𝕀 and VX^-1V≽0 for all symmetric V. It then follows from the fact that the trace of two positive semidefinite matrices are always non-negative that 𝖣^2 φ(X)[V, V] ≥[VX^-1V] ≥[VV] = V_2^2 = 1. Therefore, μ=1 satisfies the inequality, which concludes the proof. IEEEtran
http://arxiv.org/abs/2306.08505v1
20230614134123
DiffuDetox: A Mixed Diffusion Model for Text Detoxification
[ "Griffin Floto", "Mohammad Mahdi Abdollah Pour", "Parsa Farinneya", "Zhenwei Tang", "Ali Pesaranghader", "Manasa Bharadwaj", "Scott Sanner" ]
cs.CL
[ "cs.CL", "cs.LG" ]
DiffuDetox: A Mixed Diffusion Model for Text Detoxification Griffin Floto, Mohammad Mahdi Abdollah Pour, Parsa Farinneya, Zhenwei Tang, Ali Pesaranghader, Manasa Bharadwaj, Scott Sanner July 31, 2023 ======================================================================================================= Text detoxification is a conditional text generation task aiming to remove offensive content from toxic text. It is highly useful for online forums and social media, where offensive content is frequently encountered. Intuitively, there are diverse ways to detoxify sentences while preserving their meanings, and we can select from detoxified sentences before displaying text to users. Conditional diffusion models are particularly suitable for this task given their demonstrated higher generative diversity compared to existing conditional text generation models based on language models. Nonetheless, text fluency declines when they are trained with insufficient data, which is the case for this task. In this work, we propose DiffuDetox[<https://github.com/D3Mlab/diffu-detox>], a mixed conditional and unconditional diffusion model for text detoxification. The conditional model takes toxic text as the condition and reduces its toxicity, yielding a diverse set of detoxified sentences. The unconditional model is trained to recover the input text, which allows the introduction of additional fluent text for training and thus ensures text fluency. Extensive experimental results and in-depth analysis demonstrate the effectiveness of our proposed DiffuDetox. § INTRODUCTION Toxic texts with offensive and abusive words are frequently encountered in online forums and social media. Such a harmful online environment can lead to mental health problems <cit.>, which motivates considerable research efforts <cit.> in text detoxification, i.e., a conditional text generation task aiming to remove offensive content from sentences while preserving their meanings. Intuitively, there exist diverse ways to detoxify a given sentence. As shown in Table <ref>, some detoxified sentences are the results of simply removing or replacing the toxic word, e.g., Detoxified 1 and 2, which may cause loss of information or lower text fluency, while other candidates, e.g., Detoxified 3, can reach human-level text detoxification performance with satisfactory fluency and content preservation. Therefore, if a diverse collection of detoxified sentences is given, we can select the most fluent and content-preserving one to maximize user experience. To do so, we resort to textual conditional diffusion models <cit.> because they have been shown to generate more diverse sets of candidates than existing solutions based on transformers <cit.>, e.g., GPT2 <cit.>. Given their demonstrated high generative diversity, diffusion models are particularly suitable for this task. Nevertheless, previous textual conditional diffusion models <cit.> are not directly applicable to text detoxification due to the scarcity of text detoxification data. Given that text detoxification is a relatively new field and that human annotations are costly, the available text detoxification data is on the order of 1e^-1 to 1e^-2 the size of the datasets used for other tasks with textual conditional diffusion models <cit.>. To this end, we introduce DiffuDetox, a mixed conditional and unconditional diffusion model for text detoxification. In particular, the conditional model takes toxic text as a condition and, through a Markov chain of diffusion steps, yields a diverse set of detoxified sentences. 
On the other hand, the unconditional model is trained to recover any given input text exactly. That allows us to introduce additional fluent text to be reconstructed by the unconditional model, which is used to improve the fluency of the conditionally generated detoxified sentences. In this way, the resulting diffusion model can maintain a diverse collection of detoxified candidates with satisfactory sentence fluency and content preservation. Extensive experimental results and in-depth discussions demonstrate the effectiveness of DiffuDetox for text detoxification. Our main contributions are summarized in two folds: 1) To the best of our knowledge, we are the first to approach text detoxification with diffusion models, which can maintain a rich collection of detoxified sentences by their high generative diversity; 2) We propose a mixed diffusion model for text detoxification, where the conditional model reduces text toxicity and the unconditional model improves text fluency. § RELATED WORK §.§ Text Detoxification Previous text detoxification efforts fall into two main categories, supervised and unsupervised. The unsupervised methods are built on a set of toxic and a set of non-toxic texts without one-to-one mappings between them. Representative methods include Mask&Infill <cit.>, DRG-Template/Retrieve <cit.>, DLSM <cit.>, SST <cit.>, CondBERT and ParaGeDi <cit.>. In contrast, the supervised methods are built on parallel datasets in which one-to-one mappings between toxic and non-toxic texts are explicitly provided. ParaDetox <cit.> is a well-established method within this category, which fine-tunes BART <cit.> on their parallel data. §.§ Textual Diffusion Models Diffusion probabilistic models are deep generative models with Markov chains of diffusion steps to recover the noise slowly added to data <cit.>. Recently, diffusion models have shown impressive performance on continuous domains such as image and audio generation <cit.>, sparking interest in using these models in discrete spaces like text. Some textual diffusion models use a discrete diffusion process that operates on word tokens <cit.>, whereas other methods convert text to embeddings, and then treat text as continuous variables <cit.>. Although textual diffusion models have proved to be effective in various text generation tasks with rich data <cit.>, they have not yet been applied to tasks with fewer training samples, such as text detoxification in our case. <cit.> are the first to exploit unconditional diffusion models for conditional generation, while their method is limited to images and is not aiming for introducing additional data under the low-data setting. § METHODOLOGY As the overall framework of DiffuDetox shown in Figure <ref> details, our proposed diffusion model for text detoxification improves text fluency in the low-training data regime by using a mixture of a conditional and unconditional diffusion model. We overview diffusion models before discussing DiffuDetox in detail. §.§ Diffusion Models Diffusion is a generative modeling paradigm that can be understood as a denoising algorithm <cit.>. Noise is gradually added to data samples, while the diffusion model is trained to reverse the process and recover the original data. The framework can be described as a Markov process with T steps, where the original data exist at t=0. Given a sample x_0, the so-called forward process gradually adds noise to the data points, i.e., the blue arrows in Figure <ref>. 
The noisy sample can be described by: q(x_t | x_t-1) := 𝒩(x_t; √(1-β_t)x_t, β_tI) where the variance schedule parameters β_1,⋯,β_T are selected such that β_t ∈ [0,1] and β_0 is close to 0 and β_T is close to 1 <cit.>. This ensures that when t ≈ 0, the data has little noise added to it, while when t≈ T, the data is identical to a sample from a standard Gaussian distribution. The reverse process then attempts to remove the noise that was added in the forward process and is parameterized by θ as: p_θ(x_t-1 | x_t) := 𝒩(x_t-1; μ_θ(x_t,t), σ_tI) where the predictive model μ_θ is: μ_θ := 1/√(α_t)(x_t - β_t/√(1 - α̅_t)ϵ_θ(x_t, t)) which depends on time-dependent coefficients α := 1 - β_t, α̅_t := ∏_s=1^t α_s. In Eq. (<ref>), ϵ_θ is interpreted as predicting the noise that was added to x_t. To optimize the log-likelihood of this model, a simplified training objective is used which reduces the problem to: ℒ = 𝔼_t,x_0, ϵ [‖ϵ - ϵ_θ(√(α̅_t)x_0 + √(1 - α̅_t)ϵ, t) ‖^2] After training, samples are generated by beginning with pure noise from a standard Gaussian distribution, which is then gradually denoised T times by the learned reverse process. §.§ DiffuDetox: A Mixed Diffusion Model for Text Detoxification The task of text detoxification can be viewed as generating a non-toxic sentence, conditioned on a toxic input sentence. The goal is to ensure that the semantics and content of the text are preserved after detoxification, while ensuring that the generated text is fluent. With this interpretation <cit.>, we can apply a conditional diffusion model that generated non-toxic text, when conditioned on a toxic sentence. A conditional diffusion model is modified such that the reverse process is now p_θ(x_t-1 | x_t, c), and the predictive model is ϵ_θ(x_t, c, t). This model can be interpreted as mapping sequences to sequences in a non-autoregressive manner. To apply this model to textual data, sentences are tokenized and converted to a stack of embeddings which are then taken to be x_0 in the diffusion process. When sampling, embeddings that are generated by the diffusion model are converted to tokens by a shallow single-layer decoder. While diffusion models have high sample diversity which can be used to generate a large number of candidate items, the fluency of the samples is degraded when trained on a smaller dataset. We propose to use a combination of the conditional model diffusion model as well as an unconditional model to tackle this problem. The conditional model is used to detoxify text, whereas the unconditional model can be used to guide the sampling process towards higher quality samples <cit.>. The models are combined in a manner that is inspired by the gradient of an implicit classifier p^i(c|x) ∝ p(x|c) / p(x) such that the following linear combination of the models is used for sampling: ϵ̅_θ(x,c) = (1+w) ϵ_θ(x,c) - w ϵ_θ(x) § EXPERIMENTS §.§ Experimental Settings Datasets. We conduct our experiments upon a well-established benchmarking dataset ParaDetox[<https://huggingface.co/datasets/SkolkovoInstitute/paradetox>] <cit.>, which provides human-annotated one-to-one mappings of toxic and non-toxic sentence pairs from 20,437 paraphrases of 12,610 toxic sentences. We use the same data split of <cit.> with 671 testing sentences for fair performance comparisons. We further consider the BookCorpus <cit.>, MNLI <cit.>, and WikiAuto <cit.>, datasets as additional data for unconditional diffusion model training. Evaluation Metrics. 
We follow the well-established text detoxification work <cit.> to evaluate DiffuDetox with BLEU, Style Accuracy (STA), Content Preservation (SIM), Fluency (FL), and J score. In particular, STA and FL are computed with pre-trained classifiers <cit.> to measure the non-toxicity and fluency of a given sentence, respectively. And we compute SIM using cosine similarity between the input and the generated detoxified text with the model of <cit.>. Moreover, we compute J score <cit.> as the averaged multiplication of STA, SIM, and FL, which is highly correlated with human evaluation as shown by <cit.>. Implementation Details. We implement our mixed conditional and unconditional models with a single diffusion model where c=∅ for the unconditional case. During training, the conditional model is selected with probability φ=0.8, and the unconditional model is trained using the non-toxic sentences sampled from the ParaDetox dataset and the additional dataset with equal probabilities. We use the union of the BookCorpus, WikiAuto, and MNLI as the additional dataset. In the test stage, we select the best samples from a candidate set of 20 using the J score. The reported results are from a model trained for 1e^5 steps with a batch size of 32, and the mixture weighting parameter w in Eq. (<ref>) is set to 5. We use the text detoxification methods listed in Section <ref> as baselines. §.§ Experimental Results Performance Comparison. We have two key observations from the results shown in Table <ref>. Firstly, our proposed DiffuDetox outperforms most baseline methods on most evaluation metrics, and it is reaching state-of-the-art performance by outperforming ParaDetox on two metrics, demonstrating the effectiveness of our proposed method. Another observation is that DiffuDetox achieves a higher J score than human-level text detoxification. Note that the J score has been shown to be highly correlated with human annotations <cit.>. This human-level performance of DiffuDetox shows its promise to be deployed in real-world text detoxification scenarios to facilitate users in online forums and social media. Moreover, such results are achieved by selecting from the diverse collection of detoxified sentences generated by diffusion models, which reveals their high generative diversity and the suitability of being applied to text detoxification. Examples of detoxified sentences generated by DiffuDetox can be found in Appendix <ref>. Ablation Study. We conduct ablations study to investigate the effectiveness of the unconditional model. Since the unconditional model allows the introduction of the additional fluent text, the ablation study can provide insights into the effect of both the unconditional model and the introduced additional data. As shown in Table <ref>, the model named Conditional represents DiffuDetox without the unconditional component. We observe that the addition of the unconditional model improves all the metrics. In particular, text fluency achieves the most significant performance gain. More importantly, the addition of the unconditional model pushes the diffusion model over the human baseline for the J score. Such results demonstrate the effectiveness of the unconditional model and the introduced additional fluent text in improving text fluency and overall performance. § CONCLUSION In this paper, we approach the text detoxification task with diffusion models for their demonstrated high generative diversity. 
We introduced DiffuDetox, a mixed conditional and unconditional diffusion model, where the conditional part reduces toxicity whereas the unconditional part ensures fluency. Experimental results show that DiffuDetox achieves human-level text detoxification performance, making it promising for deployment in real-world text detoxification systems to benefit users. § LIMITATIONS AND FUTURE WORK One limitation of our method is that sampling requires evaluating both a conditional and an unconditional model, which results in slower inference times. That said, progressive distillation <cit.> provides an attractive solution to this problem. Another limitation is that <cit.> show that the diversity of generative models is degraded as w increases. Ideally, we would have a model that improves fluency without degrading sample diversity. As future work, we will leverage advanced large language models as the base architecture for training diffusion models to compete with high-performance auto-regressive models. Additionally, we will investigate modifications to diffusion models that are tailored to discrete data. § ETHICS STATEMENT Potential Misuse: DiffuDetox could hypothetically be used to obtain toxic sentences from non-toxic sentences. However, the effectiveness of such a scenario remains to be investigated. Environmental Cost: We note that while our work required extensive experiments to draw sound conclusions, future work will be able to draw on these insights and need not run as many large-scale comparisons. Models in production may be trained once using the most promising settings. § ACKNOWLEDGEMENTS We would like to acknowledge that this work was supported by LG Electronics, Toronto AI Lab Grant Ref No. 2022-1473. acl_natbib § APPENDIX Table <ref> shows examples of toxic texts with DiffuDetox paraphrases and human references. DiffuDetox is able to achieve human-level paraphrasing performance as evaluated quantitatively in Section <ref>.
http://arxiv.org/abs/2306.01902v1
20230602201919
Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation
[ "Zhengyue Zhao", "Jinhao Duan", "Xing Hu", "Kaidi Xu", "Chenan Wang", "Rui Zhang", "Zidong Du", "Qi Guo", "Yunji Chen" ]
cs.CV
[ "cs.CV" ]
Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation Zhengyue Zhao, Jinhao Duan, Xing Hu, Kaidi Xu, Chenan Wang, Rui Zhang, Zidong Du, Qi Guo, Yunji Chen July 31, 2023 ===================================================================================== Diffusion models have demonstrated remarkable performance in image generation tasks, paving the way for powerful AIGC applications. However, these widely-used generative models can also raise security and privacy concerns, such as copyright infringement and sensitive data leakage. To tackle these issues, we propose a method, Unlearnable Diffusion Perturbation, to safeguard images from unauthorized exploitation. Our approach involves designing an algorithm to generate sample-wise perturbation noise for each image to be protected. This imperceptible protective noise makes the data almost unlearnable for diffusion models, i.e., diffusion models trained or fine-tuned on the protected data cannot generate high-quality and diverse images related to the protected training data. Theoretically, we frame this as a max-min optimization problem and introduce EUDP, a noise scheduler-based method to enhance the effectiveness of the protective noise. We evaluate our methods on both the Denoising Diffusion Probabilistic Model and Latent Diffusion Models, demonstrating that training diffusion models on the protected data leads to a significant reduction in the quality of the generated images. In particular, the experimental results on Stable Diffusion demonstrate that our method effectively safeguards images from being used to train Diffusion Models in various tasks, such as training specific objects and styles. This achievement holds significant importance in real-world scenarios, as it contributes to the protection of privacy and copyright against AI-generated content. § INTRODUCTION In recent years, generative models such as GANs <cit.> and VAEs <cit.> have made significant strides in image synthesis tasks. Notably, the Denoising Diffusion Probabilistic Model (DDPM) <cit.> and other diffusion models <cit.> have surpassed GANs in performance <cit.>, becoming the state-of-the-art approach to image synthesis. Furthermore, the use of Latent Diffusion Models (LDM) <cit.> has enhanced the ability to generate high-resolution images and perform multi-modal tasks, such as text-to-image generation, leading to diverse AI-for-Art applications such as Stable Diffusion and MidJourney. While training a high-performance diffusion model from scratch remains a costly endeavor, there are effective fine-tuning techniques such as Textual Inversion <cit.> and DreamBooth <cit.> that allow personalized diffusion models to be trained from large pre-trained models with minimal training overhead and small datasets. However, the development of diffusion-based image synthesis methods has also given rise to security and privacy concerns. Unauthorized data exploitation is one major issue for generative models. For example, a pre-trained diffusion model can be fine-tuned with several personal facial images to generate fake images that could be harmful to the owner. Additionally, using artworks for training diffusion models could result in copyright infringement, dampening artists' creative enthusiasm. While artists may want to share their work on social networks, they may not want their work to be used for unauthorized exploitation, such as training a diffusion model <cit.>. Therefore, it is imperative to protect artworks without hindering their normal usage. 
While some research has already focused on protecting the copyright of artists using methods such as AdvDM <cit.> and Glaze <cit.>, these methods do not focus on the training process of diffusion models. AdvDM focuses on the inference stage of diffusion, while Glaze attacks the image feature extractor in Text-to-Image generative models and specifically targets the style mimicry task. There has been limited work toward protecting images from unauthorized exploitation during the training process of diffusion models. One possible approach is to add small protective noise to the images to generate unlearnable examples for diffusion models, making it difficult for diffusion models to learn the features of the protected images. This can help prevent the abuse of private data and safeguard privacy. To this end, we propose Unlearnable Diffusion Perturbation (UDP), an unlearning method designed to protect images from being utilized to train a high-performance diffusion model by adding perturbation protective noise to images. An example of the utilization of UDP is shown in Figure <ref>. The purpose of protective noise is to disrupt the training process of the diffusion models, and the generation of the protective noise can be defined as a max-min optimization problem. Specifically, the inner minimization denotes the optimization of model parameters during training a diffusion model, while the outer maximization is to generate protective noise to disrupt the training process. The scale of the protective noise is limited by a pre-defined bound to ensure that the added noise is imperceptible to human eyes. In this scenario, the protected images may have minimal differences compared to the original images but the diffusion model trained on these protected images is unable to generate the expected high-quality images. Additionally, we observe that modifying the sampling scheme of timesteps during the maximization process based on the noise scheduler of the diffusion process can further strengthen the protective effect of the noise. Building upon this observation, we introduce an improved method, Enhanced Unlearnable Diffusion Perturbation (EUDP) [<https://github.com/ZhengyueZhao/EUDP>]. Our contributions can be summarized as follows: * We introduce the concept of unlearnable examples for diffusion models and propose UDP, a poison attack specifically designed to generate unlearnable examples for diffusion models by solving a max-min optimization problem. * We propose EUDP, an improved method for the generation of unlearnable examples to make the protection more effective in preventing the diffusion model from learning the protected images. * We evaluated the proposed UDP and EUDP methods with DDPM on CIFAR-10 <cit.> and LDM on real-world datasets such as WikiArt <cit.> for various generative tasks including unconditional image synthesis, text-to-image generation, and image-to-image style transfer. The results of experiments demonstrate the effectiveness of our methods and show the practical application value of our methods in real-world scenarios. § BACKGROUND & RELATED WORK §.§ Diffusion Models Denoising Diffusion Probabilistic Model (DDPM) <cit.> is an image-generative diffusion model including a forward diffusion process and a reverse denoising process. In the forward process, an image _0 is gradually perturbated with random Gaussian noise for T steps and finally turns into random noise. In the reverse process, an image is conversely generated by gradually removing noise. 
Images at step t in the forward process can be expressed in terms of the images at step t-1: x_t=√(α_t)x_t-1+√(β_t)ϵ, where α_t and β_t=1-α_t are usually pre-defined parameters, and ϵ follows the standard Gaussian distribution 𝒩(0,I). The noise ϵ_θ predicted in the reverse process is learned by minimizing the following simplified loss function: ℒ_simple=𝔼_t,x_0,ϵ[‖ϵ - ϵ_θ(√(α̅_t)x_0 + √(1-α̅_t)ϵ, t) ‖^2 ] Latent Diffusion Models (LDM) <cit.> transfer the diffusion process from pixel space to latent space and introduce a cross-attention layer into the model architecture to generate high-resolution images with general conditioning inputs. The latent-space noise removed in the reverse process is learned by minimizing the following LDM loss function: ℒ_LDM=𝔼_t,z_0,ϵ[‖ϵ - ϵ_θ(z_t, t, τ_θ(y)) ‖^2 ] Compared with Eq. <ref>, an image x is encoded into a latent vector z by a pre-trained autoencoder z=ℰ(x), and the output of the reverse process is decoded back into an image by a decoder 𝒟(z). A conditioning input y, which could be a piece of text or an image, is encoded into a conditioning vector by a domain-specific encoder τ_θ. To obtain unlearnable examples for diffusion models, the aim of our UDP and EUDP is to hinder the optimization of the losses in Eq. <ref> and <ref>. §.§ Unlearnable Examples for Classification Models Recent research <cit.> shows that poisoning attacks on neural-network-based classification models can effectively decrease the accuracy of classifiers on the test set. Unlearnable examples are generated by adding imperceptible adversarial noise to a clean dataset, and classifiers trained on unlearnable examples fail to generalize to unseen samples. The main approaches to generating unlearnable examples are error minimization and error maximization. Error-Minimization Noise The basic idea of error-minimization noise <cit.> is to reduce the training loss of classifiers, on the premise that a smaller loss leaves less knowledge to learn. The error-minimization noise is generated by solving a bi-level min-min optimization problem, where the inner minimization finds the bounded protective noise that minimizes the classification loss, while the outer minimization finds model parameters that also minimize the loss of the classifier. Error-Maximization Noise  <cit.> shows that error-maximization noise generated during adversarial training can also significantly degrade the performance of classifiers. The generation process of error-maximization noise is the same as adversarial training, which solves a bi-level min-max optimization problem. Different from error-minimization noise, the inner maximization finds noise that maximizes the loss of the classifier. However, there is no general definition of unlearnable examples for generative models, where they may hold even greater practical significance than for classification models due to the privacy concerns raised by AIGC. §.§ Security & Privacy Protection for Diffusion Models Adversarial Attacks for Diffusion Models Adversarial attacks <cit.> have recently been introduced to diffusion models: <cit.> presents a targeted adversarial attack algorithm for Latent Diffusion Models, including an encoder attack and a diffusion attack. The encoder attack generates adversarial perturbations by optimizing the distance between the latent code of perturbed samples ℰ(x+δ) and a target latent vector z_targ, while the diffusion attack optimizes the distance between samples generated by the LDM and target images. This method can effectively prevent images from being modified by diffusion models in an image-to-image fashion.
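To make the mechanics of such an encoder attack concrete, a minimal PGD-style sketch is given below. It is only a schematic reconstruction of the idea just described, not the cited authors' implementation: the `encoder` callable, the target latent `z_target`, and the budget, step size, and iteration count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def encoder_attack(x, encoder, z_target, eps=8/255, step=1/255, iters=40):
    # PGD-style targeted attack on an LDM latent encoder (schematic sketch).
    # Minimizes ||E(x + delta) - z_target||^2 under an L_inf bound on delta.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.mse_loss(encoder(x + delta), z_target)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= step * grad.sign()   # descend on the latent distance
            delta.clamp_(-eps, eps)       # keep the perturbation imperceptible
    return (x + delta).detach().clamp(0, 1)
```

Flipping the sign of the update and replacing the latent distance by a diffusion training loss yields the error-maximizing flavour of perturbation that the present work builds on.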
Liang et al. propose AdvDM <cit.>, an untargeted adversarial examples generation method for diffusion models through Monte Carlo. AdvDM randomly samples different timesteps and latent variables to iteratively upgrade the adversarial noise. Though adversarial examples generated by AdvDM can partially disrupt the training of diffusion models, incorporating the training process of diffusion models into the generation of protective noise can greatly enhance this disruption. Privacy Protection for Generative Models There is an increasing demand for privacy protection for generative models because of the fast development of image synthesis methods.  <cit.> achieves the goal of preventing Diffusion Models from generating privacy-sensitive images by erasing the concepts within the models. Recent detection methods for DeepFakes <cit.> can also help prevent the abuse of generative models, especially Diffusion Models, by distinguishing whether an image is generated by them. A more direct approach is to prevent images from being used to train image generative models.  <cit.> proposes Glaze, a targeted adversarial attack on the feature extractor of text-to-image models, protecting artworks from style mimicry by misleading the match between text feature vectors and image feature vectors. In comparison to our method, this work does not specifically target the diffusion process, but focuses on text-to-image generative models and specifically addresses style transfer tasks. § METHODOLOGY §.§ Threat Model We first introduce the detailed protection scenario in the context of generative model training. We consider two parties involved in the process: the Image Exploiter for diffusion model training and the Image Protector for the IP owner. We aim to design Image Protector to prevent the selected images from being utilized to train or fine-tune high-quality image models but with no harm to the available public data from being utilized. Specifically, we explain the workflow of Image Exploiter and Image Protector as follows: Image Exploiter The Image Exploiter creates a training dataset for image generative models training or finetuning based on gathered public image resources. In practical settings, fine-tuning Diffusion Models usually includes two types: training specific objects or training specific styles. The former involves fine-tuning the model using a few images of a specific object to generate various images related to that object, while the latter involves training the model using a few images of a specific style for tasks such as style transfer. Generally, fine-tuning LDM requires the following knowledge: (1) a pre-trained LDM model, including the model structure and all parameters; (2) images of specific objects or styles, while IP of some styles or objects may be owned by other parties and unauthorized data exploitation should be eliminated. Image Protector The Image Protector aims to prevent their images from being used to train or fine-tune high-quality image generative models (such as LDM) while still making them publicly available. Specifically, the Image Protector wishes to prevent their images from being used to generate false images of specific objects (DeepFake) or images of specific styles (copyright infringement). We assume that the Image Protector possesses the following knowledge: (1) the images that need to be protected; (2) the image generative models that the Image Exploiter may use, including the models' structures and parameters. 
This assumption is practically significant due to the fact that many current pre-trained LDMs are fine-tuned from widely used LDMs such as stable-diffusion-v1-4. Additionally, we conduct transferability experiments on LDMs to demonstrate that this assumption can be relaxed. §.§ Problem Formalization To protect the selected images from unauthorized exploitation, we propose an unlearning methodology that generates unlearnable noise for the protected data, so that a generative model trained on the protected data has poor generation ability and is unable to produce high-quality images. We formalize normal diffusion model training and the protective noise generation process as follows. Normal diffusion model training: For a clean training image dataset x ∈ S_c, which follows the distribution x ∼ q^c(x), an image generator model G_θ(x) is trained to generate images following a distribution x ∼ p^c_θ(x) that is as close as possible to q^c(x). The distance between these two distributions, D(p^c_θ(x),q^c(x)), can be evaluated using a distance metric such as the KL divergence. Protective Noise Generation: By adding a small amount of protective noise δ^u (bounded by ρ_u) to the clean training dataset, we obtain unlearnable examples: x^u=x+δ^u where x^u ∈ S_u and follows the distribution x^u∼ q^u(x). For the image generator model G_θ(x^u) trained on the unlearnable data, the generated images follow the distribution x^u∼ p^u_θ(x). max_‖δ^u‖≤ρ_u D(p^u_θ(x),q^c(x)) To achieve the unlearnable effect, the design objective is to increase the distance in Eq. <ref> between the distribution of generated images and the distribution of clean training data as much as possible by adjusting the protective noise δ^u. §.§ Unlearnable Diffusion Perturbation We propose the Unlearnable Diffusion Perturbation (UDP) method for protective noise generation. For DDPM, the training images follow the distribution q(x), while the generated images follow the distribution p_θ(x), with the optimization objective being the cross-entropy between these two distributions: min_θℒ_CE=min_θ𝔼_q(x_0)[-log p_θ(x_0)]≤min_θℒ_VLB=min_θ𝔼_q(x_0:T)[log q(x_1:T|x_0)/p_θ(x_0:T)] To address the optimization objective in Eq. <ref>, we can approximate the optimization of the cross-entropy by optimizing the variational bound. To reduce the image generation quality of DDPM, we instead maximize this loss function over the protective noise during the training process. That is, we transform the problem of maximizing the optimal value of the cross-entropy in Eq. <ref> into finding the maximum, over the noise, of the minimum of the variational bound: max_δmin_θℒ_CE =max_δmin_θ𝔼_q_δ(x_0)[-log p_θ,δ(x_0)] ≤max_δmin_θℒ_VLB=max_δmin_θ𝔼_q_δ(x_0:T)[log q_δ(x_1:T|x_0)/p_θ,δ(x_0:T)] Similar to the training of DDPM, we expand the terms of ℒ_VLB in Eq. <ref> and replace them with the simplified loss function ℒ_simple in Eq. <ref>. The optimization objective is then to solve the following bi-level max-min optimization problem: δ^u=max_‖δ^u_i‖≤ρ_umin_θ( ∑_x_i𝔼_x_i∼ p_data(x), t_n∼𝒰(0,T)ℒ_t_n(f_θ(x_i,t_n+δ^u_i))) where δ^u represents the protective noise added to the training data x, and its L_∞ norm is constrained by the preset value ρ_u to limit the impact of the protective noise on the original image. Considering that a direct solution of the max-min problem in Eq.
<ref> is difficult, an iterative method that updates θ and δ^u_i alternately can be used for the optimization: θ ←θ-η·∇_θℒ_t_n( f_θ(x_i,t_n+δ^u_i) ) δ^u_i ←δ^u_i+λ·sign( ∇_δ^u_iℒ_t_n( f_θ(x_i,t_n+δ^u_i) )) where x_i is randomly sampled from the training data and t_n is randomly sampled from the uniform distribution 𝒰(0,T). Unlearnable examples x^u are then obtained by adding δ^u to the clean images following Eq. <ref>. The implementation of UDP is demonstrated in Appendix  <ref>. §.§ Enhanced Unlearnable Diffusion Perturbation Considering the multiple iterative steps of DDPM in its forward process, we make the following two observations, which help to further improve the protective effect of UDP. Observation 1: Decay Effect of Protective Noise Although all the iterative steps (t from 0 to T) are considered when computing the protective noise, the protective noise can only be added to the image at the first step (t=0). This causes the protective noise to be gradually overwhelmed in the forward process of DDPM: x_t+δ^u_t=√(α̅_t)x_0+√(1-α̅_t)ϵ_t+√(α̅_t)δ^u_0 Eq. <ref> shows that as t increases during the forward process, the impact of the protective noise gradually decreases with decay coefficient √(α̅_t). Since larger adversarial noise is more likely to have a greater impact on the model, the protective effect of the noise decreases as t increases. Observation 2: Varying Importance of Timesteps Different steps t have different effects on the training process of DDPM, since the scale of the noise added at each step t of the forward process is different. |∇_t x_t| = |x_t - x_t-1| = ( √(α̅_t-1)-√(α̅_t))x_0+√(α̅_t-1-α̅_t)ϵ_t Eq. <ref> gives the variation of the same image between two adjacent steps of the forward process, which implies that the scale of the noise added at step t is √(α̅_t-1-α̅_t). Following this, we can focus more on those t with larger added noise when computing the protective noise, since a step t with a larger scale of added noise in the forward process may have a greater impact on the quality of the generated images. Given the above two observations, we propose Enhanced Unlearnable Diffusion Perturbation (EUDP), a method that computes the protective noise by sampling timesteps according to the product of the decay coefficient of the protective noise, √(α̅_t), and the scale of the noise added in the forward process, √(α̅_t-1-α̅_t). Specifically, when solving the maximization problem for the noise δ^u in Eq. <ref>, the uniform distribution over t is replaced with 𝒫_EUDP(t), a distribution based on the product √(α̅_t)·√(α̅_t-1-α̅_t): δ^u=max_‖δ^u_i‖≤ρ_u( ∑_x_i𝔼_x_i∼ p_data(x), t_n∼𝒫_EUDP(t)ℒ_t_n(f_θ(x_i,t_n+δ^u_i))) where 𝒫_EUDP(t)=√(α̅_t)·√(α̅_t-1-α̅_t)/∑_t(√(α̅_t)·√(α̅_t-1-α̅_t)) Here the sampling of timesteps in the minimization over θ in Eq. <ref> still follows a uniform distribution. The protective noise δ^u and the model parameters θ in the bi-level max-min optimization are solved for by the iterative method in Eq. <ref>. The pseudo-code of EUDP for DDPM and LDM is given in Appendix  <ref>. § EXPERIMENTS We evaluate our method on both DDPM and LDM. For the DDPM experiments, we conduct a complete training (from scratch) on CIFAR-10 <cit.> and utilize quantitative evaluations with the metrics FID <cit.>, Precision, and Recall <cit.> to assess the quality of the generated images. For the LDM experiments, we fine-tune a widely used pre-trained LDM, stable-diffusion-v1-4 <cit.>, with datasets provided by DreamBooth <cit.> and WikiArt <cit.>.
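As a compact illustration of the alternating updates in Eq. <ref> and of the EUDP timestep distribution 𝒫_EUDP(t), one outer iteration can be sketched as follows. The model, loss, and data interfaces and the default constants are assumptions made for readability only; the actual procedure, with its repeated model steps, noise steps, and outer rounds, is the one given in the pseudo-code in the Appendix.

```python
import torch

def p_eudp(alpha_bar):
    # P_EUDP(t) ∝ sqrt(alpha_bar_t) * sqrt(alpha_bar_{t-1} - alpha_bar_t);
    # alpha_bar is a length-(T+1) tensor with alpha_bar[0] = 1.
    w = torch.sqrt(alpha_bar[1:]) * torch.sqrt(alpha_bar[:-1] - alpha_bar[1:])
    return w / w.sum()

def udp_outer_step(model, opt, images, delta, alpha_bar, loss_fn,
                   rho=16/255, lam=1.6/255, use_eudp=True):
    # One outer iteration of the bi-level max-min problem (schematic):
    # (1) one gradient step on theta (inner minimization, uniform t),
    # (2) one signed-gradient ascent step on delta (outer maximization,
    #     t drawn from P_EUDP when use_eudp=True). lam ≈ rho / 10.
    # loss_fn(model, x0, t) is an assumed interface for the per-timestep loss L_t.
    T = alpha_bar.numel() - 1
    t = torch.randint(1, T + 1, (images.size(0),))
    opt.zero_grad()
    loss_fn(model, (images + delta).detach(), t).backward()
    opt.step()

    weights = p_eudp(alpha_bar) if use_eudp else torch.full((T,), 1.0 / T)
    t = torch.multinomial(weights, images.size(0), replacement=True) + 1
    delta = delta.detach().requires_grad_(True)
    grad, = torch.autograd.grad(loss_fn(model, images + delta, t), delta)
    return (delta + lam * grad.sign()).clamp(-rho, rho).detach()
```

Calling `udp_outer_step` repeatedly with `use_eudp=False` recovers plain UDP; the only difference introduced by EUDP is the non-uniform timestep sampling in the maximization step.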
In LDM experiments, we qualitatively demonstrate the effectiveness of our method through image visualization results. Specific diffusion training and perturbation parameters are shown in Appendix  <ref> and  <ref>. More visualization results and transferability studies are shown in Appendix  <ref>. r0.4 < g r a p h i c s > Quality of images generated by DDPM trained on EUDP CIFAR-10 with different protection ratio. FID increases while Precision and Recall decrease as the protection ratio increases. §.§ Unlearnable Examples for DDPM We compare the quality of images generated by DDPM trained on different CIFAR-10, including clean, random noise added, AdvDM <cit.>, UDP, and EUDP. We evaluate each DDPM by generating 50,000 images and testing their FID, Precision, and Recall, where a smaller FID, larger Precision, and Recall indicate higher quality of generated images. The results are demonstrated in Table <ref>, which show that the FID of images generated by DDPM trained on the UDP dataset is significantly increased compared to random noise and AdvDM, while the Precision and Recall values are significantly reduced. This suggests that the UDP method effectively reduces the quality of generated images and successfully prevents the protected dataset from being used to train high-quality generative models. In addition, by using the EUDP method for protecting noise calculation, the quality of generated images is further reduced, indicating that EUDP can efficiently generate protective noise and achieve more effective results for protecting images. We evaluate the protective effect at different protection ratios. First, we randomly added noise to the dataset in proportion. Specifically, we randomly divided the CIFAR-10 into two sub-datasets in proportion: one clean and one protected. Images in the protected dataset are added with protected noise generated by EUDP. We combined the two sub-datasets to form the full training set with different protection ratios. The results in Figure <ref> show that the protective effect increases as the protection ratio increases, and the protective effect is significant when the protection ratio reached 50%. We then add class-wise protective noise to the dataset. For the images of the 10 classes in the CIFAR-10, we only protect images with some of the classes. Table <ref> shows the quality of the generated images of the corresponding protected and clean classes with class-wise added protective noise. The results show that the quality of the generated images of the protected classes is poor, while the quality of the generated images of the clean classes is similar to those generated on the clean CIFAR-10. We evaluate the robustness of protective noise. Specifically, we performed a series of transformations on the EUDP CIFAR-10, including adding random noise, quantizing image pixel values, Gaussian blur, and JPEG. Experimental results in Appendix  <ref> show that while JPEG and Gaussian blur could improve the quality of generated images to a certain extent, they can not restore the quality of generated images trained on clean data. §.§ Unlearnable Examples for LDM For the task of training an object, we fine-tune the LDM using the DreamBooth dataset with fine-tuning methods Textual Inversion and DreamBooth. As shown in Figure <ref>, the LDM fine-tuned on the clean dataset is able to generate images that match the given prompt. However, the LDM fine-tuned on the EUDP dataset failed to generate the expected images. 
Notably, images generated by Textual Inversion have nearly no features of the training images, whereas those generated by DreamBooth are of low quality and repeat the training set, failing to meet the prompt. This demonstrates that our method successfully protects images of specific objects from being used to train a high-quality LDM. Moreover, Figure <ref> shows that protecting a specific label hardly affects the generative quality of label fine-tuning on the clean dataset. In the scenario of training a style, we apply DreamBooth to fine-tune the LDM using the WikiArt dataset, evaluating it on both text-to-image and image-to-image (style transfer) tasks. Specifically, we selected six paintings by a particular artist (such as Monet) for style training. As shown in Figure <ref> and Figure <ref>, LDMs trained on the clean dataset are able to generate images with a specific artist's style and perform style transfer. Conversely, LDMs trained on the unlearnable dataset are unable to generate images with the corresponding style or convert the source image to a specific style. This experiment demonstrates that our method successfully protects specific styles, preventing infringement of copyright by unauthorized data exploitation such as style mimicry. For the transferability study, we use unlearnable examples generated based on stable-diffusion-v1-4 to fine-tune other LDMs such as stable-diffusion-v1-1 and stable-diffusion-v1-5. Results in Appendix  <ref> show that the protective noise is still valid in these situations. § CONCLUSION In this paper, we propose image-protection methods UDP and EUDP for diffusion image generative models. UDP adds small protective noise to training images by solving a max-min problem, making the protected images unusable for training high-quality diffusion models. This protects the privacy and copyright of the image owners. Through experiments on two basic diffusion models, DDPM and LDM, we demonstrate that our method can successfully achieve the goal of making the protected images "unlearnable" and has practical applications. Broader Impacts Our method aims to protect personal data from unauthorized exploitation, but similar techniques could be employed for harmful purposes. For instance, a poisoning attack can apply the generated protection noise to public datasets that result in a degradation of the image dataset quality, which could hinder the conventional training of generative diffusion models. Limitation Although we demonstrate the effectiveness of our method in black-box scenarios in Appendix  <ref>, a white-box environment is required to achieve the best protective effect. Furthermore, generating protection noise through a bi-level iterative optimization takes more time compared to simple adversarial noise. § ACKNOWLEDGEMENT This work is partially supported by the National Key R&D Program of China (under Grant 2021ZD0110102), the NSF of China (under Grants 62002338, 61925208, U22A2028, 62222214, 62102399, U19B2019), CAS Project for Young Scientists in Basic Research (YSBR-029), Youth Innovation Promotion Association CAS and Xplore Prize. plainnat § IMPLEMENTATION DETAILS §.§ Implementation of UDP We demonstrate the pseudo-code of Unlearnable Diffusion Perturbation for DDPM and LDM in Algorithm <ref> and Algorithm <ref> respectively. As shown in Eq. <ref> and Eq. <ref>, the bi-level max-min optimization problem can be solved by alternatively optimizing the parameters of the model and the protective noise ^u. 
Specifically, the parameters are firstly optimized by minimizing the loss for K steps with learning rate η and then the perturbations ^u are literately calculated by maximizing the loss for M steps with perturbation rate λ. After each step of perturbation calculation, ^u will be clipped with the noise scale ρ_u. This bi-level optimization process will be iterated for N times until the perturbation can effectively protect images. There K, M, N, η, and ρ_u are pre-defined parameters and the perturbation rate λ is typically set to one-tenth of the noise scale ρ_u. The implementations of UDP for DDPM and UDP for LDM are similar, with the main difference being that the loss in LDM (Eq. <ref>) is computed in the latent space and includes conditioning guidance compared with DDPM (Eq. <ref>). UTF8gbsn UTF8gbsn §.§ Implementation of EUDP The pseudo-code of Enhanced Unlearnable Diffusion Perturbation for DDPM and LDM is described in Algorithm <ref> and Algorithm <ref> respectively. The implementation of EUDP is similar to UDP, but the sampling of timesteps t during the optimization of protective noise ^u following the distribution of 𝒫_EUDP(t) as shown in Eq. <ref> instead of the uniform distribution 𝒰(0,T). UTF8gbsn UTF8gbsn § EXPERIMENT SETTINGS §.§ Parameters of Diffusion Models We train a DDPM from scratch and fine-tune an LDM with a pre-trained model. For DDPM training on CIFAR-10, we set the batch size to 128, the learning rate to 0.0001, and the number of epochs to 2000 (∼ 80k steps in total). The noise scheduler in the diffusion process is the widely used linear scheduler with β_0=0.0001 and β_T=0.02 where T=1000. For LDM, we utilize stable-diffusion-v1-4 [<https://huggingface.co/CompVis/stable-diffusion-v1-4>] as the base model and fine-tune it with different parameters for DreamBooth <cit.> and Textual Inversion <cit.>. Specifically, we fine-tune the model with DreamBooth with batch size set to 1, learning rate set to 4×10^-6, and steps to 400 and Textual Inversion with batch size set to 1, learning rate set to 5×10^-4, and steps to 5000. §.§ Parameters of UDP & EUDP For UDP and EUDP for DDPM in Algorithm <ref> and <ref>, we set parameters N to 100, K to 1000, M to 10, and η to 0.0001. Due to the significant training cost, we do not conduct systematic ablation experiments on these parameters. Empirically, we find that setting N× M×λ to 100 times of the scale of the noise ρ_u can ensure the protective effect of the perturbation. For experiments of LDM, we set N to 40, K to 20, M to 25, and η to 5×10^-6 in Algorithm <ref> and  <ref>. The noise scale of protective noise in all experiments of LDM is 16/255. §.§ Other Settings We use the diffuser library [<https://github.com/huggingface/diffusers>] for the training and inference of diffusion models and utilize this code [<https://github.com/openai/guided-diffusion/tree/main/evaluations>] to evaluate the quality of generated images with FID, Precision, and Recall. For the baseline AdvDM <cit.> in the quantitative experiments of DDPM, we set the iteration steps to 128 to ensure sufficient generation of adversarial noise. § MORE RESULTS §.§ Robustness Study We evaluate the effectiveness of the protective noise after being interfered with by natural perturbations, including adding random noise, quantification, JPEG, and Gaussian blur. Specifically, the added random noise has a scale of 16/255. Quantization involves reducing an 8-bit image to a 6-bit image. Gaussian blur is performed using a filter kernel size of 4x4 with σ set to 16/255. 
JPEG compression and decompression are carried out using the "imencode" and "imdecode" functions from the OpenCV2 library [<https://github.com/abidrahmank/OpenCV2-Python-Tutorials>]. Results in Table <ref> show that JPEG compression and Gaussian blurring can improve the quality of generated images to some extent, but they are difficult to achieve the same level of high-quality generated images as the original clean dataset. §.§ Transferability Study We conduct transferability studies to evaluate the effectiveness of our methods in black-box conditions. We first asses the transferability of the noise scheduler of EUDP for DDPM. Protective noise is generated by EUDP with β_0=0.0001 and β_T=0.02 and the protected images are tested with other noise schedulers. Results demonstrated in Table <ref> indicate that changing the noise scheduler does not significantly diminish the effectiveness of the protection noise. Additionally, it can be observed from the experimental results that the EUDP method, corresponding to a noise scheduler with faster noise adding (larger β_T) during the training process, exhibits a greater improvement in protection compared to the UDP method, which also confirms the validity of our observations in Section <ref>. For real-world conditions, we conduct transferability studies of our methods for LDM. We first examine the transferability of EUDP between different LDMs. Specifically, the protective noise is generated with stable-diffusion-v1-4 and the protected images are learned by pre-trained LDMs stable-diffusion-v1-1 [<https://huggingface.co/CompVis/stable-diffusion-v1-1>], stable-diffusion-v1-5[<https://huggingface.co/runwayml/stable-diffusion-v1-5>], and Counterfeit-V2.5[<https://huggingface.co/gsdf/Counterfeit-V2.5>]. Results in Figure <ref> show that protective noise generated by a specific model remains effective in protecting images from being learned by other models. Furthermore, when the model used for training is closer to the model for protection noise generation, the protective effect is better (stable-diffusion-v1-1). Conversely, when there is a significant difference between the two models, the protection effect is weaker (Counterfeit-V2.5). Figure <ref> demonstrates that the text prompt used for protective noise generation can be different from the text for fine-tuning diffusion models since our methods mainly focus on the diffusion process instead of the text encoder. §.§ More Visualization Here we show some more visualization results of protected images, text-to-image, and style mimicry. We demonstrate the clean artworks and EUDP protected artworks of different artists including Monet, Picasso, and van Gogh in Figure <ref>, text-to-image with specific styles in Figure <ref>, and style mimicry (image-to-image) in Figure <ref>, respectively.
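For completeness, the robustness transformations used in the study above can be reproduced roughly as follows. This is only a sketch: the JPEG quality factor and the (odd) blur kernel size are our own placeholder choices, since OpenCV's `GaussianBlur` requires odd kernel sizes; the remaining parameters follow those stated in the text.

```python
import cv2
import numpy as np

def corrupt(img, mode):
    # img: uint8 HxWx3 array; returns the transformed image (schematic).
    if mode == "noise":        # additive random noise with scale 16/255 (+-16 levels)
        noise = np.random.randint(-16, 17, img.shape)
        return np.clip(img.astype(int) + noise, 0, 255).astype(np.uint8)
    if mode == "quantize":     # reduce 8-bit pixel values to 6-bit values
        return img & 0b11111100
    if mode == "blur":         # Gaussian blur; odd kernel size is a placeholder choice
        return cv2.GaussianBlur(img, (5, 5), 0)
    if mode == "jpeg":         # JPEG round trip via imencode / imdecode
        ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 75])
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)
    raise ValueError(mode)
```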
http://arxiv.org/abs/2306.02829v1
20230605122431
Dynamic Calculations of Magnetic Field and Implications on Spin Polarization and Spin Alignment in Heavy Ion Collisions
[ "Hui Li", "Xiao-Liang Xia", "Xu-Guang Huang", "Huan Zhong Huang" ]
nucl-th
[ "nucl-th" ]
[email protected] Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai, China [email protected] Department of Physics and Center for Field Theory and Particle Physics, Fudan University, Shanghai 200433, China [email protected] Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai, China Department of Physics and Center for Field Theory and Particle Physics, Fudan University, Shanghai 200433, China Shanghai Research Center for Theoretical Nuclear Physics, NSFC and Fudan University, Shanghai 200438, China [email protected] Key Laboratory of Nuclear Physics and Ion-beam Application (MOE), Fudan University, Shanghai, China Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA Magnetic field plays a crucial role in various novel phenomena in heavy-ion collisions. We solve the Maxwell equations numerically in a medium with time-dependent electric conductivity by using the Finite-Difference Time-Domain (FDTD) algorithm. We investigate the time evolution of magnetic fields in two scenarios with different electric conductivities at collision energies ranging from = 7.7 to 200 GeV. Our results suggest that the magnetic field may not persist long enough to induce a significant splitting between the global spin polarizations of Λ and Λ̅ at freeze-out stage. However, our results do not rule out the possibility of the magnetic field influencing the spin (anti-)alignment of vector mesons. Dynamic Calculations of Magnetic Field and Implications on Spin Polarization and Spin Alignment in Heavy Ion Collisions Huan Zhong Huang Received 21 February, 2023; accepted 5 June, 2023 ======================================================================================================================= § INTRODUCTION In non-central relativistic heavy-ion collisions, two positively charged nuclei collide with non-zero impact parameters, resulting in the generation of a large magnetic field. This magnetic field can reach 10^18 Gauss in Au + Au collisions at =200 GeV at RHIC and 10^19 Gauss in Pb + Pb collisions at =5020 GeV at LHC <cit.>. The effect of this strong magnetic field on the Quark-Gluon Plasma (QGP) has attracted much attention due to its potential impacts on many novel phenomena, such as the chiral magnetic effect <cit.>, the spin polarization of hyperons <cit.>, the spin alignment of vector mesons <cit.>, the charge-dependent directed flow <cit.>, and the Breit-Wheeler process of dilepton production <cit.> in heavy-ion collisions. When making theoretical predictions about the aforementioned effects, a crucial question to be addressed is how the magnetic field evolves over time. In particular, it is important to determine whether the lifetime of the magnetic field is sufficiently long to maintain significant field strength leading to observable effects. In general, simulations of the magnetic field evolution can be carried out using the following steps. Before the collision, the charge density of the two colliding nuclei can be initialized by utilizing the Wood-Saxon distribution or by sampling the charge position in the nucleus using the Monte-Carlo Glauber model <cit.>. After the collision, the two charged nuclei pass through each other like two instantaneous currents. Currently, several approaches exist to simulate the collision process. 
The simplest method is to assume that the two nuclei pass through each other transparently or to incorporate the charge stopping effect using empirical formulae <cit.>. A more sophisticated approach involves simulating the entire collision process through transport models <cit.>. Once the motion of electric charge is determined, the magnetic field can be calculated using analytical formulae. These methods have been widely employed in previous studies to investigate the evolution of magnetic fields in heavy-ion collisions <cit.>. Previous simulations have shown that the strong magnetic field produced by the colliding nuclei rapidly decays with time in vacuum <cit.>. The lifetime of the magnetic field is primarily determined by fast-moving spectators, and the strong magnetic field only exists during the early stage of the collision. However, the time evolution of the magnetic field can be significantly modified when taking into account the response of the QGP, which is a charge-conducting medium. In this case, when the magnetic field begins to decrease, the induced Faraday currents in the QGP considerably slow down the damping of the magnetic field. Analytical formulae have demonstrated that the damping of the magnetic field in a constant conductive medium can be significantly delayed <cit.>. However, those analytical formulae only apply to the case of a constant conductivity, which is unrealistic because the conductivity only exists after the collision and the value varies as the QGP medium expands. Therefore, it is essential to numerically calculate the magnetic field. Numerical results can overcome the limitations of analytical calculations and provide unambiguous solutions for time-dependent conductive medium. As a result, numerical results can serve as a more accurate reference for final state observations that are sensitive to the evolution of the magnetic field. It is worth noting that some studies have also simulated the magnetic field by numerically solving the Maxwell equations with an electric conductivity <cit.> and by combining the magnetic field with the electromagnetic response of the QGP medium <cit.>. This paper presents a numerical study of the time evolution of magnetic fields with time-dependent electric conductivities at = 7.7–200 GeV. To solve the Maxwell equations, we utilize the Finite-Difference Time-Domain (FDTD) algorithm <cit.>. The paper is organized as follows: Sec. <ref> introduces the analytical formulae. Sec. <ref> describes the numerical model setup of the charge density, charge current, and the electric conductivity. Sec. <ref> describes the numerical method. Sec. <ref> presents the results and discusses the impact on the spin polarization and the spin alignment. Finally, Sec. <ref> concludes the results. § LIMITATION OF ANALYTICAL FORMULA We consider the electromagnetic field which is generated by external current of two moving nuclei and evolves in a conductive medium created in heavy-ion collisions. The electromagnetic field is governed by Maxwell equations: ∇· = ρ, ∇· = 0, ∇× = -∂_t , ∇× = + σ + ∂_t , where ρ and are the charge density and the charge current, and σ is the electric conductivity of the medium. For a point charge q moving with a constant velocity $̌,ρandare: ρ(t,) = qδ^(3)[-_q(t)], (t,) = qδ̌^(3)[-_q(t)], whereis the position of the field point and_q(t)is the position of the point charge at timet. 
If the conductivityσis a constant, the magnetic field has been rigorously derived as follows <cit.>: (t,) =γ×̌/Δ^3/2(1+γσ/2||̌√(Δ))e^A, whereγ=1/√(1-^̌2)is Lorentz contraction factor,≡-_q(t)is the position difference between the field point and the point charge at timet,Δ≡R^2+(γ·̌)^2, andA≡-γσ(γ·̌+||̌√(Δ))/2. Ifσis set to zero, the above formula can recover to the electromagnetic field in vacuum which can be expressed by the Lienard-Wiechert potential: (t,) =γ×̌/[R^2+(γ·̌)^2]^3/2. Because the Maxwell equations (<ref>–<ref>) satisfy the principle of superposition, the formulae (<ref>) and (<ref>) can also be applied to charge distributions rather than just a point charge. Therefore, the formulae have been widely used in the literature <cit.> to calculate the magnetic field generated by nuclei in heavy-ion collisions. However, Eq. (<ref>) is valid only if 1) the point charge moves with a constant velocity, and 2) the conductivityσis constant (fort∈[-∞,∞]). Unfortunately, neither of these conditions is realistic in heavy-ion collisions. First, when the collision occurs, charged particles slow down, and the velocities keep changing during the subsequent cascade scattering. Second, the QGP is produced after the collision, which means that the conductivityσis non-zero only aftert = 0(the time when the collision happens) and the value ofσvaries with time. For these reasons, it is important to develop a numerical method which can solve the Maxwell equations under more complicate and more realistic conditions ofρ,, andσ. In this paper, we focus on studying the influence of time-dependentσon the evolution of magnetic field. § MODEL SETUP §.§ Charge density and current In heavy-ion collisions, the external electric current arises from the contribution of protons in the fast moving nuclei. In this case, we consider two nuclei, which are moving along+zand-zaxis with velocityv_z, and their projections on thex-yplane are centered at(x=±b/2, y=0), respectively, withbbeing the impact parameter. In the rest frame of a nucleus, the charge distribution can be described by the Wood-Saxon distribution: f(r) = N_0/1+exp[(r-R)/a], whereRis the nuclear radius,ais the surface thickness, andN_0is a normalization factor determined by4π∫f(r)r^2dr = Ze. Take the gold nucleus as an example, we haveZ=79,R=6.38fm,a=0.535fm, thereforeN_0≈0.0679 e/fm^3. Then, it is straightforward to derive the charge density and current of the two moving nuclei by a Lorentz boost from Eq. (<ref>), which leads to ρ^±(t,x,y,z) = γ f(√((x∓ b/2)^2+y^2+γ^2(z∓ v_zt)^2)), j_x^±(t,x,y,z) = 0, j_y^±(t,x,y,z) = 0, j_z^±(t,x,y,z) = γ v_z f(√((x∓ b/2)^2+y^2+γ^2(z∓ v_zt)^2)), where the±sign overρandjon the left side indicates the direction of nucleus' motion alongzaxis, the velocityv_z = √(γ^2 - 1) / γ, withγ= / (2m_N)andm_N=938MeV. The total charge density and current are given as follows: ρ(t,x,y,z) = ρ^+(t,x,y,z) + ρ^-(t,x,y,z), (t,x,y,z) = ^+(t,x,y,z) + ^-(t,x,y,z). Eqs. (<ref>) and (<ref>) can describe the charge and current distributions before the collision exactly when the two nuclei are moving at a constant velocity. After the collision, the two nuclei are “wounded”, and some charged particles are stopped to collide with each other. This causes dynamic changes in the charge and current distributions. However, the main goal of this paper is to investigate how the time behavior of the magnetic field is influenced by the time-dependentσ. 
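As a quick numerical cross-check of the Wood-Saxon normalization quoted above for gold, N_0 follows from 4π∫ f(r)r^2 dr = Ze by a short quadrature (a sketch; only standard scientific-Python routines are assumed):

```python
import numpy as np
from scipy.integrate import quad

Z, R, a = 79, 6.38, 0.535   # gold: charge number, radius (fm), surface thickness (fm)

# 4*pi * N0 * integral of r^2 / (1 + exp((r - R)/a)) dr = Z e
integral, _ = quad(lambda r: r**2 / (1.0 + np.exp((r - R) / a)), 0.0, 50.0)
N0 = Z / (4.0 * np.pi * integral)
print(f"N0 = {N0:.4f} e/fm^3")   # ≈ 0.0679, matching the value quoted in the text
```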
As a simplification, we currently assume that the two nuclei pass through each other and continue moving with their original velocity, so the charge and current distributions in Eqs. (<ref>) and (<ref>) are unchanged after the collision. This allows us to compare our numerical results with the analytical results obtained by Eq. (<ref>) under the same conditions ofρand, so that we can focus on studying the influence of the time-dependentσ. §.§ Electric conductivity Generally, Eq. (<ref>) is not a realistic description of the electromagnetic response of QGP matter because it assumes a constant conductivity. In reality, the QGP matter exists only after the collision, and the conductivity is time-dependent during the expansion of the system. To provide a more realistic description of the evolution of the magnetic field, it is necessary to consider a time-dependent electric conductivity. In this study, we consider two scenarios for the electric conductivity. In the first scenario, the conductivity is absent before the collision, and it appears to be constant after the collision. Thus, we can introduce aθ(t)function to describe it, σ = σ_0 θ(t). In this equation, if the constant conductivityσ_0were not multiplied by theθ(t)function, the formula (<ref>) would be valid for calculating the magnetic field. However, as we will show in Sec. <ref>, even with such a minor modification on the electric conductivity, the time behavior of the magnetic field becomes very different. In the second scenario, we consider the electric conductivity to be absent before the collision, and after the collision the electric conductivity depends on time via σ = σ_0 θ(t)/(1 + t / t_0)^1/3. The denominator in this equation accounts that the conductivity decreases as the QGP medium expands <cit.>. Thus, this scenario provides a more relativistic description of the magnetic field's time behavior in heavy-ion collisions. § NUMERICAL METHOD In the aforementioned scenarios in Eqs. (<ref>) and (<ref>),σis time dependent, therefore the analytical results in Eq. (<ref>) is not applicable, and the Maxwell equations (<ref>–<ref>) need to be solved numerically. Becauseσis zero before the collision, and the two nuclei move linearly with constant velocity, the electromagnetic field att ≤0can be analytically calculated by the Lienard-Wiechert formula as given by Eq. (<ref>). This provides the initial condition of the electromagnetic field att = 0. Once the initial condition is given, the electromagnetic field att ≥0is calculated by numerically solving the Maxwell equations (<ref>–<ref>). We use the FDTD algorithm <cit.> to solve the Maxwell equations. In detail, electric and magnetic fields are discretized on the Yee's grid, and the updating format forandcan be constructed by discretizing Eqs. (<ref>) and (<ref>) with a finite time step, as follows (t + Δ t) - (t)/Δ t = - ∇×(t+Δ t/2), and (t + Δ t) - (t)/Δ t + σ(t + Δ t) + (t)/2 = ∇×(t+Δ t/2) - (t+Δ t/2). The Yee's grid provides a high-accuracy method to calculate∇×and∇×. As time evolves,andare updated alternately. For example, ifis initially known at timetandis initially known at timet+Δt/2, then one can use the values of(t+Δt/2)and Eq. (<ref>) to updatefromttot + Δt; and after(t + Δt)is obtained, one can use Eq. (<ref>) to updatefromt + Δt/2tot + 3Δt/2. This algorithm provides higher accuracy than the regular first-order difference method. § NUMERICAL RESULTS Using the numerical method described in Sec. 
<ref>, we calculate the magnetic field by solving the Maxwell equations (<ref>–<ref>) under the conditions ofσ= 0,σ= σ_0 θ(t), andσ= σ_0 θ(t)/(1 + t / t_0)^1/3, respectively. As a verification of our numerical method, we have checked that our numerical solution forσ= 0matches the analytical result by Eq. (<ref>). We also calculate the magnetic field under the condition ofσ= σ_0using the analytical formula (<ref>) for comparison. In all the results presented in this section, the values ofσ_0andt_0are set to beσ_0= 5.8MeV andt_0 = 0.5fm/c, which are taken from Ref. <cit.>. §.§ σ = σ_0 θ(t) vs σ = σ_0 Figure <ref> displays the time evolution of the magnetic field in the out-of-plane direction (B_y) at the center of collision (𝐱 = 0) in Au+Au collisions for energies ranging from 7.7 to 200 GeV with impact parameterb = 7fm. The results ofσ= σ_0 θ(t)are calculated using the numerical algorithm described in Sec. <ref>, while the results ofσ= σ_0are calculated using the analytical formula given by Eq. (<ref>). The magnetic field in vacuum (σ= 0) is also shown as a baseline. In general, the presence of electric conductivity delays the decreasing of the magnetic field. However, the time behavior of the magnetic field under the condition ofσ= σ_0 θ(t)is very different from that ofσ= σ_0. In Figure <ref> we can see that, in the case ofσ= σ_0(namely,σis constant at botht < 0andt > 0), the magnitude of magnetic field is different from the vacuum baseline since a very early time. On the other hand, in the case ofσ= σ_0 θ(t), the difference between the magnetic field and the vacuum baseline is negligible at early time stages (t < 1fm/c for 200 GeV ort < 3fm/c for 7.7 GeV). This is because thatσexists only after the collision and it needs some time to build the effect on delaying the magnetic field's decay. Only at very late time stage (t > 7fm/c), when the evolution system has “forgotten” whetherσis zero or not beforet = 0, the curves ofσ= σ_0 θ(t)and ofσ= σ_0converge. In the middle time stage, the magnitude of the magnetic field is ranked in the order:B[vacuum] < B[σ=σ_0θ(t)] < B[σ=σ_0]. Our results indicate that the analytical formula (<ref>) significantly overestimates the magnetic field in the early and middle time stage compared to the numerical results. The difference between the analytical and numerical results arises from theθ(t)function introduced in Eq. (<ref>). It is important to note that the conductivity is absent att < 0in realistic collisions, therefore the formula (<ref>) is not applicable. This remarks the importance of considering time-dependentσand solving the Maxwell equations numerically. At the late time stage, although the analytical results agree well with the numerical ones, the magnetic field has become very small and has little impact on final observables. §.§ σ = σ_0 θ(t) vs σ = σ_0 θ(t) / (1 + t / t_0)^1/3 The electric conductivity in heavy-ion collisions is a time-dependent quantity due to the expansion of the QGP. Therefore, we consider a more realistic scenario where the electric conductivity decreases with time as given by Eq. (<ref>). Figure <ref> shows the corresponding results, which are compared to the results under the conditions ofσ= σ_0 θ(t)andσ= 0. We see again that, in both scenarios ofσ= σ_0 θ(t)and ofσ=σ_0θ(t) / (1 + t / t_0)^1/3, the magnitude of the magnetic field does not obviously diverge from the vacuum baseline at the early time stage. At later time, the differences is manifested, and we see thatB[vacuum] < B[σ=σ_0θ(t) / (1 + t / t_0)^1/3] < B[σ=σ_0θ(t)]. 
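To make the update scheme of Eqs. (<ref>) and (<ref>) concrete, its one-dimensional, source-free analogue can be written in a few lines. This is only a toy sketch in units with c = ε_0 = 1; the results discussed in this section are of course obtained with the full three-dimensional Yee grid and the external currents of the moving nuclei.

```python
import numpy as np

def fdtd_1d(nz=400, nt=300, dz=0.1, courant=0.5, sigma=0.0):
    # Leapfrog FDTD update for E_x, B_y in 1D with a conductive medium:
    # dB/dt = -dE/dz and dE/dt = -dB/dz - sigma*E on a staggered (Yee-type) grid.
    dt = courant * dz
    E = np.exp(-((np.arange(nz) - nz / 2) * dz) ** 2)   # initial Gaussian pulse
    B = np.zeros(nz - 1)                                # B lives on half-integer sites
    for _ in range(nt):
        # update B at t + dt/2 from the spatial derivative of E
        B -= dt * (E[1:] - E[:-1]) / dz
        # semi-implicit update of E: the sigma*E term is averaged over the step
        curlB = (B[1:] - B[:-1]) / dz
        E[1:-1] = ((1 - sigma * dt / 2) * E[1:-1] - dt * curlB) / (1 + sigma * dt / 2)
    return E, B
```

Running the sketch with σ = 0 and σ > 0 shows directly how the conductive term modifies the field evolution; the semi-implicit treatment of σE is what keeps the leapfrog update stable for finite conductivity.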
Needless to say, the decreasing conductivity has smaller effect on delaying the magnetic field's decay than a constant one. Nevertheless, Figure <ref> shows that the magnitude of the magnetic field withσ=σ_0θ(t) / (1 + t / t_0)^1/3are more close to the one ofσ=σ_0θ(t)than to the vacuum baseline, especially at high energies. This suggests that the even if the conductivity decreases, it still has an obvious effect on delaying the damping of the magnetic field. However, this effect is only significant in late time stage, when the magnetic field has already decreased. §.§ Impact on the spin polarization Now let us discuss the impact of the magnetic field on the splitting between the global spin polarizations ofΛandΛ̅. The magnetic-field-induced global spin polarization ofΛandΛ̅can be calculated using the following formula <cit.> P_Λ/Λ̅ = ±μ_ΛB/T, whereμ_Λis the magnetic moment ofΛand is equal to-0.613μ_N, withμ_Nbeing the nuclear magneton, andTis the temperature when the hyperon spin is “freezed”. We shall use the hadronization temperatureT≈155MeV as an estimate. Then the splitting between theΛandΛ̅global spin polarizations is given by P_Λ̅-P_Λ = 0.0826eB/m_π^2. Based on the numerical results presented in Figure <ref>, the magnitude of the magnetic field at late time is of the order ofeB_y∼10^-3–10^-2 m_π^2, which is significantly smaller than the initial values att=0. Therefore, the effect of the magnetic field on the global spin polarizations ofΛandΛ̅is negligible, as the splitting can be no larger than0.1%. This is consistent with the recent STAR data <cit.> which puts an upper limit ofP_Λ̅-P_Λ < 0.24%at=19.6 GeV andP_Λ̅-P_Λ < 0.35%at=27 GeV. In conclusion, our results suggest that the magnetic field is not sufficiently long-lived to provide a distinguishable splitting between theΛandΛ̅global spin polarizations under the current experimental accuracy; similar results were obtained also in Ref. <cit.>. §.§ Impact on the spin alignment The magnetic field also plays an important role in the spin (anti-)alignment of vector mesons. For vector mesons such asϕandK^*0, the spins of the constituent quarks in the meson have a lager chance to be anti-algined [i.e. the(|↑↓⟩+|↓↑⟩)/√(2)state] than to be aligned (|↑↑⟩or|↓↓⟩state) in an external magnetic field <cit.>. This effect can be explored experimentally by measuring the spin-density matrix elementρ_00. We note thatρ_00is a frame dependent quantity. The following formulae show theρ_00with respect tox,y, andzaxis, respectively <cit.>: ρ_00^(x) = 1-P_x^qP_x^q̅+P_y^qP_y^q̅+P_z^qP_z^q̅/3+𝐏_q·𝐏_q̅, ρ_00^(y) = 1-P_y^qP_y^q̅+P_x^qP_x^q̅+P_z^qP_z^q̅/3+𝐏_q·𝐏_q̅, ρ_00^(z) = 1-P_z^qP_z^q̅+P_x^qP_x^q̅+P_y^qP_y^q̅/3+𝐏_q·𝐏_q̅. where(P_x^q, P_y^q, P_z^q)and(P_x^q̅, P_y^q̅, P_z^q̅)are spin polarization vectors of the constituent quark and anti-quark, respectively. Our results have shown that the global spin polarization induced by the magnetic field is a small amount (<0.1%), therefore one may expect that the contribution from the magnetic field to the spin alignment (measured viaρ_00-1/3, which is proportional to the square of the magnetic field) will be even smaller. However, it should be realized that our calculations do not take into account the fluctuations in the charge density and current. Therefore, the results should be interpreted as the averaged magnetic field, which suggest that the average values such as⟨P_q ⟩and⟨P_q̅ ⟩are small, but do not imply that the correlation betweenP_qandP_q̅is small. 
Instead, when a vector meson is formed by combination of a quark and an anti-quark, the distance between the quarks should be small enough, thusP_qandP_q̅, which arise from the fluctuation of magnetic field, are highly correlated. This can lead to a massive contribution toρ_00. Therefore, our results do not rule out the possible effect of the magnetic field on the spin (anti-)alignment of vector mesons. For the same reason, the spin alignment of vector mesons can also arise from the fluctuation of other fields such as vorticity <cit.>, temperture gradient <cit.>, shear tensor <cit.>, and strong-force field <cit.>. Finally, it is important to note that, if the spin alignment is mainly contributed by fluctuations, then the value ofρ_00is not constrained by the value of global or localΛpolarizations. This may explain the significant value of|ρ_00-1/3|in the experimental data <cit.>, whereas the global or localΛpolarizations are much smaller <cit.>. § SUMMARY In this study, we present a numerical method to solve the Maxwell equations and investigate the evolution of magnetic field in heavy-ion collisions. We also discuss the impact of the magnetic field on the spin polarizations ofΛandΛ̅as well as the spin alignment of vector mesons. We demonstrate that although the electric conductivity can delay the decay of the magnetic field, this effect has been overestimated by the analytical formula which assumes a constant conductivity. After taking into account that the conductivity only exists after the collision, we find that the magnetic field is not sufficiently long-lived to induce a significant splitting between the global spin polarizations ofΛandΛ̅. On the other hand, the spin alignment of vector meson is a measure of correlation between the spin polarizations of quark and anti-quark, instead of the spin polarization being squared solely. Therefore, although the averaged spin polarization induced by the magnetic field is very small, our results do not rule out the possibility that the fluctuations of the magnetic field, as well as other fields, can have a significant contribution to the spin alignment of vector meson. We thank Dmitri Kharzeev and Oleg Teryaev for useful comments on the retreat on Spin Dynamics, Vorticity, Chirality and magnetic field workshop. This work was supported by the NSFC through Grants No. 11835002, No. 12147101, No. 12225502 and No. 12075061, the National Key Research and Development Program of China through Grant No. 2022YFA1604900, and the Natural Science Foundation of Shanghai through Grant No. 20ZR1404100. H. L was also supported by the China Postdoctoral Science Foundation 2019M661333. apsrev4-2
http://arxiv.org/abs/2306.10428v1
20230617213021
Differentially Private Histogram, Predecessor, and Set Cardinality under Continual Observation
[ "Monika Henzinger", "A. R. Sricharan", "Teresa Anna Steiner" ]
cs.DS
[ "cs.DS", "cs.CR" ]
http://arxiv.org/abs/2306.09793v1
20230616120501
Non-locality of the energy density for all single-photon states
[ "Maxime Federico", "Hans-Rudolf Jauslin" ]
quant-ph
[ "quant-ph" ]
APS/123-QED Laboratoire Interdisciplinaire Carnot de Bourgogne, CNRS - Université de Bourgogne, UMR 6303, BP 47870, 21078 Dijon, France The non-locality is a well-established property of single-photon states. It has been demonstrated theoretically using various approaches. In this article, we propose a demonstration based on the electromagnetic energy density observable and on the anti-local property of the frequency operator Ω=c(-Δ)^1/2. The present proof is completely general for all single-photon states while earlier proofs in the literature were limited to particular cases, either with some uniform localization condition or with some particular electric and magnetic localization restrictions. Non-locality of the energy density for all single-photon states M. Federico, H. R. Jauslin July 31, 2023 =============================================================== § INTRODUCTION Localization of photons, together with the existence of a position operator and consequently of a position wavefunction for photons, have maintained a vivid debate in the physics community since the early days of quantum mechanics <cit.>. Regarding the question of localization, a consensus has been reached and many works have pointed out that photons cannot be spatially localized <cit.>. However, following the experimental development of on demand single-photon sources <cit.> one can ask how it is possible to even produce single-photons in a controlled way without contradicting causality, or how close experimentally produced photons can be to perfect single-photon states. These questions have restarted the interest into the localization of photons in general <cit.> and especially into finding bounds or limits on either the localizability of single-photons or the fidelity with respect to perfect single-photon states. We recall the definition of a localized state that we will use as formulated by Knight <cit.>: it is a state which cannot be detected by any means, outside its volume of localization i.e. for any local observable, a localized state will give the same expectation value as the vacuum state, outside its volume of localization. For example, let |φ⟩ be a spatially localized state in a volume 𝒱_s, and Ô(𝒱_d) a local observable which can probe the state |φ⟩ over a volume 𝒱_d. Since |φ⟩ is localized, the expectation value of Ô(𝒱_d) can give the following results (illustrated in FIG. <ref> (a) and (b)): * ⟨φ|Ô(𝒱_d)|φ⟩=⟨∅|Ô(𝒱_d)|∅⟩ for 𝒱_d∩𝒱_s={ø}, * ⟨φ|Ô(𝒱_d)|φ⟩≠⟨∅|Ô(𝒱_d)|∅⟩ for 𝒱_d∩𝒱_s≠{ø}. The contraposition implies that a non-localized state is a state for which there exists at least one local observable whose expectation value is not equal to that of the vacuum in at least one point outside the volume of localization (FIG <ref> (b_1,2)). In this article, we will provide a demonstration of the non-localization of any single-photon states, based on the measurement of the mean value of the local energy density in any finite volume. It will be inspired by the pioneering works of I. and Z. Białynicki-Birula <cit.> and on the anti-local property of the frequency operator Ω <cit.> (see Appendices <ref> and <ref>). The frequency operator Ω is defined as the unique positive selfadjoint operator satisfying Ω^2=-c^2Δ <cit.> (see Appendix <ref>), where Δ is the Laplace operator. Another important operator that will be used in the following is the helicity operator Λ <cit.>, defined by c∇×=ΩΛ=ΛΩ (see Appendix <ref>). 
Its spectrum for transverse fields is {±1} and it allows to decompose any transverse field v⃗ into its positive and negative helicity parts v⃗=v⃗^(h+)+v⃗^(h-) where v⃗^(h±)=(1+Λ)v⃗/2 and Λv⃗^(h±)=±v⃗^(h±). The article is structured as follows: we briefly review the non-localization of photons, before recalling how one can describe single-photon states using position space representations. We then discuss the notion of local observable and select a particular one to create a detection model able to locally probe single-photons. We will afterward formulate our proof of the non-locality of photons and discuss how it is more general, to our knowledge, than what can be found in the literature. We will finally illustrate the non-locality with two examples. § NON-LOCALIZATION IN THE LITERATURE We start with a brief review of some existing results about the non-localization of single photons. First, we mention the theorem by Knight <cit.> which is valid for bosons in general. It shows that any bosonic state having a finite number of quantum excitations cannot be local. Practically, it means that there exists at least one local observable which does not satisfy Knight's localization criterion. Since Knight's result is very general, it applies in particular for single-photon states. Our result however, will provide a concrete example of a local observable for which the non-locality is explicit. We will thus show a particular case of Knight's theorem in the single-photon context. Other results about localization based on the energy density were formulated in the works of I. and Z. Białynicki-Birula <cit.>. In <cit.>, the particuar case of uniform localization of photons was treated (i.e. only radial properties were considered) using the Paley-Wiener theorem <cit.>. Single-photons with an exponential falloff of the form exp(-Ar), A>0 cannot exist but, weaker falloffs are allowed and an example corresponding to a falloff of the form exp(-A√(r)) was given. In general, in order to fulfill the constraint from the Paley-Wiener theorem, a quasi-exponential localization is possible with a falloff exp(-Ar^γ), where γ<1. Our result will be more general since it is valid for any photons and not only those with uniform localization. The advantage of the argument of <cit.> is that it provides a localization limit and gives a concrete example of a solution of Maxwell's equations approching that limit. In <cit.>, the authors introduced the notions of electrically and magnetically localized states and using a proof of the non-locality of the helicity operator Λ, they showed that even electrically or magnetically localized states cannot be considered as local if one uses the energy density observable. Both the uniform localization restriction of <cit.> and the electric or magnetic localization of <cit.> are particular conditions that are not satisfied for general single-photon states. Thus the arguments of <cit.> and <cit.> do not provide a complete proof of the non-locality of the energy density for all single-photon states. The goal of the present article is to provide a proof of the non-locality for all single-photon states, which completes the partial arguments of <cit.> and <cit.>. An abstract proof of the non-localization of bosons has also been given by De Bièvre <cit.> where Knight's theorem is extended and the proof involves Weyl operators instead of time-dependant correlation functions. It was shown by Licht in <cit.> that strictly localized states can be constructed. 
They are superpositions of infinitely many photons, as e.g. in coherent states <cit.>. We can also mention works providing bounds for the localization of single photons <cit.>. In <cit.> it is shown that there exist cylindrical functions for which a Gaussian falloff is possible in the waist plane only, making the localization stronger than the exponential limit shown in <cit.>. More recently in <cit.>, some classes of strictly localized states are constructed (that are not single-photon states) so that they approach single-photon states as closely as possible. § REPRESENTATION OF A GENERAL SINGLE-PHOTON STATE In the quantum optics literature <cit.>, photons are often constructed using plane waves, which are not well adapted to discussing their localization in position space. To overcome this issue, one can construct wavepackets from the general plane-wave decomposition <cit.> or directly quantize the field using pulses of arbitrary shapes <cit.>. In this article we follow the latter choice and, taking advantage of the result shown in <cit.>, one can switch back and forth between two equivalent position representations of the quantum theory. The first one, called the Landau-Peierls (LP) representation <cit.>, is quite close to the standard plane-wave quantization since it also takes the electric field and the vector potential as canonical variables, but in position space. The second one is what we call the Białynicki-Birula (BB) representation <cit.>, which takes as canonical variables the electric and magnetic fields. The LP representation is based on the following complex field ψ⃗=√(ε_0/2ħ)[Ω^1/2A⃗-iΩ^-1/2E⃗], where Ω^±1/2 are selfadjoint operators constructed from the frequency operator introduced before (see Appendix <ref>), and A⃗ is the vector potential in the Coulomb gauge. The LP field is an element of the Hilbert space of square-integrable functions, from which one can construct a Fock space of states and creation-annihilation operators directly on arbitrary pulse-shaped functions <cit.>: B̂_ψ⃗|∅⟩ =0, B̂^†_ψ⃗|∅⟩ =|ψ⃗⟩, satisfying the general bosonic commutation relations <cit.> [ B̂_ψ⃗,B̂^†_ψ⃗']=⟨ψ⃗|ψ⃗'⟩_LP=∫_ℝ^3d^3x ψ⃗^⋆·ψ⃗'. The state |ψ⃗⟩ constructed in (<ref>) represents a single-photon state carried by the classical pulse-shaped field ψ⃗. Its quantum dynamics is determined by the classical dynamics of the classical pulse <cit.> according to |ψ⃗(t)⟩=B̂^†_ψ⃗(t)|∅⟩, where ψ⃗(t) is a solution of the following complex representation of Maxwell's equations i∂ψ⃗/∂ t =Ωψ⃗, ∇·ψ⃗(t) =0. This formulation of the quantum theory is related to the standard quantization using plane waves through the simple isomorphism which transforms ψ⃗ into the momentum-space representation z(k⃗,σ): ψ⃗(x⃗) =∫_ℝ^3d^3k∑_σ=±ϕ⃗_k⃗,σ(x⃗)z(k⃗,σ), z(k⃗,σ) =∫_ℝ^3d^3x ϕ⃗_k⃗,σ^⋆(x⃗)·ψ⃗(x⃗), where ϕ⃗_k⃗,σ are the plane waves of wave vector k⃗ and polarization σ (see Appendix <ref>). Creation-annihilation operators can thus be developed on the plane-wave basis B̂^†_ψ⃗ =∫_ℝ^3d^3k∑_σ=±z(k⃗,σ)B̂^†_ϕ⃗_k⃗,σ, B̂_ψ⃗ =∫_ℝ^3d^3k∑_σ=±z^⋆(k⃗,σ)B̂_ϕ⃗_k⃗,σ, where B̂^†_ϕ⃗_k⃗,σ and B̂_ϕ⃗_k⃗,σ are the plane-wave creation-annihilation operators in the position representation, i.e. the analog of the standard â^†_k⃗,σ and â_k⃗,σ in momentum space <cit.>. The BB representation can be constructed from the LP representation using the isomorphism ℐ <cit.> as F⃗ =ℐψ⃗=i√(ħ)Ω^1/2ψ⃗. 
Expressed in terms of the real electromagnetic variables, it takes the form F⃗=√(ε_0/2)( E⃗+icΛB⃗), where we have used the relation Ω=cΛ∇× (see Appendix <ref>) and the definition of the vector potential ∇×A⃗=B⃗. The construction of creation-annihilation operators for the BB representation is directly given by the isomorphism ℐ Ĉ_F⃗ =ℐB̂_ψ⃗ℐ^-1, Ĉ_F⃗^† =ℐB̂^†_ψ⃗ℐ^-1. They satisfy the following commutation relation [ Ĉ_F⃗,Ĉ^†_F⃗']=⟨F⃗|F⃗'⟩_BB=∫_ℝ^3d^3x F⃗^⋆·Ω^-1F⃗'. We have used another letter to refer to creation-annihilation operators in the BB representation to emphasize that they do not act on the same Hilbert space <cit.>, as one can see from the two different scalar products in the commutation relations (<ref>) and (<ref>), defining two Hilbert spaces ℋ_LP ={ψ⃗ | ∇·ψ⃗=0, ⟨ψ⃗|ψ⃗⟩_LP<∞}, ℋ_BB ={F⃗ | ∇·F⃗=0, ⟨F⃗|F⃗⟩_BB<∞}. The BB scalar product (<ref>) has the property that it is Lorentz invariant. § DETECTION MODEL — LOCAL ENERGY OBSERVABLE We remark that in the quantum field theory of the electromagnetic field the localization properties cannot be established by looking only at the spatial properties of the state functions. One has to consider the joint representation of the states and the local observables. This can be done by considering e.g. mean values or correlation functions. The fact that the LP spatial properties do not correspond to the physically measurable properties had already been stated by W. Pauli <cit.>, and we will give an explicit example of this particularity in Section <ref>. To show the non-locality of single-photon states, it is enough to find one particular local observable for which Knight's localization criterion does not hold. An operator is said to be a local observable if it represents a physical measurement which can be made with an instrument well localized in space. We can assume that Ê⃗(x⃗) and B⃗̂⃗(x⃗) are local since in practice they can be measured by instruments involving, e.g., localized charged particles or magnetic moments, and thus possibly designed as small as required. Any operator that can be written as a point-wise function of Ê⃗(x⃗) and B⃗̂⃗(x⃗) is thus considered to be a local observable too. In this section, we introduce the local observable of the energy density that we will use to show the non-locality, and which can represent some actual detectors. It was e.g. already defined in <cit.>, ℰ̂_em(x⃗)=ε_0/2:(Ê⃗^2(x⃗)+c^2B⃗̂⃗^2(x⃗)):, where :· : stands for the normal ordering. If one considers an experimental setup close to what is done in <cit.>, the photons that are produced are “long”, i.e. carried by a pulse with a slowly varying envelope. Consequently, the associated pulse, described in space, is much bigger than any actual detector. It means that the detector can probe the photon field only partially, without “seeing” the full state at the same time: 𝒱_s≫𝒱_d. Moreover, there exist efficient single-photon detectors, e.g. superconducting nanowires <cit.>, that are sensitive to the electromagnetic energy. The detector is prepared at a temperature that is slightly below the critical temperature T_c of the superconducting nanowire, so that it has no resistance. When a photon triggers the detector, there is a local absorption of energy, heating up the detector above T_c and yielding a measurable resistance that signals the detection of a photon. What is measured in such experiments can thus be modelled by the local energy density ℰ̂_em(x⃗) integrated over the volume of the detector 𝒱_d, i.e. ℰ̂_𝒱_d=∫_𝒱_d d^3x ℰ̂_em(x⃗). 
The expectation value of this operator for a general single-photon state in the LP representation, |1ph⟩=B̂_ψ⃗^†|∅⟩ for any ψ⃗∈ℋ_LP, or equivalently written <cit.> in the BB representation |1ph⟩=Ĉ_F⃗^†|∅⟩ for F⃗=ℐψ⃗∈ℋ_BB, was computed in <cit.>. It can be written as ⟨ℰ̂_em(x⃗)⟩_|1ph⟩ =ħ|Ω^1/2ψ⃗^(h+)(x⃗)|^2+ħ|Ω^1/2ψ⃗^(h-)(x⃗)|^2 =|F⃗^(h+)(x⃗)|^2+|F⃗^(h-)(x⃗)|^2≥0. We will show in the next section that this result implies the non-locality of single photons. § PROOF OF THE NON-LOCALITY OF SINGLE-PHOTON STATES The result obtained above for the mean value of the energy density operator is clearly greater than or equal to zero for any single-photon pulse, i.e. for any function ψ⃗ representing the single-photon state (we will make the proof using the LP representation, but the BB representation can be used equivalently; the isomorphism ℐ guarantees the equivalence, and the physical results are not affected by the choice of representation). In this section, we will show that the average local energy is strictly different from zero at any position in space: ⟨ℰ̂_em(x⃗)⟩_|1ph⟩≠0 for all x⃗∈ℝ^3. The proof will use the following Lemmas: Lemma 1: For any transverse field u⃗(x⃗), i.e. ∇·u⃗=0, one has ∇×u⃗=0 in an open set if and only if u⃗=0 in that set. This follows from the fact that any field can be written as the sum of a transverse and a longitudinal part. If in an open set both ∇·u⃗=0 and ∇×u⃗=0, then u⃗(x⃗)=0. Lemma 2: For any field v⃗(x⃗) that is not identically zero, Ωv⃗ and v⃗ cannot be both zero in any open set of ℝ^3. <cit.> We provide a proof of Lemma 2 in the Appendix <ref>. To prove the non-locality of single photons, we will show a particular property of the splitting into positive and negative helicity parts of a transverse field f⃗. The separation of ± helicities in (<ref>) is the key feature that leads to the non-locality. The splitting produces transverse fields since f⃗ and Λf⃗ are transverse (see Appendix <ref>) and f⃗^(h±)=(1±Λ)f⃗/2. Let us assume now that f⃗^(h±) is zero in an open set 𝒮⊂ℝ^3; then Lemma 1 implies that ∇×f⃗^(h±) is also zero in 𝒮. However, by definition of the helicity operator, Λf⃗^(h±)=±f⃗^(h±) and Λf⃗^(h±)=cΩ^-1∇×f⃗^(h±), which means that Ωf⃗^(h±)/c=±∇×f⃗^(h±). Thus, ∇×f⃗^(h±) and Ωf⃗^(h±) are zero in the same set. This implies that f⃗^(h±) and Ωf⃗^(h±) are zero in the same sets according to Lemma 1. According to Lemma 2, f⃗^(h±) and Ωf⃗^(h±) cannot both be zero in any open set. This implies ⟨ℰ̂_em(x⃗) ⟩_|1ph⟩≠0, for all x⃗∈ℝ^3 and, more generally, ⟨ℰ̂_𝒱_d⟩_|1ph⟩≠0 for any finite volume 𝒱_d. Since the zero point energy has been removed using the normal ordering in (<ref>), ⟨ℰ̂_em(x⃗) ⟩_|1ph⟩ is never equal to the vacuum expectation value, preventing Knight's localization criterion from being fulfilled for any single-photon state represented by ψ⃗∈ℋ_LP. In physical terms this means that if the electromagnetic field is prepared in a single-photon state, a detector placed anywhere in space that measures the energy in a finite volume has a non-zero probability of detecting the photon. The probability can be small but is strictly non-zero. § ILLUSTRATION OF THE NON-LOCALITY The non-locality brought by the splitting into helicity components can be illustrated through a simple one-dimensional example. Let us compute the expectation value ⟨ℰ̂_em(x⃗)⟩_|1ph⟩ for single-photon states representing two extreme cases: first we consider a state |ψ_comp⟩=B̂_ψ_comp^†|∅⟩, where ψ_comp∈ℋ_LP is a function of compact support, i.e. 
ψ_comp(x)=0 outside an interval of size L, and a state |ψ_ext⟩=B̂_ψ_ext^†|∅⟩, where ψ_ext∈ℋ_LP is extended over all space, i.e. ψ_ext(x)≠0 for any x∈ℝ. To construct ψ_ext, we use real fields E_comp(x) and A_comp(x) with support in the interval [-L/2,L/2] of the form E_comp(x) ∝ sin^2(π x/L+π/2) for x∈[-L/2,L/2] and E_comp(x)=0 otherwise, A_comp(x) ∝ sin^2(π x/L+π/2) for x∈[-L/2,L/2] and A_comp(x)=0 otherwise, and to build ψ_comp we take the extended fields E_ext(x) =Ω^1/2E_comp(x), A_ext(x) =Ω^-1/2A_comp(x). The resulting ψ_comp and ψ_ext are represented as the solid blue lines in FIG. <ref> (a),(c) and (b),(d) respectively. From these two examples, we compute the expectation value of the energy density operator and obtain the results displayed as the dashed orange lines in FIG. <ref>. The compact support property of the state ψ_comp is not preserved for the expectation value of the local energy (FIG. <ref> (a) and (c)), as expected. § ACKNOWLEDGMENTS § PLANE WAVE BASIS, CURL, FREQUENCY AND HELICITY OPERATOR We consider the generalized basis of transverse plane waves {ϕ⃗_k⃗,σ} defined by ϕ⃗_k⃗,σ(x⃗)=(2π)^-3/2ϵ⃗_σ(k⃗) e^ik⃗·x⃗, where σ=± and the circular polarization vectors can be chosen as ϵ⃗_+(k⃗)=1/(√(2)|k⃗|√(k_x^2+k_y^2)) (-k_xk_z+i|k⃗|k_y, -k_yk_z-i|k⃗|k_x, k_x^2+k_y^2)^T, with ϵ⃗_-(k⃗)=ϵ⃗_+(k⃗)^⋆. They are eigenfunctions of the curl operator with eigenvalues ∇×ϕ⃗_k⃗,σ=σ|k⃗| ϕ⃗_k⃗,σ. To construct the frequency operator Ω, we recall that it is defined as the unique positive operator satisfying Ω^2=-c^2Δ, where Δ is the Laplace operator, which can be written for transverse fields as -Δ=∇×∇×. Therefore, the frequency operator satisfies Ω^2ϕ⃗_k⃗,σ=c^2|k⃗|^2ϕ⃗_k⃗,σ=ω_k⃗^2ϕ⃗_k⃗,σ. The positive square root of Ω^2 can thus be defined by its action on the continuum basis of plane waves Ωϕ⃗_k⃗,σ=ω_k⃗ϕ⃗_k⃗,σ, ω_k⃗>0, and the particular powers Ω^±1/2 used in the LP representation (<ref>) can similarly be defined as Ω^±1/2ϕ⃗_k⃗,σ=ω_k⃗^±1/2ϕ⃗_k⃗,σ. Ω^2, Ω and Ω^±1/2 are all positive selfadjoint operators. The helicity operator Λ is then defined through a combination of the curl and the frequency operators by c∇×=ΩΛ. It has the same eigenfunctions with eigenvalues Λϕ⃗_k⃗,σ=σϕ⃗_k⃗,σ and therefore commutes with both ∇× and Ω. One can decompose any transverse field v⃗ into a sum of a positive and a negative helicity part v⃗^(h±), v⃗=v⃗^(h+)+v⃗^(h-), where Λv⃗^(h±)=±v⃗^(h±). The positive and negative helicity parts can be constructed by applying the following projectors ℙ^(h±)=(1±Λ)/2, i.e. v⃗^(h±)=ℙ^(h±)v⃗. We also remark that Λ^2=1 and Λ^-1=Λ. Helicity can be interpreted as the projection of the spin on the direction of motion <cit.>. § ANTI-LOCALITY OF THE FREQUENCY OPERATOR We provide in this appendix a proof of the Lemma 2 used to show the non-locality of photons in Section <ref>. This result was shown in <cit.> and we will sketch here the argument of <cit.>. We recall the statement: Lemma 2: For any field v⃗(x⃗) that is not identically zero, Ωv⃗ and v⃗ cannot be both zero in any open set of ℝ^3. We are going to prove that if v⃗ and Ωv⃗ are both equal to zero in some open set 𝒮, it implies that v⃗(x⃗)=0 everywhere. Proof: Since Ω is positive and selfadjoint, the operators U(t)=exp(iΩ t), t∈(-∞,+∞), define a one-parameter family of unitary operators. The field defined as u⃗(x⃗,t)=U(t)v⃗(x⃗) satisfies the wave equation ∂^2u⃗/∂ t^2=-Ω^2u⃗, with initial conditions u⃗(x⃗,t=0) =v⃗(x⃗), ∂u⃗/∂ t(x⃗,t=0) =iΩv⃗(x⃗).
Since the solutions of the wave equation propagate with a finite speed c, the fact that the initial conditions are zero in the set 𝒮, i.e. v⃗(x⃗)=0 and Ωv⃗(x⃗)=0 in 𝒮, implies that there is a t_0 > 0 and a non-empty open subset 𝒮_0⊂𝒮 such that u⃗(x⃗,t)=0 for all x⃗∈𝒮_0 and 0≤ t<t_0. Thus, for any 𝒞^∞ field φ⃗(x⃗) with compact support in 𝒮_0, ⟨φ⃗|u⃗(·,t)⟩=∫_ℝ^3d^3x φ⃗(x⃗)^⋆·u⃗(x⃗,t)=0 for 0≤ t<t_0. We now consider the continuation of the variable t into the upper complex half-plane and define the function f(z)=⟨e^iΩ zv⃗|φ⃗⟩ for Im z≥ 0, which has the following properties: * f(z) is holomorphic for Im z > 0, and continuous for Im z ≥ 0, * f(t) = ⟨u⃗(·,t)|φ⃗⟩ when t ∈ℝ, and f(t)∈ℝ for t∈(0, t_0), * f(t) = 0 for 0 ≤ t < t_0. We remark that for t > t_0, f(t) is not necessarily zero nor real. We will use the Schwarz reflection principle, which in the present context can be formulated as follows: If f_+(z) satisfies the following properties, * f_+(z) is holomorphic in the open upper complex rectangle D_+ = { Im z > 0, Re z ∈ (0, t_0)}, * f_+(z) is continuous in D_+∪ (0, t_0), * f_+(t) is real in the interval t ∈ (0, t_0), then f_+(z) can be continued holomorphically through the interval (0, t_0) to the lower rectangle D_- = { Im z ≤ 0, Re z∈(0,t_0)}, by defining f_+(z)=f_+(z) for z∈ D_+ and f_+(z)=(f_+(z^⋆))^⋆ for z∈ D_-. This thus defines a holomorphic function f_+(z) in the whole open set D_+∪ D_- = { Re z ∈ (0, t_0)}, which includes the interval (0, t_0). Applying the Schwarz reflection principle to the function f(z), combined with the properties <ref> and <ref>, shows that f(z) is analytic in the union of the open upper half-plane and D_-. The property <ref> states that f(z) = 0 on the interval z ∈ (0, t_0), which implies that f(z) = 0 in the whole region where f is holomorphic, in particular in the whole open upper half-plane Im z > 0. Since according to <ref>, f(z) is continuous for Im z ≥ 0, this implies that f(z)=0 for Im z≥0. In particular, f(t)=⟨φ⃗|u⃗(·,t)⟩=0 for -∞<t<∞. Since φ⃗ is an arbitrary function, this implies that u⃗(x⃗,t)=0 for -∞<t<∞ and x⃗∈𝒮_0. The unique continuation theorem for solutions u⃗(x⃗,t) of the wave equation, proven e.g. in <cit.>, states that if u⃗(x⃗,t)=0 in an open set for all -∞<t<∞, then u⃗(x⃗,t)=0 everywhere. In particular, for t=0 this implies u⃗(x⃗,t=0)=v⃗(x⃗)=0 for all x⃗∈ℝ^3, which completes the proof.
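As a numerical companion to the one-dimensional illustration of Section <ref>, the following sketch (ours, not part of the original computation) applies Ω^1/2 to a compactly supported pulse via the FFT, where Ω^1/2 acts in Fourier space as multiplication by √(c|k|), and checks that the resulting energy-density profile is non-zero outside the original support. The units, grid size, and pulse shape are illustrative choices.

```python
import numpy as np

# Minimal 1D sketch: a compactly supported pulse acquires tails everywhere once the
# non-local operator Omega^{1/2} (multiplication by sqrt(c|k|) in Fourier space) is applied.
c = 1.0                    # illustrative units
L = 1.0                    # support of the pulse, [-L/2, L/2]
N = 2 ** 14                # number of grid points (illustrative)
x = np.linspace(-20 * L, 20 * L, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Compactly supported field proportional to sin^2(pi x / L + pi / 2) on [-L/2, L/2].
f = np.where(np.abs(x) <= L / 2, np.sin(np.pi * x / L + np.pi / 2) ** 2, 0.0)

# Apply Omega^{1/2} in Fourier space.
f_half = np.fft.ifft(np.sqrt(c * np.abs(k)) * np.fft.fft(f))

# Energy-density profile (up to constant prefactors) of the transformed field.
energy = np.abs(f_half) ** 2
outside = np.abs(x) > L / 2
print("max density inside the support :", energy[~outside].max())
print("max density outside the support:", energy[outside].max())   # strictly non-zero
print("density at |x| = 5 L           :", energy[np.argmin(np.abs(x - 5 * L))])
```

In this example the tails of the energy-density profile decay slowly (roughly algebraically) rather than vanishing, consistent with the anti-locality of Ω^1/2 used in the proof above.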
http://arxiv.org/abs/2306.04539v1
20230607154453
Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications
[ "Paul Pu Liang", "Chun Kai Ling", "Yun Cheng", "Alex Obolenskiy", "Yudong Liu", "Rohan Pandey", "Alex Wilf", "Louis-Philippe Morency", "Ruslan Salakhutdinov" ]
cs.LG
[ "cs.LG", "cs.CL", "cs.CV", "cs.IT", "math.IT", "stat.ML" ]
In many machine learning systems that jointly learn from multiple modalities, a core research question is to understand the nature of multimodal interactions: the emergence of new task-relevant information during learning from both modalities that was not present in either alone. We study this challenge of interaction quantification in a semi-supervised setting with only labeled unimodal data and naturally co-occurring multimodal data (e.g., unlabeled images and captions, video and corresponding audio) but when labeling them is time-consuming. Using a precise information-theoretic definition of interactions, our key contributions are the derivations of lower and upper bounds to quantify the amount of multimodal interactions in this semi-supervised setting. We propose two lower bounds based on the amount of shared information between modalities and the disagreement between separately trained unimodal classifiers, and derive an upper bound through connections to approximate algorithms for min-entropy couplings. We validate these estimated bounds and show how they accurately track true interactions. Finally, two semi-supervised multimodal applications are explored based on these theoretical results: (1) analyzing the relationship between multimodal performance and estimated interactions, and (2) self-supervised learning that embraces disagreement between modalities beyond agreement as is typically done. § INTRODUCTION A core research question in multimodal learning is to understand the nature of multimodal interactions across modalities in the context of a task: the emergence of new task-relevant information during learning from both modalities that was not present in either modality alone <cit.>. In settings where labeled multimodal data is abundant, the study of multimodal interactions has inspired advances in theoretical analysis <cit.> and representation learning <cit.> in language and vision <cit.>, multimedia <cit.>, healthcare <cit.>, and robotics <cit.>. In this paper, we study the problem of interaction quantification in a setting where there is only unlabeled multimodal data 𝒟_M = {(x_1,x_2)} but some labeled unimodal data 𝒟_i = {(x_i,y)} collected separately for each modality. This multimodal semi-supervised paradigm is reminiscent of many real-world settings with the emergence of separate unimodal datasets like large-scale visual recognition <cit.> and text classification <cit.>, as well as the collection of data in multimodal settings (e.g., unlabeled images and captions or video and audio <cit.>) but when labeling them is time-consuming <cit.>. Using a precise information-theoretic definition of interactions <cit.>, our key contributions are the derivations of lower and upper bounds to quantify the amount of multimodal interactions in this semi-supervised setting with only 𝒟_i and 𝒟_M. 
We propose two lower bounds for interaction quantification: our first lower bound relates multimodal interactions with the amount of shared information between modalities, and our second lower bound introduces the concept of modality disagreement which quantifies the differences of classifiers trained separately on each modality. Finally, we propose an upper bound through connections to approximate algorithms for min-entropy couplings <cit.>. To validate our derivations, we experiment on large-scale synthetic and real-world datasets with varying amounts of interactions. In addition, these theoretical results naturally yield new algorithms for two applications involving semi-supervised multimodal data: * We first analyze the relationship between interaction estimates and downstream task performance when optimal multimodal classifiers are learned access to multimodal data. This analysis can help develop new guidelines for deciding when to collect and fuse labeled multimodal data. * As the result of our analysis, we further design a new family of self-supervised learning objectives that capture disagreement on unlabeled multimodal data, and show that this learns interactions beyond agreement conventionally used in the literature <cit.>. Our experiments show strong results on four datasets: relating cartoon images and captions <cit.>, predicting expressions of humor and sarcasm from videos <cit.>, and reasoning about multi-party social interactions <cit.>. More importantly, these results shed light on the intriguing connections between disagreement, interactions, and performance. Our code is available at <https://github.com/pliang279/PID>. § PRELIMINARIES §.§ Definitions and setup Let 𝒳_i and 𝒴 be finite sample spaces for features and labels. Define Δ to be the set of joint distributions over (𝒳_1, 𝒳_2, 𝒴). We are concerned with features X_1, X_2 (with support 𝒳_i) and labels Y (with support 𝒴) drawn from some distribution p ∈Δ. We denote the probability mass function by p(x_1,x_2,y), where omitted parameters imply marginalization. In many real-world applications <cit.>, we only have partial datasets from p rather than the full distribution: * Labeled unimodal data 𝒟_1 = {(x_1,y): 𝒳_1 ×𝒴}, 𝒟_2 = {(x_2,y): 𝒳_2 ×𝒴}. * Unlabeled multimodal data 𝒟_M = {(x_1,x_2): 𝒳_1 ×𝒳_2}. 𝒟_1, 𝒟_2 and 𝒟_M follow the pairwise marginals p(x_1, y), p(x_2, y) and p(x_1, x_2). We define Δ_p_1,2 = { q ∈Δ: q(x_i,y)=p(x_i,y) ∀ y∈𝒴, x_i ∈𝒳_i, i ∈ [2] } as the set of joint distributions which agree with the labeled unimodal data 𝒟_1 and 𝒟_2, and Δ_p_1,2,12 = { r ∈Δ: r(x_1,x_2)=p(x_1,x_2), r(x_i,y)=p(x_i,y) } as the set of joint distributions which agree with all 𝒟_1, 𝒟_2 and 𝒟_M. Despite partial observability, we often still want to understand the degree to which two modalities can interact to contribute new information not present in either modality alone, in order to inform our decisions on multimodal data collection and modeling <cit.>. We now cover background towards a formal information-theoretic definition of interactions and their approximation. §.§ Information theory, partial information decomposition, and synergy Information theory formalizes the amount of information that a variable (X_1) provides about another (X_2), and is quantified by Shannon's mutual information (MI) and conditional MI <cit.>: I(X_1; X_2) = ∫ p(x_1,x_2) logp(x_1,x_2)/p(x_1) p(x_2) dx, I(X_1;X_2|Y) = ∫ p(x_1,x_2|y) logp(x_1,x_2|y)/p(x_1|y) p(x_2|y) dx dy. 
The MI of two random variables X_1 and X_2 measures the amount of information (in bits) obtained about X_1 by observing X_2, and by extension, conditional MI is the expected value of MI given the value of a third (e.g., Y). However, the extension of information theory to three or more variables to describe the synergy between two modalities for a task remains an open challenge. Among many proposed frameworks, Partial information decomposition (PID) <cit.> posits a decomposition of the total information 2 variables X_1,X_2 provide about a task Y into 4 quantities: I_p({X_1,X_2}; Y) = R + U_1 + U_2 + S where I_p({X_1,X_2}; Y) is the MI between the joint random variable (X_1,X_2) and Y, redundancy R describes task-relevant information shared between X_1 and X_2, uniqueness U_1 and U_2 study the task-relevant information present in only X_1 or X_2 respectively, and synergy S investigates the emergence of new information only when both X_1 and X_2 are present <cit.>: (Multimodal interactions) Given X_1, X_2, and a target Y, we define their redundant (R), unique (U_1 and U_2), and synergistic (S) interactions as: R = max_q ∈Δ_p_1,2 I_q(X_1; X_2; Y), U_1 = min_q ∈Δ_p_1,2 I_q(X_1; Y | X_2), U_2 = min_q ∈Δ_p_1,2 I_q(X_2; Y| X_1), S = I_p({X_1,X_2}; Y) - min_q ∈Δ_p_1,2 I_q({X_1,X_2}; Y), where the notation I_p(·) and I_q(·) disambiguates mutual information (MI) under p and q respectively. I(X_1; X_2; Y) = I(X_1; X_2) - I(X_1;X_2|Y) is a multivariate extension of information theory <cit.>. Most importantly, R, U_1, and U_2 can be computed exactly using convex programming over distributions q ∈Δ_p_1,2 with access only to the marginals p(x_1,y) and p(x_2,y) by solving an equivalent max-entropy optimization problem q^* = argmax_q ∈Δ_p_1,2 H_q(Y | X_1, X_2) <cit.>. This is a convex optimization problem with linear marginal-matching constraints (see Appendix <ref>). This gives us an elegant interpretation that we need only labeled unimodal data in each feature from 𝒟_1 and 𝒟_2 to estimate redundant and unique interactions. § ESTIMATING SYNERGY WITHOUT MULTIMODAL DATA Unfortunately, S is impossible to compute via equation (<ref>) when we do not have access to the full joint distribution p, since the first term I_p(X_1, X_2;Y) is unknown. Instead, we will aim to provide lower and upper bounds in the form S̲ ≤ S ≤ S̄ which depend only on 𝒟_1, 𝒟_2, and 𝒟_M. §.§ Lower bounds on synergy Our first insight is that while labeled multimodal data is unavailable, the output of unimodal classifiers may be compared against each other. Let δ_𝒴 = { r ∈ℝ_+^|𝒴| | ||r||_1 = 1 } be the probability simplex over labels 𝒴. Consider the set of unimodal classifiers ℱ_i ∋ f_i: 𝒳_i →δ_𝒴 and multimodal classifiers ℱ_M ∋ f_M: 𝒳_1 ×𝒳_2 →δ_𝒴. The crux of our method is to establish a connection between modality disagreement and a lower bound on synergy. (Modality disagreement) Given X_1, X_2, and a target Y, as well as unimodal classifiers f_1 and f_2, we define modality disagreement as α(f_1,f_2) = 𝔼_p(x_1,x_2) [d(f_1,f_2)] where d: 𝒴×𝒴→ℝ^≥0 is a distance function in label space scoring the disagreement of f_1 and f_2's predictions. Quantifying modality disagreement gives rise to two types of synergy as illustrated in Figure <ref>: agreement synergy and disagreement synergy. As their names suggest, agreement synergy happens when two modalities agree in predicting the label and synergy arises within this agreeing information. 
On the other hand, disagreement synergy happens when two modalities disagree in predicting the label, and synergy arises due to disagreeing information. Agreement synergy We first consider the case when two modalities contain shared information that leads to agreement in predicting the outcome. In studying these situations, a driving force for estimating S is the amount of shared information I(X_1;X_2) between modalities, with the intuition that more shared information naturally leads to redundancy which gives less opportunity for new synergistic interactions. Mathematically, we formalize this by relating S to R <cit.>, S = R - I_p(X_1;X_2;Y) = R - I_p(X_1;X_2) + I_p(X_1;X_2|Y), implying that synergy exists when there is high redundancy and low (or even negative) three-way MI I_p(X_1;X_2;Y) <cit.>. By comparing the difference in X_1,X_2 dependence with and without the task (i.e., I_p(X_1;X_2) vs I_p(X_1;X_2|Y)), 2 cases naturally emerge (see top half of Figure <ref>): * 𝐒>𝐑: When both modalities do not share a lot of information as measured by low I(X_1;X_2), but conditioning on Y increases their dependence: I(X_1;X_2|Y) > I(X_1;X_2), then there is synergy between modalities when combining them for task Y. This setting is reminiscent of common cause structures. Examples of these distributions in the real world are multimodal question answering, where the image and question are less dependent (some questions like `what is the color of the car' or `how many people are there' can be asked for many images), but the answer (e.g., `blue car') connects the two modalities, resulting in dependence given the label. As expected, S = 4.92, R = 0.79 for the VQA 2.0 dataset <cit.>. * 𝐑>𝐒: Both modalities share a lot of information but conditioning on Y reduces their dependence: I(X_1;X_2)>I(X_1;X_2|Y), which results in more redundant than synergistic information. This setting is reminiscent of common effect structures. A real-world example is in detecting sentiment from multimodal videos, where text and video are highly dependent since they are emitted by the same speaker, but the sentiment label explains away some of the dependencies between both modalities. Indeed, for multimodal sentiment analysis from text, video, and audio of monologue videos on MOSEI <cit.>, R=0.26 and S=0.04. However, I_p(X_1;X_2|Y) cannot be computed without access to the full distribution p. In Theorem <ref>, we obtain a lower bound on I_p(X_1;X_2|Y), resulting in a lower bound for synergy. (Lower-bound on synergy via redundancy) We can relate S to R as follows: S̲_R = R - I_p(X_1;X_2) + min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) ≤ S. We include the full proof in Appendix <ref>, but note that min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) is equivalent to a max-entropy optimization problem solvable using convex programming. This implies that S̲_R can be computed efficiently using only unimodal data 𝒟_i and unlabeled multimodal data 𝒟_M. Disagreement synergy We now consider settings where two modalities disagree in predicting the outcome: suppose y_1=argmax_y p(y|x_1) is the most likely prediction from the first modality, y_2=argmax_y p(y|x_2) for the second modality, and y=argmax_y p(y|x_1,x_2) the true multimodal prediction. During disagreement, there are again 2 cases (see bottom half of Figure <ref>): * 𝐔>𝐒: Multimodal prediction y=argmax_y p(y|x_1,x_2) is the same as one of the unimodal predictions (e.g., y=y_2), in which case unique information in modality 2 leads to the outcome. 
A real-world dataset that we categorize in this case is MIMIC, involving mortality and disease prediction from tabular patient data and time-series medical sensors <cit.>, which primarily shows unique information in the tabular modality. The disagreement on MIMIC is high, α=0.13, but since disagreement is due to a lot of unique information, there is less synergy, S=0.01. * 𝐒>𝐔: Multimodal prediction y is different from both y_1 and y_2, in which case both modalities interact synergistically to give rise to a final outcome different from both disagreeing unimodal predictions. This type of joint distribution is indicative of real-world examples such as predicting sarcasm from language and speech - the presence of sarcasm is typically detected due to a contradiction between what is expressed in language and speech, as we observe from the experiments on MUStARD <cit.> where S=0.44 and α=0.12 are both relatively large. We formalize these intuitions via Theorem <ref>, yielding a lower bound based on disagreement minus the maximum unique information in both modalities: (Lower-bound on synergy via disagreement, informal) We can relate synergy S and uniqueness U to modality disagreement α(f_1,f_2) of optimal unimodal classifiers f_1,f_2 as follows: S̲_U = α(f_1,f_2) · c - max(U_1,U_2) ≤ S for some constant c depending on the label dimension |𝒴| and choice of label distance function d. Theorem <ref> implies that if there is substantial disagreement α(f_1,f_2) between unimodal classifiers, it must be due to the presence of unique or synergistic information. If uniqueness is small, then disagreement must be accounted for by synergy, thereby yielding a lower bound S̲_U. Note that the notion of optimality in unimodal classifiers is important: poorly-trained unimodal classifiers could show high disagreement but would be uninformative about true interactions. We include the formal version of the theorem based on Bayes' optimality and a full proof in Appendix <ref>. Hence, agreement and disagreement synergy yield separate lower bounds S̲_R and S̲_U. Note that these bounds always hold, so we could take S̲ = max{S̲_R, S̲_U}. §.§ Upper bound on synergy While the lower bounds tell us the least amount of synergy possible in a distribution, we also want to obtain an upper bound on the possible synergy, which together with the above lower bounds sandwiches S. By definition, S = I_p({X_1,X_2}; Y) - max_q ∈Δ_p_1,2 I_q({X_1,X_2}; Y). Thus, upper bounding synergy is the same as maximizing the MI I_p(X_1,X_2;Y), which can be rewritten as max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y) = max_r ∈Δ_p_1,2,12{ H_r(X_1, X_2) + H_r(Y) - H_r(X_1, X_2, Y) } = H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y), where the second line follows from the definition of Δ_p_1,2,12. Since the first two terms are constant, an upper bound on S requires us to look amongst all multimodal distributions r ∈Δ which match the unimodal 𝒟_i and unlabeled multimodal data 𝒟_M, and find the one with minimum entropy. Solving r^* = argmin_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y) is NP-hard, even for a fixed |𝒴| ≥ 4. Theorem <ref> suggests we cannot tractably find a joint distribution which tightly upper bounds synergy when the feature spaces are large. Thus, our proposed upper bound is based on a lower bound on min_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y), which yields (Upper-bound on synergy) S ≤ H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_12,y H_r(X_1, X_2, Y) - max_q ∈Δ_p_1,2 I_q({X_1,X_2}; Y) = S̄, where Δ_p_12,y = { r ∈Δ : r(x_1,x_2)=p(x_1,x_2), r(y)=p(y) }. The second optimization problem is solved with convex optimization. 
The first is the classic min-entropy coupling over (X_1, X_2) and Y, which is still NP-hard but admits good approximations <cit.>. Proofs of Theorem <ref>, <ref>, and approximations for min-entropy couplings are deferred to Appendix <ref> and <ref>. § EXPERIMENTS We design comprehensive experiments to validate these estimated bounds and show new relationships between disagreement, multimodal interactions, and performance, before describing two applications in (1) estimating optimal multimodal performance without multimodal data to prioritize the collection and fusion of data sources, and (2) a new disagreement-based self-supervised learning method. §.§ Verifying predicted guarantees and analysis of multimodal distributions Synthetic bitwise datasets: We enumerate joint distributions over 𝒳_1, 𝒳_2, 𝒴∈{0,1} by sampling 100,000 vectors in the 8-dimensional probability simplex and assigning them to each p(x_1,x_2,y). Using these distributions, we estimate p̂(y|x_1) and p̂(y|x_2) to compute disagreement and the marginals p̂(x_1,y), p̂(x_2,y), and p̂(x_1,x_2) to estimate the lower and upper bounds. [Figure: Our two lower bounds S̲_R and S̲_U track actual synergy S from below, and the upper bound S̄ tracks S from above. We find that S̲_R and S̲_U tend to approximate S better than S̄.] Large real-world multimodal datasets: We also use the large collection of real-world datasets in MultiBench <cit.>: (1) MOSI: video-based sentiment analysis <cit.>, (2) MOSEI: video-based sentiment and emotion analysis <cit.>, (3) MUStARD: video-based sarcasm detection <cit.>, (4) UR-FUNNY: video-based humor detection <cit.>, (5) MIMIC: mortality and disease prediction from tabular patient data and medical sensors <cit.>, and (6) ENRICO: classification of mobile user interfaces and screenshots <cit.>. While the previous bitwise datasets with small and discrete support yield exact lower and upper bounds, this new setting with high-dimensional continuous modalities requires the approximation of disagreement and information-theoretic quantities: we train unimodal neural network classifiers f̂_θ(y|x_1) and f̂_θ(y|x_2) to estimate disagreement, and we cluster representations of X_i to approximate the continuous modalities by discrete distributions with finite support to compute lower and upper bounds. We summarize the following regarding the utility of each bound (see details in Appendix <ref>): 1. Overall trends: For the 100,000 bitwise distributions, we compute S, the true value of synergy assuming oracle knowledge of the full multimodal distribution, and compute S̲_R - S, S̲_U - S, and S̄ - S for each point. Plotting these points as a histogram in Figure <ref>, we find that the two lower bounds track actual synergy from below (S̲_R - S and S̲_U - S approaching 0 from below), and the upper bound tracks synergy from above (S̄ - S approaching 0 from above). The two lower bounds are quite tight, as we see that for many points S̲_R - S and S̲_U - S are approaching close to 0, with an average gap of 0.18. The disagreement bound seems to be tighter empirically than the agreement bound: for half the points, S̲_U is within 0.14 and S̲_R is within 0.2 of S. For the upper bound, there is an average gap of 0.62. However, it performs especially well on high synergy data. When S > 0.6, the average gap is 0.24, with more than half of the points within 0.25 of S. On real-world MultiBench datasets, we show the estimated bounds and actual S (assuming knowledge of full p) in Table <ref>. The lower and upper bounds track true S: as estimated S̲ and S̄ increase from MOSEI to UR-FUNNY to MOSI to MUStARD, true S also increases. 
For datasets like MIMIC with disagreement but high uniqueness, S̲_U can be negative, but we can rely on S̲_R to give a tight estimate on low synergy. Unfortunately, our bounds do not track synergy well on ENRICO. We believe this is because ENRICO displays all interactions: R=0.73, U_1=0.38, U_2=0.53, S=0.34, which makes it difficult to distinguish between R and S using S̲_R or U and S using S̲_U since no interaction dominates over others, and S̄ is also quite loose relative to the lower bounds. Given these general observations, we now carefully analyze the relationships between interactions, agreement, and disagreement. 2. The relationship between redundancy and synergy: In Table <ref> we show the classic agreement XOR distribution where X_1 and X_2 are independent, but Y=1 sets X_1 ≠ X_2 to increase their dependence. I(X_1;X_2;Y) is negative, and S̲_R = 1 ≤ 1 = S is tight. On the other hand, Table <ref> is an extreme example where the probability mass is distributed uniformly only when y=x_1=x_2 and 0 elsewhere. As a result, X_1 is always equal to X_2 (perfect dependence), and yet Y perfectly explains away the dependence between X_1 and X_2 so I(X_1;X_2|Y) = 0: S̲_R = 0 ≤ 0 = S. A real-world example is multimodal sentiment analysis from text, video, and audio on MOSEI, R=0.26 and S=0.03, and as expected the lower bound is small, S̲_R = 0 ≤ 0.03 = S (Table <ref>). 3. The relationship between disagreement and synergy: In Table <ref> we show an example called disagreement XOR. There is maximum disagreement between marginals p(y|x_1) and p(y|x_2): the likelihood for y is high when y is the opposite bit as x_1, but reversed for x_2. Given both x_1 and x_2: y seems to take a `disagreement' XOR of the individual marginals, i.e. p(y|x_1,x_2) = argmax_y p(y|x_1) XOR argmax_y p(y|x_2), which indicates synergy (note that an exact XOR would imply perfect agreement and high synergy). The actual disagreement is 0.15, synergy is 0.16, and uniqueness is 0.02, indicating a very strong lower bound S̲_U = 0.14 ≤ 0.16 = S. A real-world equivalent dataset is MUStARD, where the presence of sarcasm is often due to a contradiction between what is expressed in language and speech, so disagreement α=0.12 is the highest out of all the video datasets, giving a lower bound S̲_U = 0.11 ≤ 0.44 = S. On the contrary, the lower bound is low when all disagreement is explained by uniqueness (e.g., y=x_1, Table <ref>), which results in S̲_U = 0 ≤ 0 = S (α and U cancel each other out). A real-world equivalent is MIMIC: from Table <ref>, disagreement is high, α=0.13, due to unique information U_1=0.25, so the lower bound informs us about the lack of synergy, S̲_U = -0.12 ≤ 0.02 = S. Finally, the lower bound is loose when there is synergy without disagreement, such as agreement XOR (y=x_1 XOR x_2, Table <ref>) where the marginals p(y|x_i) are both uniform, but there is full synergy: S̲_U = 0 ≤ 1 = S. Real-world datasets which fall into agreement synergy include UR-FUNNY where there is low disagreement in predicting humor, α=0.03, and relatively high synergy, S=0.18, which results in a loose lower bound S̲_U = 0.01 ≤ 0.18 = S. 4. On upper bounds for synergy: Finally, we find that the upper bound for MUStARD is quite close to real synergy, S̄ = 0.79 ≥ 0.44 = S. On MIMIC, the upper bound is the lowest, S̄ = 0.41, matching the lowest S=0.02. Some of the other examples in Table <ref> show bounds that are quite weak. This could be because (i) there indeed exist high synergy distributions that match 𝒟_i and 𝒟_M, but these are rare in the real world, or (ii) our approximation used in Theorem <ref> is mathematically loose. 
We leave these as open directions for future work. §.§ Application 1: Estimating multimodal performance for multimodal fusion Now that we have validated the accuracy of these lower and upper bounds, we can apply them towards estimating multimodal performance without labeled multimodal data. This serves as a strong signal for deciding (1) whether to collect paired and labeled data from a second modality, and (2) whether one should use complex fusion techniques on collected multimodal data. Method: Our approach for answering these two questions is as follows: given 𝒟_1, 𝒟_2, and 𝒟_M, we can estimate synergistic information based on our derived lower and upper bounds S̲ and S̄. Together with redundant and unique information which can be computed exactly, we will use the total information to estimate the performance of multimodal models trained optimally on the full multimodal distribution. Formally, we estimate optimal performance via a result from <cit.> and Fano's inequality <cit.>, which together yield tight bounds on performance as a function of total information I_p({X_1,X_2}; Y). Let P_acc(f_M^*) = 𝔼_p [ 1[ f_M^*(x_1,x_2) = y ] ] denote the accuracy of the Bayes' optimal multimodal model f_M^* (i.e., P_acc (f_M^*) ≥ P_acc (f'_M) for all f'_M ∈ℱ_M). We have that 2^(I_p({X_1,X_2}; Y)-H(Y)) ≤ P_acc(f_M^*) ≤ (I_p({X_1,X_2}; Y) + 1)/log |𝒴|, where we can plug in R+U_1+U_2+S̲ ≤ I_p({X_1,X_2}; Y) ≤ R+U_1+U_2+S̄ to obtain a lower bound P̲_acc(f_M^*) and an upper bound P̄_acc(f_M^*) on optimal multimodal performance (refer to Appendix <ref> for full proof). Finally, we summarize estimated multimodal performance as the average P̂_M = (P̲_acc(f_M^*) + P̄_acc(f_M^*))/2. A high P̂_M suggests the presence of important joint information from both modalities (not present in each) which could boost accuracy, so it is worthwhile to collect the full distribution p and explore multimodal fusion <cit.> to learn joint information over unimodal methods. Results: For each MultiBench dataset, we implement a suite of unimodal and multimodal models spanning simple and complex fusion. Unimodal models are trained and evaluated separately on each modality. Simple fusion includes ensembling by taking an additive or majority vote between unimodal models <cit.>. Complex fusion is designed to learn higher-order interactions as exemplified by bilinear pooling <cit.>, multiplicative interactions <cit.>, tensor fusion <cit.>, and cross-modal self-attention <cit.>. See Appendix <ref> for models and training details. We include unimodal, simple and complex multimodal performance, as well as estimated lower and upper bounds on optimal multimodal performance in Table <ref>. [Figure: Datasets with higher estimated multimodal performance P̂_M tend to show improvements from unimodal to multimodal (left) and from simple to complex multimodal fusion (right).] RQ1: Should I collect multimodal data? We compare estimated performance P̂_M with the actual difference between unimodal and best multimodal performance in Figure <ref> (left). Higher estimated P̂_M correlates with a larger gain from unimodal to multimodal. MUStARD and ENRICO show the most opportunity for multimodal modeling, but MIMIC shows less improvement. RQ2: Should I investigate multimodal fusion? From Table <ref>, synergistic datasets like MUStARD and ENRICO show best reported multimodal performance only slightly above the estimated lower bound, indicating more work to be done in multimodal fusion. 
For datasets with less synergy like MOSEI and MIMIC, the best multimodal performance is much higher than the estimated lower bound, indicating that existing fusion methods may already be quite optimal. We compare P̂_M with the performance gap between complex and simple fusion methods in Figure <ref> (right). We again observe trends between higher P̂_M and improvements with complex fusion, with large gains on MUStARD and ENRICO. We expect new methods to further improve the state-of-the-art on these datasets due to their generally high interaction values and low multimodal performance relative to the estimated lower bound P̲_acc(f_M^*). §.§ Application 2: Self-supervised multimodal learning via disagreement [Figure: Masked predictions do not always agree across modalities, as shown in this example from the Social-IQ dataset <cit.>. Adding a slack term enabling pre-training with modality disagreement yields strong performance improvement over baselines.] Finally, we highlight an application of our analysis towards self-supervised pre-training, which is generally performed by encouraging agreement as a pre-training signal on large-scale unlabeled data <cit.> before supervised fine-tuning <cit.>. However, our results suggest that there are regimes where disagreement can lead to synergy that may otherwise be ignored when only training for agreement. We therefore design a new family of self-supervised learning objectives that capture disagreement on unlabeled multimodal data. Method: We build upon masked prediction that is popular in self-supervised pre-training: given multimodal data of the form (x_1,x_2) ∼ p(x_1,x_2) (e.g., x_1= caption and x_2= image), first mask out some words (x_1') before using the remaining words (x_1 \ x_1') to predict the masked words via learning f_θ(x_1'|x_1 \ x_1'), as well as the image x_2 to predict the masked words via learning f_θ(x_1'|x_2) <cit.>. In other words, we maximize agreement between f_θ(x_1'|x_1 \ x_1') and f_θ(x_1'|x_2) in predicting x_1': ℒ_agree = d(f_θ(x_1'|x_1\ x_1'), x_1') + d(f_θ(x_1'|x_2), x_1') for a distance d such as cross-entropy loss for discrete word tokens. To account for disagreement, we allow predictions on the masked tokens x_1' from two different modalities i,j to disagree by a slack variable λ_ij. We modify the objective such that each term only incurs a loss penalty if each distance d(x,y) is larger than λ, as measured by a margin distance d_λ(x,y) = max (0, d(x,y) - λ): ℒ_disagree = ℒ_agree + ∑_1 ≤ i < j ≤ 2 d_λ_ij (f_θ(x_1'|x_i), f_θ(x_1'|x_j)). These λ terms are hyperparameters, quantifying the amount of disagreement we tolerate between each pair of modalities during cross-modal masked pretraining (λ=0 recovers full agreement). We show this visually in Figure <ref> by applying it to masked pre-training on text, video, and audio using MERLOT Reserve <cit.>, and also apply it to FLAVA <cit.> for images and text experiments (see extensions to 3 modalities and details in Appendix <ref>). Setup: We choose four settings with natural disagreement: (1) UR-FUNNY: humor detection from 16,000 TED talk videos <cit.>, (2) MUStARD: 690 videos for sarcasm detection from TV shows <cit.>, (3) Social IQ: 1,250 multi-party videos testing social intelligence knowledge <cit.>, and (4) Cartoon: matching 704 cartoon images and captions <cit.>. Results: From Table <ref>, allowing for disagreement yields improvements on these datasets, with those on Social IQ, UR-FUNNY, and MUStARD being statistically significant (p-value <0.05 over 10 runs). 
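To make the margin-based objective ℒ_disagree above concrete, here is a minimal two-modality sketch in PyTorch. It is our illustration rather than the MERLOT Reserve or FLAVA implementation used in the experiments: the function name, tensor shapes, the choice of KL divergence as the cross-modal distance d, and the default value of λ are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def masked_disagreement_loss(logits_text, logits_image, target_tokens, lam=0.1):
    """Sketch of L_disagree for two modalities (names, shapes, and lam are illustrative).

    logits_text  : [batch, num_masked, vocab]  masked-token predictions from the remaining text
    logits_image : [batch, num_masked, vocab]  predictions of the same tokens from the other modality
    target_tokens: [batch, num_masked]         ground-truth ids of the masked tokens
    """
    vocab = logits_text.size(-1)

    # L_agree: both predictors should recover the masked tokens (cross-entropy as the distance d).
    l_agree = (
        F.cross_entropy(logits_text.reshape(-1, vocab), target_tokens.reshape(-1))
        + F.cross_entropy(logits_image.reshape(-1, vocab), target_tokens.reshape(-1))
    )

    # Cross-modal term with slack: penalize the distance between the two predictive
    # distributions only beyond the margin, d_lambda(x, y) = max(0, d(x, y) - lambda).
    log_p_text = F.log_softmax(logits_text, dim=-1)
    p_image = F.softmax(logits_image, dim=-1)
    per_token_d = F.kl_div(log_p_text, p_image, reduction="none").sum(-1)  # [batch, num_masked]
    l_cross = torch.clamp(per_token_d - lam, min=0.0).mean()

    return l_agree + l_cross
```

Setting lam=0.0 recovers the pure agreement objective ℒ_agree, while larger values tolerate more cross-modal disagreement, mirroring the per-pair hyperparameters λ_ij described above.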
By analyzing the value of λ resulting in the best validation performance through hyperparameter search, we can analyze when disagreement helps for which datasets, datapoints, and modalities. On a dataset level, we find that disagreement helps for video/audio and video/text, improving accuracy by up to 0.6% but hurts for text/audio, decreasing the accuracy by up to 1%. This is in line with intuition, where spoken text is transcribed directly from audio for these monologue and dialog videos, but video can have vastly different information. In addition, we find more disagreement between text/audio for Social IQ, which we believe is because it comes from natural videos while the others are scripted TV shows with more agreement between speakers and transcripts. We further analyze individual datapoints with disagreement. On UR-FUNNY, the moments when the camera jumps from the speaker to their presentation slides are followed by an increase in agreement since the video aligns better with the speech. In MUStARD, we observe disagreement between vision and text when the speaker's face expresses the sarcastic nature of a phrase. This changes the meaning of the phrase, which cannot be inferred from text only, and leads to synergy. We include more qualitative examples including those on the Cartoon captioning dataset in Appendix <ref>. § RELATED WORK Multivariate information theory: The extension of information theory to 3 or more variables <cit.> remains on open problem. Partial information decomposition (PID) <cit.> was proposed as a potential solution that satisfies several appealing properties <cit.>. Today, PID has primarily found applications in cryptography <cit.>, neuroscience <cit.>, physics <cit.>, complex systems <cit.>, and biology <cit.>, but its application towards machine learning, in particular multimodality, is an exciting but untapped research direction. To the best of our knowledge, our work is the first to provide formal estimates of synergy in the context of unlabeled or unpaired multimodal data which is common in today's self-supervised paradigm <cit.>. Understanding multimodal models: Information theory is useful for understanding co-training <cit.>, multi-view learning <cit.>, and feature selection <cit.>, where redundancy is an important concept. Prior research has also studied multimodal models via additive or non-additive interactions <cit.>, gradient-based approaches <cit.>, or visualization tools <cit.>. This goal of quantifying and modeling multimodal interactions <cit.> has also motivated many successful learning algorithms, such as contrastive learning <cit.>, agreement and alignment <cit.>, factorized representations <cit.>, as well as tensors and multiplicative interactions <cit.>. Disagreement-based learning has been used to estimate performance from unlabeled data <cit.>, active learning <cit.>, and guiding exploration in reinforcement learning <cit.>. In multimodal learning, however, approaches have been primarily based on encouraging agreement in prediction <cit.> or feature space <cit.> in order to capture shared information. Our work has arrived at similar conclusions regarding the benefits of disagreement-based learning, albeit from different mathematical motivations and applications. 
§ CONCLUSION We proposed estimators of multimodal interactions when observing only labeled unimodal data and some unlabeled multimodal data, a general setting that encompasses many real-world constraints involving partially observable modalities, limited labels, and privacy concerns. Our key results draw new connections between multimodal interactions, the disagreement of unimodal classifiers, and min-entropy couplings. Future work should investigate more applications of multivariate information theory in designing self-supervised models, predicting multimodal performance, and other tasks involving feature interactions such as privacy-preserving and fair representation learning. § ACKNOWLEDGEMENTS This material is based upon work partially supported by Meta, National Science Foundation awards 1722822 and 1750439, and National Institutes of Health awards R01MH125740, R01MH132225, R01MH096951 and R21MH130767. PPL is partially supported by a Facebook PhD Fellowship and a Carnegie Mellon University's Center for Machine Learning and Health Fellowship. RS is supported in part by ONR N000141812861, ONR N000142312368 and DARPA/AFRL FA87502321015. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, NIH, Meta, Carnegie Mellon University’s Center for Machine Learning and Health, ONR, DARPA, or AFRL, and no official endorsement should be inferred. Finally, we would also like to acknowledge NVIDIA’s GPU support. plainnat § APPENDIX § BROADER IMPACT Multimodal semi-supervised models are ubiquitous in a range of real-world applications with only labeled unimodal data and naturally co-occurring multimodal data (e.g., unlabeled images and captions, video and corresponding audio) but when labeling them is time-consuming. This paper is our attempt at formalizing the learning setting of multimodal semi-supervised learning, allowing us to derive bounds on the information existing in multimodal semi-supervised datasets and what can be learned by models trained on these datasets. We do not foresee any negative broad impacts of our theoretical results, but we do note the following concerns regarding the potential empirical applications of these theoretical results in real-world multimodal datasets: Biases: We acknowledge risks of potential biases surrounding gender, race, and ethnicity in large-scale multimodal datasets <cit.>, especially those collected in a semi-supervised setting with unlabeled and unfiltered images and captions <cit.>. Formalizing the types of bias in multimodal datasets and mitigating them is an important direction for future work. Privacy: When making predictions from multimodal datasets with recorded human behaviors and medical data, there might be privacy risks of participants. Following best practices in maintaining the privacy and safety of these datasets, (1) these datasets have only been collected from public data that are consented for public release (creative commons license and following fair use guidelines of YouTube) <cit.>, or collected from hospitals under strict IRB and restricted access guidelines <cit.>, and (2) have been rigorously de-identified in accordance with Health Insurance Portability and Accountability Act such that all possible personal and protected information has been removed from the dataset <cit.>. 
Finally, we only use these datasets for research purposes and emphasize that any multimodal models trained to perform prediction should only be used for scientific study and should not in any way be used for real-world harm. § DETAILED PROOFS §.§ Information decomposition Partial information decomposition (PID) <cit.> posits a decomposition of the total information that 2 variables provide about a task, I({X_1,X_2}; Y), into 4 quantities: redundancy R between X_1 and X_2, unique information U_1 in X_1 and U_2 in X_2, and synergy S. <cit.>, who first proposed PIDs, showed that they should satisfy the following consistency equations: R + U_1 = I(X_1; Y), R + U_2 = I(X_2; Y), U_1 + S = I(X_1; Y | X_2), U_2 + S = I(X_2; Y | X_1), R - S = I(X_1; X_2; Y). We choose the PID definition by <cit.>, where redundancy, uniqueness, and synergy are defined by the solution to the following optimization problems: R = max_q ∈Δ_p I_q(X_1; X_2; Y) U_1 = min_q ∈Δ_p I_q(X_1; Y | X_2) U_2 = min_q ∈Δ_p I_q(X_2; Y| X_1) S = I_p({X_1,X_2}; Y) - min_q ∈Δ_p I_q({X_1,X_2}; Y) where Δ_p = { q ∈Δ: q(x_i,y)=p(x_i,y) ∀ y, x_i, i ∈{1,2}}, Δ is the set of all joint distributions over X_1, X_2, Y, and the notation I_p(·) and I_q(·) disambiguates MI under joint distributions p and q respectively. The key difference in this definition of PID lies in optimizing q ∈Δ_p to satisfy the marginals q(x_i,y)=p(x_i,y), but relaxing the coupling between x_1 and x_2: q(x_1,x_2) need not be equal to p(x_1,x_2). The intuition behind this is that one should be able to infer redundancy and uniqueness given only access to separate marginals p(x_1,y) and p(x_2,y), and therefore they should only depend on q ∈Δ_p which match these marginals. Synergy, however, requires knowing the coupling p(x_1,x_2), and this is reflected in equation (<ref>) depending on the full p distribution. §.§ Computing q^*, redundancy, and uniqueness According to <cit.>, it suffices to solve for q using the following max-entropy optimization problem q^* = argmax_q ∈Δ_p H_q(Y | X_1, X_2); the same q^* equivalently solves any of the remaining problems defined for redundancy, uniqueness, and synergy. This is a concave maximization problem with linear constraints. When 𝒳_i and 𝒴 are small and discrete, we can represent all valid distributions q(x_1,x_2,y) as a set of tensors Q of shape |𝒳_1| × |𝒳_2| × |𝒴| with each entry representing Q[i,j,k] = p(X_1=i,X_2=j,Y=k). The problem then boils down to optimizing over valid tensors Q ∈Δ_p that match the marginals p(x_i,y) for the objective function H_q(Y | X_1, X_2). We rewrite conditional entropy as a KL-divergence <cit.>, H_q(Y|X_1, X_2) = log |𝒴| - KL(q||q̃), where q̃ is an auxiliary product density of q(x_1,x_2) ·1/|𝒴| enforced using linear constraints: q̃(x_1, x_2, y) = q(x_1,x_2) / |𝒴|. The KL-divergence objective is recognized as convex, allowing the use of conic solvers such as SCS <cit.>, ECOS <cit.>, and MOSEK <cit.>. Finally, optimizing over Q ∈Δ_p that match the marginals can also be enforced through linear constraints: the 3D-tensor Q summed over the second dimension gives q(x_1,y) and summed over the first dimension gives q(x_2,y), yielding the final optimization problem: min_Q,Q̃ KL(Q||Q̃), s.t. Q̃(x_1, x_2, y) = Q(x_1,x_2) / |𝒴|, ∑_x_2 Q = p(x_1,y), ∑_x_1 Q = p(x_2,y), Q ≥ 0, ∑_x_1,x_2,y Q = 1. 
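As a concrete illustration of this optimization problem, the following is a minimal cvxpy sketch for small discrete supports. The function and variable names are ours, the auxiliary variable Q̃ is substituted directly into the objective through its linear definition, and the solver choice is illustrative rather than the one used in the paper's experiments.

```python
import numpy as np
import cvxpy as cp

def solve_max_entropy_q(p1y, p2y):
    """Sketch: solve for q* maximizing H_q(Y | X1, X2) subject to marginal constraints.

    p1y: array of shape (|X1|, |Y|) for p(x1, y);  p2y: array of shape (|X2|, |Y|) for p(x2, y).
    Returns q(x1, x2, y) as an array of shape (|X1|, |X2|, |Y|).
    """
    X1, Y = p1y.shape
    X2, _ = p2y.shape

    # Row index r = x1 * |X2| + x2 encodes the pair (x1, x2); columns index y.
    Q = cp.Variable((X1 * X2, Y), nonneg=True)

    # Q_tilde(x1, x2, y) = q(x1, x2) / |Y|, built as an affine expression of Q.
    q12 = cp.reshape(cp.sum(Q, axis=1), (X1 * X2, 1))
    Q_tilde = q12 @ (np.ones((1, Y)) / Y)

    constraints = [cp.sum(Q) == 1]
    for a in range(X1):   # summing over x2 must match p(x1, y)
        constraints.append(sum(Q[a * X2 + b, :] for b in range(X2)) == p1y[a])
    for b in range(X2):   # summing over x1 must match p(x2, y)
        constraints.append(sum(Q[a * X2 + b, :] for a in range(X1)) == p2y[b])

    # cp.kl_div sums x*log(x/y) - x + y elementwise; the -x + y terms cancel because both
    # Q and Q_tilde sum to one, so this objective is exactly KL(Q || Q_tilde).
    objective = cp.Minimize(cp.sum(cp.kl_div(Q, Q_tilde)))
    cp.Problem(objective, constraints).solve(solver=cp.SCS)
    return Q.value.reshape(X1, X2, Y)
```

Calling solve_max_entropy_q with marginals estimated from the labeled unimodal datasets returns the q^* used in the estimators described next.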
After solving this optimization problem, plugging q^* into (<ref>)-(<ref>) yields the desired estimators for redundancy and uniqueness: R = I_q^*(X_1; X_2; Y), U_1 = I_q^*(X_1; Y | X_2), U_2 = I_q^*(X_2; Y| X_1), which, importantly, can be inferred from access to only labeled unimodal data p(x_1,y) and p(x_2,y). Unfortunately, S is impossible to compute via equation (<ref>) when we do not have access to the full joint distribution p, since the first term I_p(X_1, X_2;Y) is unknown. Instead, we will aim to provide lower and upper bounds of the form S̲ ≤ S ≤ S̅, so that we have a minimum and a maximum estimate of what the synergy could be. Crucially, S̲ and S̅ should depend only on 𝒟_1, 𝒟_2, and 𝒟_M in the multimodal semi-supervised setting. §.§ Lower bound on synergy via redundancy (Theorem <ref>) We first restate Theorem <ref> from the main text to obtain our first lower bound linking synergy to redundancy: (Lower-bound on synergy via redundancy, same as Theorem <ref>) We can relate S to R as follows: S̲_R = R - I_p(X_1;X_2) + min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) ≤ S, where Δ_p_1,2,12 = { r ∈Δ: r(x_1,x_2)=p(x_1,x_2), r(x_i,y)=p(x_i,y) }. min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) is a max-entropy convex optimization problem which can be solved exactly using linear programming. By consistency equation (<ref>), S = R - I_p(X_1;X_2;Y) = R - I_p(X_1;X_2) + I_p(X_1;X_2|Y). This means that lower bounding the synergy is the same as obtaining a lower bound on the conditional mutual information I_p(X_1;X_2|Y), since R and I_p(X_1;X_2) can be computed exactly based on p(x_1, y), p(x_2, y), and p(x_1,x_2). To lower bound I_p(X_1;X_2|Y), we consider minimizing it subject to the marginal constraints with p, which gives min_r ∈Δ_p_1,2,12 I_r(X_1;X_2|Y) = min_r ∈Δ_p_1,2,12 H_r(X_1) - I_r(X_1;Y) - H_r(X_1|X_2,Y) = H_p(X_1) - I_p(X_1;Y) - max_r ∈Δ_p_1,12 H_r(X_1|X_2,Y), where in the last line the p_2 constraint is removed since H_r(X_1|X_2,Y) is fixed with respect to p(x_2,y). To solve max_r ∈Δ_p_1,12 H_r(X_1|X_2,Y), we observe that it is also a concave maximization problem with linear constraints. When 𝒳_i and 𝒴 are small and discrete, we can represent all valid distributions r(x_1,x_2,y) as a set of tensors R of shape |𝒳_1| × |𝒳_2| × |𝒴| with each entry representing R[i,j,k] = p(X_1=i,X_2=j,Y=k). The problem then boils down to optimizing over valid tensors R ∈Δ_p_1,12 that match the marginals p(x_1,y) and p(x_1,x_2). Given a tensor R representing r, our objective is the concave function H_r(X_1 | X_2, Y), which we rewrite as a KL-divergence log |𝒳_1| - KL(r||r̃) using an auxiliary distribution r̃ = r(x_2,y) ·1/|𝒳_1|, and solve it exactly using convex programming with linear constraints: min_R,R̃ KL(R||R̃), s.t. R̃(x_1, x_2, y) = R(x_2,y) / |𝒳_1|, ∑_x_2 R = p(x_1,y), ∑_y R = p(x_1,x_2), R ≥ 0, ∑_x_1,x_2,y R = 1, with the marginal constraints R ∈Δ_p_1,12 enforced through linear constraints on the tensor R. Plugging the optimized r^* into (<ref>) yields the desired lower bound S̲_R = R - I_p(X_1;X_2) + I_r^*(X_1;X_2|Y).
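For concreteness, the lower bound can be assembled from plain entropy computations once the two convex programs have been solved. The sketch below assumes q_star and r_star are the optimized joint tables (numpy arrays of shape (|𝒳_1|, |𝒳_2|, |𝒴|)) returned by solvers for the programs for q^* and r^*, and p12 is the empirical p(x_1, x_2); all names are illustrative.

```python
import numpy as np

def entropy(t):
    t = np.asarray(t, dtype=float).ravel()
    t = t[t > 0]
    return float(-(t * np.log2(t)).sum())

def mi(pab):
    # I(A; B) from a 2D joint table.
    return entropy(pab.sum(axis=1)) + entropy(pab.sum(axis=0)) - entropy(pab)

def cmi(p3):
    # I(X1; X2 | Y) from a 3D joint table of shape (d1, d2, dy):
    # H(X1, Y) + H(X2, Y) - H(Y) - H(X1, X2, Y).
    return (entropy(p3.sum(axis=1)) + entropy(p3.sum(axis=0))
            - entropy(p3.sum(axis=(0, 1))) - entropy(p3))

def lower_bound_via_redundancy(q_star, r_star, p12):
    # R = I_{q*}(X1; X2; Y) = I_{q*}(X1; X2) - I_{q*}(X1; X2 | Y)
    R = mi(q_star.sum(axis=2)) - cmi(q_star)
    return R - mi(p12) + cmi(r_star)   # redundancy-based lower bound on S
```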
(Unimodal and multimodal loss) The loss of a given unimodal classifier f_i ∈ℱ_i is given by L(f_i) = 𝔼_p(x_i,y)[ ℓ( f_i(x_i), y) ] for a loss function over the label space ℓ: 𝒴×𝒴→ℝ^≥0. We denote the same for multimodal classifier f_M ∈ℱ_M, with a slight abuse of notation L(f_M) = 𝔼_p(x_1,x_2,y)[ ℓ( f_M(x_1, x_2), y) ] for a loss function over the label space ℓ. (Unimodal and multimodal accuracy) The accuracy of a given unimodal classifier f_i ∈ℱ_i is given by P_acc (f_i) = 𝔼_p [ 1[ f_i(x_i) = y ] ]. We denote the same for multimodal classifier f_M ∈ℱ_M, with a slight abuse of notation P_acc (f_M) = 𝔼_p [ 1[ f_M(x_1, x_2) = y ] ]. An unimodal classifier f_i^* is Bayes-optimal (or simply optimal) with respect to a loss function L if L(f_i^*) ≤ L(f'_i) for all f'_i ∈ℱ_i. Similarly, a multimodal classifier f_M^* is optimal with respect to loss L if L(f_M^*) ≤ L(f'_M) for all f'_M ∈ℱ_M. Bayes optimality can also be defined with respect to accuracy, if P_acc (f_i^*) ≥ P_acc (f'_i) for all f'_i ∈ℱ_i for unimodal classifiers, or if P_acc (f_M^*) ≥ P_acc (f'_M) for all f'_M ∈ℱ_M for multimodal classifiers. The crux of our method is to establish a connection between modality disagreement and a lower bound on synergy. (Modality disagreement) Given X_1, X_2, and a target Y, as well as unimodal classifiers f_1 and f_2, we define modality disagreement as α(f_1,f_2) = 𝔼_p(x_1,x_2) [d(f_1,f_2)] where d: 𝒴×𝒴→ℝ^≥0 is a distance function in label space scoring the disagreement of f_1 and f_2's predictions, where the distance function d must satisfy some common distance properties, following <cit.>: (Relaxed triangle inequality) For the distance function d: 𝒴×𝒴→ℝ^≥0 in label space scoring the disagreement of f_1 and f_2's predictions, there exists c_d ≥ 1 such that ∀ŷ_1, ŷ_2, ŷ_3 ∈𝒴̂, d(ŷ_1, ŷ_2) ≤ c_d ( d(ŷ_1, ŷ_3) + d(ŷ_3, ŷ_2) ). (Inverse Lipschitz condition) For the function d, it holds that for all f, 𝔼 [d(f(x_1,x_2), f^*(x_1,x_2))] ≤ |L(f)- L(f^*)| where f^* is the Bayes optimal multimodal classifier with respect to loss L, and 𝔼 [d(f_i(x_i), f_i^*(x_i))] ≤ |L(f_i)- L(f_i^*)| where f_i^* is the Bayes optimal unimodal classifier with respect to loss L. (Classifier optimality) For any unimodal classifiers f_1,f_2 in comparison to the Bayes' optimal unimodal classifiers f_1^*,f_2^*, there exists constants ϵ_1,ϵ_2>0 such that | L(f_1) - L(f_1^*) |^2 ≤ϵ_1, | L(f_2) - L(f_2^*) |^2 ≤ϵ_2 We now restate Theorem <ref> from the main text obtaining , our second lower bound on synergy linking synergy to disagreement: (Lower-bound on synergy via disagreement, same as Theorem <ref>) We can relate synergy S and uniqueness U to modality disagreement α(f_1,f_2) of optimal unimodal classifiers f_1,f_2 as follows: = α(f_1,f_2) · c - max(U_1,U_2) ≤ S for some constant c depending on the label dimension |𝒴| and choice of label distance function d. Theorem <ref> implies that if there is substantial disagreement between the unimodal classifiers f_1 and f_2, it must be due to the presence of unique or synergistic information. If uniqueness is small, then disagreement must be accounted for by the presence of synergy, which yields a lower bound. The first part of the proof is due to an intermediate result by <cit.>, which studies how multi-view agreement can help train better multiview classifiers. We restate the key proof ideas here for completeness. 
The first step is to relate I_p(X_2;Y|X_1) to | L(f_1^*) - L(f^*) |^2, the difference in errors between the Bayes' optimal unimodal classifier f_1^* with the Bayes' optimal multimodal classifier f^* for some appropriate loss function L on the label space: | L(f_1^*) - L(f^*) |^2 = | 𝔼_X 𝔼_Y|X_1,X_2ℓ (f^*(x_1,x_2), y) - 𝔼_X 𝔼_Y|X_1ℓ (f^*(x_1,x_2), y) |^2 ≤ | 𝔼_Y|X_1,X_2ℓ (f^*(x_1,x_2), y) - 𝔼_Y|X_1ℓ (f^*(x_1,x_2), y) |^2 ≤KL (p(y|x_1,x_2), p(y|x_1) ) ≤𝔼_X KL (p(y|x_1,x_2), p(y|x_1) ) = I_p(X_2;Y|X_1), where we used Pinsker's inequality in (<ref>) and Jensen's inequality in (<ref>). Symmetrically, | L(f_2^*) - L(f^*) |^2 ≤ I_p(X_1;Y|X_2), and via the triangle inequality through the Bayes' optimal multimodal classifier f^* and the inverse Lipschitz condition we obtain 𝔼_p(x_1,x_2) [d(f_1^*,f_2^*)] ≤𝔼_p(x_1,x_2) [d(f_1^*,f^*)] + 𝔼_p(x_1,x_2) [d(f^*,f_2^*)] ≤ | L(f_1^*) - L(f^*) |^2 + | L(f_2^*) - L(f^*) |^2 ≤ I_p(X_2;Y|X_1) + I_p(X_1;Y|X_2). Next, we relate disagreement α(f_1,f_2) to I_p(X_2;Y|X_1) and I_p(X_1;Y|X_2) via the triangle inequality through the Bayes' optimal unimodal classifiers f_1^* and f_2^*: α(f_1,f_2) = 𝔼_p(x_1,x_2) [d(f_1,f_2)] ≤ c_d ( 𝔼_p(x_1,x_2) [d(f_1,f_1^*)] + 𝔼_p(x_1,x_2) [d(f_1^*,f_2^*)] + 𝔼_p(x_1,x_2) [d(f_2^*,f_2)] ) ≤ c_d ( ϵ_1' + I_p(X_2;Y|X_1) + I_p(X_1;Y|X_2) + ϵ_2' ) ≤ 2 c_d (max(I_p(X_1;Y|X_2), I_p(X_2;Y|X_1)) + max(ϵ_1', ϵ_2')) where used classifier optimality assumption for unimodal classifiers f_1, f_2 in (<ref>). Finally, we use consistency equations of PID relating U and S in (<ref>)-(<ref>): to complete the proof: α(f_1,f_2) ≤ 2 c_d (max(I_p(X_1;Y|X_2), I_p(X_2;Y|X_1)) + max(ϵ_1', ϵ_2')) = 2 c_d (max(U_1+S, U_2+S) + max(ϵ_1', ϵ_2')) = 2 c_d (S + max(U_1, U_2) + max(ϵ_1', ϵ_2')), In practice, setting f_1 and f_2 as neural network function approximators that can achieve the Bayes' optimal risk <cit.> results in max(ϵ_1', ϵ_2') = 0, and rearranging gives us the desired inequality. §.§ Proof of NP-hardness (Theorem <ref>) Our proof is based on a reduction from the restricted timetable problem, a well-known scheduling problem closely related to constrained edge coloring in bipartite graphs. Our proof description proceeds along 4 steps. * Description of our problem. * How the minimum entropy objective can engineer “classification” problems using a technique from <cit.>. * Description of the RTT problem of <cit.>, how to visualize RTT as a bipartite edge coloring problem, and a simple variant we call Q-RTT which RTT reduces to. * Polynomial reduction of Q-RTT to our problem. §.§.§ Formal description of our problem Recall that our problem was min_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y) where Δ_p_1,2,12 = { r ∈Δ: r(x_1,x_2)=p(x_1,x_2), r(x_i,y)=p(x_i,y) }. [Strictly speaking, the marginals p(x_1, x_2) and p(x_i, y) ought to be rational. This is not overly restrictive, since in practice these marginals often correspond to empirical distributions which would naturally be rational.] Our goal is to find the minimum-entropy distribution over 𝒳_1 ×𝒳_2 ×𝒴 where the pairwise marginals over (X_1, X_2), (X_1, Y) and (X_2, Y) are specified as part of the problem. Observe that this description is symmetrical, X_i and Y could be swapped without loss of generality. §.§.§ Warm up: using the min-entropy objective to mimic multiclass classification We first note the strong similarity of our min-entropy problem to the classic min-entropy coupling problem in two variables. 
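In practice, the disagreement term α(f_1,f_2) is the only quantity in this bound that requires the unlabeled multimodal data, and it can be estimated by a simple average over paired inputs. The sketch below uses the squared distance between predicted label distributions and treats the multiplicative constant c (which, under the stated assumptions, comes out of the 2c_d factor in the proof) as a user-supplied parameter; function names are illustrative.

```python
import numpy as np

def modality_disagreement(f1, f2, unlabeled_pairs):
    """alpha(f1, f2): average d(f1(x1), f2(x2)) over unlabeled multimodal pairs.
    f1 and f2 are callables returning probability vectors over the label space;
    d is taken to be the squared distance between the two predictions."""
    dists = []
    for x1, x2 in unlabeled_pairs:
        y1, y2 = np.asarray(f1(x1), float), np.asarray(f2(x2), float)
        dists.append(float(np.sum((y1 - y2) ** 2)))
    return float(np.mean(dists))

def lower_bound_via_disagreement(alpha, U1, U2, c=0.5):
    """Disagreement-based lower bound alpha * c - max(U1, U2).
    The default c = 1 / (2 * c_d) with c_d = 1 is an assumption matching the
    constants appearing in the proof; adjust it for other distance functions."""
    return alpha * c - max(U1, U2)
```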
There where the goal is to find the min-entropy joint distribution over 𝒳×𝒴 given fixed marginal distributions of p(x) and p(y). This was shown to be an NP-hard problem which has found many practical applications in recent years. An approximate solution up to 1 bit can be found in polynomial time (and is in fact the same approximation we give to our problem). Our NP-hardness proof involves has a similar flavor as <cit.>, which is based on a reduction from the classic subset sum problem, exploiting the min-entropy objective to enforce discrete choices. Subset sum There are d items with value c_1 … c_d ≥ 0, which we assume WLOG to be normalized such that ∑_i^d c_i = 1. Our target sum is 0 ≤ T ≤ 1. The goal is to find if some subset 𝒮⊆ [d] exists such that ∑_i ∈𝒮 c_i = T. Reduction from subset sum to min-entropy coupling <cit.> Let 𝒳 be the d items and 𝒴 be binary, indicating whether the item was chosen. Our joint distribution is of size |𝒳| × |𝒴|. We set the following constraints on marginals. * p(x_i) = c_i for all i, (row constraints) * p(include)=T, p(omit)=1-T, (column constraints) Constraints (i) split the value of each item additively into nonnegative components to be included and not included from our chosen subset, while (ii) enforces that the items included sum to T. Observe that the min-entropy objective H(X,Y) = H(Y|X)+H(X), which is solely dependent on H(Y|X) since H(X) is a constant given marginal constraints on X. Thus, H(Y|X) is nonnegative and is only equal to 0 if and only if Y is deterministic given X, i.e., r(x_i, include) = 0 or r(x_i, omit) = 0. If our subset sum problem has a solution, then this instantiation of the min-entropy coupling problem would return a deterministic solution with H(Y|X)=0, which in turn corresponds to a solution in subset sum. Conversely, if subset sum has no solution, then our min-entropy coupling problem is either infeasible OR gives solutions where H(Y|X) > 0 strictly, i.e., Y|X is non-deterministic, which we can detect and report. Relationship to our problem Observe that our joint entropy objective may be decomposed H_r(X_1, X_2, Y) = H_r(Y|X_1, X_2) + H_r(X_1, X_2). Given that p(x_1, x_2) is fixed under Δ_p_1,2,12, our objective is equivalent to minimizing H_r(Y| X_1, X_2). Similar to before, we know that H_r(Y| X_1, X_2) is nonnegative and equal to zero if and only if Y is deterministic given (X_1, X_2). Intuitively, we can use 𝒳_1, 𝒳_2 to represent vertices in a bipartite graph, such that (X_1, X_2) are edges (which may or may not exist), and 𝒴 as colors for the edges. Then, the marginal constraints for p(x_1, x_2) could be used alongside the min-entropy objective to ensure that each edge has exactly one color. The marginal constraints p(x_1, y) and p(x_2, y) tell us (roughly speaking) the number of edges of each color that is adjacent to vertices in 𝒳_1 and 𝒳_2. However, this insight alone is not enough; first, edge coloring problems in bipartite graphs (e.g., colorings in regular bipartite graphs) can be solved in polynomial time, so we need a more difficult problem. Second, we need an appropriate choice of marginals for p(x_i, y) that does not immediately `reveal' the solution. Our proof uses a reduction from the restricted timetable problem, one of the most primitive scheduling problems available (and closely related to edge coloring or multicommodity network flow). 
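To make the warm-up reduction concrete, the following small sketch builds the marginals of the subset-sum instance and reads a subset off a zero-conditional-entropy coupling; it is purely illustrative and all names are our own.

```python
import numpy as np

def subset_sum_marginals(values, target):
    """Marginals of the warm-up construction: rows are the d items with
    p(x_i) = c_i (normalized), columns are {include, omit} with
    p(include) = T and p(omit) = 1 - T."""
    c = np.asarray(values, dtype=float)
    c = c / c.sum()
    return c, np.array([target, 1.0 - target])

def coupling_to_subset(Q, tol=1e-9):
    """Given a feasible coupling Q (items x {include, omit}) with H(Y|X) = 0,
    every row places its mass in exactly one column; the 'include' column then
    encodes a subset whose values sum to the target."""
    include, omit = Q[:, 0], Q[:, 1]
    assert np.all((include < tol) | (omit < tol)), "Y is not deterministic given X"
    return [i for i, m in enumerate(include) if m > tol]
```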
§.§.§ Restricted Timetable Problem (RTT) The restricted timetable (RTT) problem was introduced by <cit.>, and concerns how to schedule teachers to the classes they must teach. It comprises the following: * A collection of { T_1, …, T_n }, where T_i ⊆ [3]. These represent n teachers, each of which is available for the hours given in T_i. * m students, each of which is available at any of the 3 hours. * A binary matrix { 0, 1}^ n × m. R_ij = 1 if teacher i is required to teach class j, and 0 otherwise. Since R_ij is binary, each class is taught by a teacher at most once. * Each teacher is tight, i.e., |T_i| = ∑_j=1^m R_ij. That is, every teacher must teach whenever they are available. Suppose there are exactly 3 hours a day. The problem is to determine if there exists a meeting function f: [n] × [m] × [3] →{ 0, 1}, where our goal is to have f(i,j,h) = 1 if and only if teacher i teaches class j at the h-th hour. We require the following conditions in our meeting function: * f(i,j,h)=1 ⟹ h ∈ T_i. This implies that teachers are only teaching in the hours they are available. * ∑_h ∈ [3] f(i,j,h) = R_ij for all i ∈ [n], j∈[m]. This ensures that every class gets the teaching it requires, as specified by R. * ∑_i ∈ [n] f(i,j,h) ≤ 1 for all j ∈ [m] and h ∈ [3]. This ensures no class is taught by more than one teacher at once. * ∑_j ∈ [m] f(i,j,h) ≤ 1 for all i ∈ [n] and h ∈ [3]. This ensures no teacher is teaching more than one class simultaneously. <cit.> showed that RTT is NP-hard via a clever reduction from 3-SAT. Our strategy is to reduce RTT to our problem. Viewing RTT through the lens of bipartite edge coloring RTT can be visualized as a variant of constrained edge coloring in bipartite graphs (Figure <ref>). The teachers and classes are the two different sets of vertices, while R gives the adjacency structure. There are 3 colors available, corresponding to hours in a day. The task is to color the edges of the graph with these 3 colors such that * No two edges of the same color are adjacent. This ensures teachers and classes are teaching/taking at most one session at any given hour (conditions 3 and 4). * Edges adjacent to teacher i are only allowed colors in T_i. This ensures teachers are only teaching in available hours (condition 1). If every edge is colored while obeying the above conditions, then it follows from the tightness of teachers (in the definition of RTT) that every class is assigned their required lessons (condition 2). The decision version of the problem is to return if such a coloring is possible. Time Constrained RTT (Q-RTT) A variant of RTT that will be useful is when we impose restrictions on the number of classes being taught at any given hour. We call this Q-RTT, where Q = (q_1,q_2,q_3) ∈ℤ^3.
Q-RTT returns true if, in addition to the usual RTT conditions, the meeting function satisfies ∑_i ∈ [n],j ∈ [m] f(i,j,h) = q_h for each h ∈ [3]. That is, the total number of lessons taught by teachers in hour h is exactly q_h. From the perspective of edge coloring, Q-RTT simply imposes an additional restriction on the total number of edges of each color, i.e., there are q_k edges of color k for each k∈[3]. Obviously, RTT can be Cook reduced to Q-RTT: since there are only 3 hours and a total of g = ∑_i ∈ [n],j ∈ [m] R_ij lessons to be taught, there are at most 𝒪(g^2) ways of splitting the required number of lessons up amongst the 3 hours. Thus, we can solve RTT by making at most 𝒪(g^2) calls to Q-RTT. This is polynomial in the size of RTT, and we conclude that Q-RTT is NP-hard. §.§.§ Reduction of Q-RTT to our problem We will reduce Q-RTT to our problem. Let α = 1/(∑_i,j R_ij + 3m ), where 1/α should be seen as a normalizing constant given by the number of edges in a bipartite graph. One should think of α as an indicator of the boolean TRUE and 0 as FALSE. We use the following construction: * Let 𝒳_1 = [n] ∪𝒵, where 𝒵 = {Z_1, Z_2, Z_3}. From a bipartite graph interpretation, these form one set of vertices that we will match to classes. Z_1, Z_2, Z_3 are “holding rooms”, one for each of the 3 hours. Holding rooms act like teachers to which classes can be assigned simply to pass the time. They will not fulfill any constraints on R, but they can accommodate multiple classes at once. We will explain the importance of these holding rooms later. * Let 𝒳_2 = [m]. These form the other set of vertices, one for each class. * Let 𝒴 = [3] ∪{ 0 }. 1, 2, and 3 are the 3 distinct hours, corresponding to edge colors. 0 is a special “null” color which will only be used when coloring edges adjacent to the holding rooms. * Let p(i, j, ·) = α· R_ij for i ∈ [n], j ∈ [m], and p(i, j, ·) = α for all i ∈𝒵, j ∈ [m]. Essentially, there is an edge between a teacher and a class if R dictates it. There are also always edges from every holding room to each class. * For i ∈ [n], set p(i, ·, h) = α if h ∈ T_i, and 0 otherwise. For Z_i ∈𝒵, we set p(Z_i, ·, h) = α· q_i if h = 0, p(Z_i, ·, h) = α· (m-q_i) if h = i, and 0 otherwise. In other words, at hour h, when a class is not assigned to some teacher (which would contribute to q_h), it must be placed in holding room Z_h. * Let p(·, j, h) = α for h ∈ [3], and p(·, j, 0) = α·∑_i ∈ [n] R_i, j. The former constraint means that for each of the 3 hours, the class must either be taking some lesson with a teacher or be in the holding room. The latter constraint assigns the special “null” value to the holding rooms which were not used by that class.
A solution to our construction with 0 conditional entropy implies a valid solution to Q-RTT Suppose that our construction returns a distribution r such that every entry r(x_1,x_2,y) is either α or 0. We claim that the meeting function f(i,j,h)=1 if r(i,j,h)=α and 0 otherwise solves Q-RTT. * Teachers are only teaching in the hours they are available, because of our marginal constraint on p(i,·, h). * Every class gets the teaching it needs. This follows from the fact that teachers are tight and the marginal constraint p(i,·,h), which forces teachers to be teaching whenever they can. The classes are getting the lessons from the right teachers because of the marginal constraint on p(i, j, ·), since teachers who are not supposed to teach a class have those marginal values set to 0. * No class is taught by more than one teacher at once. This follows from the marginal constraint p(·, j, h). For each of the hours, a class is with either a single teacher or the holding room. * No teacher is teaching more than one class simultaneously. This holds again from our marginal constraint on p(i,·, h). * Lastly, the total number of lessons (not in holding rooms) held in each hour is q_h as required by Q-RTT. To see why, we consider each color (hour). Each color (excluding the null color) is used exactly m times by virtue of p(·, j, h). Some of these are in holding rooms, others are with teachers. The former (over all classes) is given by m-q_h because of our constraint on p(i, ·, h), which means that exactly q_h lessons are held with teachers in hour h, as required. A valid solution to Q-RTT implies a solution to our construction with 0 conditional entropy Given a solution to Q-RTT, we recover a candidate solution to our construction in a natural way. If teacher i is teaching class j in hour h, then color edge ij with color h, i.e., r(i,j,h)=α and r(i,j,h')=0 if h' ≠ h. Since in RTT each teacher and class can be assigned one lesson per hour at most, there will be no clashes with this assignment. For all other i ∈ [n], j∈[m] where R_ij=0, we assign r(i,j,·)=0. Now, we will also need to assign classes to holding rooms. For h ∈ [3], we set r(Z_h, j, h) = α if class j was not assigned to any teacher in hour h. If class j was assigned some teacher in hour h, then r(Z_h, j, 0)=α, i.e., we give it the special null color. All other entries are given a value of 0. We can verify: * r is a valid probability distribution. The nonnegativity of r follows from the fact that α > 0 strictly. We need to check that r sums to 1. We break this down into two cases based on whether the first argument of r is a teacher i ∈ [n] or a holding room Z_h.
In Case 1, we have ∑_i ∈ [n], h ∈ [3] ∪{ 0 }, j ∈ [m] r(i,j,h) = ∑_i ∈ [n], h ∈ [3], j ∈ [m] r(i,j,h) = α·∑_i ∈ [n], j ∈ [m] R_ij, where the first equality follows from the fact that we never color a teacher-class edge with the null color, and the second equality holds because every class gets its teaching requirements satisfied. In Case 2, we know that by definition every class is matched to every holding room and assigned either the null color or that room's color, hence ∑_i ∈{Z_1, Z_2, Z_3}, h ∈ [3] ∪{ 0 }, j ∈ [m] r(i, j, h) = 3m·α. Summing them up, we have α·( 3m + ∑_i ∈ [n], j ∈ [m] R_ij) = 1 (by our definition of α). * This r distribution has only entries in α or 0. This follows by definition. * This r distribution has minimum conditional entropy. For a fixed i,j, r(i,j,·) is either α or 0. That is, Y is deterministic given X_1,X_2, hence H(Y |X_1, X_2)=0. * All 3 marginal constraints in our construction are obeyed. We check them in turn. * Marginal constraint r(i, j) = p(i, j). When i ∈ [n]: (i) when R_ij=1, exactly one hour h is assigned to teacher i and class j, hence r(i,j)=α = p(i,j) as required; (ii) when R_ij=0, we have r(i,j)=0 as specified. Now when i ∈{Z_1, Z_2, Z_3 }, we have r(i,j,·)=α = p(i,j), since every holding room–class edge is assigned either the room's color or the special null color. * Marginal constraint r(i, h)=p(i,h). When i ∈ [n], this follows directly from tightness. Similarly, when i ∈{ Z_1, Z_2 ,Z_3}, we have by definition of Q-RTT that the number of assignments to holding room Z_h equals m - q_h for hour h, and consequently, q_h null colors adjacent to Z_h as required. * Marginal constraint r(j,h)=p(j,h). For every h ∈ [3], the class is assigned either to a teacher or a holding room, so this is equal to α as required. For h = 0, i.e., the null color, this color is used exactly ∑_i ∈ [n] R_ij times on edges adjacent to class j (these are the hours in which class j is with a teacher and its holding room is left unused), making r(j,0)=α·∑_i ∈ [n] R_ij = p(j,0) as required. Thus, if Q-RTT returns TRUE, our construction will also return a solution with entries in { 0, α}, and vice versa. Corollary The decision problem of whether there exists a distribution r ∈Δ_p_1,2,12 such that H(Y| X_1, X_2) = 0 is NP-complete. This follows because the problem is in NP, since checking if Y is deterministic (i.e., H(Y|X_1, X_2) = 0) can be done in polynomial time, while NP-hardness follows from the same argument as above. §.§ Upper bound on synergy (Theorem <ref>) We begin by restating Theorem <ref> from the main text: (Upper-bound on synergy, same as Theorem <ref>). S ≤ H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_12,y H_r(X_1, X_2, Y) - max_q ∈Δ_p_1,2 I_q({X_1,X_2}; Y) = S̅, where Δ_p_12,y = { r ∈Δ : r(x_1,x_2)=p(x_1,x_2), r(y)=p(y) }. Recall that this upper bound boils down to finding max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y). We have max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y) = max_r ∈Δ_p_1,2,12{ H_r(X_1, X_2) + H_r(Y) - H_r(X_1, X_2, Y) } = H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_1,2,12 H_r(X_1, X_2, Y) ≤ H_p(X_1, X_2) + H_p(Y) - min_r ∈Δ_p_12,y H_r(X_1, X_2, Y), where the first two equalities are by definition. The last inequality follows since Δ_p_12,y is a superset of Δ_p_1,2,12, which implies that the minimum over it is no larger. In practice, we use the slightly tighter bound which maximizes over all the pairwise marginals, max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y) ≤ H_p(X_1, X_2) + H_p(Y) - max{ min_r ∈Δ_p_12,y H_r(X_1, X_2, Y), min_r ∈Δ_p_1,x_2 H_r(X_1, X_2, Y), min_r ∈Δ_p_2,x_1 H_r(X_1, X_2, Y) }, where Δ_p_1,x_2 = { r ∈Δ : r(x_1,y)=p(x_1,y), r(x_2)=p(x_2) } and Δ_p_2,x_1 is defined symmetrically.
Estimating S̅ using min-entropy couplings We only show how to compute min_r ∈Δ_p_12,y H_r(X_1, X_2, Y), since the other variants can be computed in the same manner via symmetry. We recognize that by treating (X_1, X_2)=X as a single variable, we recover the classic min-entropy coupling over X and Y, which is still NP-hard but admits good approximations <cit.>. There are many methods to estimate such a coupling; for example, <cit.> give a greedy algorithm running in linear-logarithmic time, which was further proven by <cit.> to be a 1-bit approximation of the minimum-entropy coupling [This is a special case when there are 2 modalities. For more modalities, the bounds will depend on the sizes and number of signals.]. Another line of work is by <cit.>, which constructs an appropriate coupling and shows that it is within 1 bit of the lower bound H(p(x_1,x_2) ∧ p(y)), where ∧ is the greatest-lower-bound operator, which they showed in <cit.> can be computed in linear-logarithmic time. We very briefly describe this method; more details may be found in <cit.> directly. Remark A very recent paper by <cit.> shows that one can get an approximation tighter than 1 bit. We leave the incorporation of these more advanced methods as future work. Without loss of generality, suppose that 𝒳 and 𝒴 are ordered and indexed such that p(x) and p(y) are sorted in non-increasing order of the marginal constraints, i.e., p(X=x_i) ≥ p(X=x_j) for all i ≤ j. We also assume WLOG that the supports of X and Y are of the same size n; if they are not, we pad the smaller one with dummy values and introduce marginals that constrain these values to never occur (and set n accordingly if needed). For simplicity, we will just write p_i and q_j for p(X=x_i) and p(Y=y_j) respectively. Given 2 distributions p, q we say that p is majorized by q, written as p ≼ q, if and only if ∑_i=1^k p_i ≤∑_i=1^k q_i for all k ∈ 1 … n. As <cit.> point out, there is a strong link between majorization and Schur-convex functions; in particular, if p ≼ q, then we have H(p) ≥ H(q). Indeed, if we treat ≽ as a partial order and consider the set 𝒫^n = { p = (p_1, …, p_n) : p_i ∈ [0, 1], ∑_i^n p_i = 1, p_i≥ p_i+1} of finite (ordered) distributions with support size n and non-increasing probabilities, then we obtain a lattice with a unique greatest lower bound (∧) and least upper bound (∨). Then, <cit.> show that p ∧ q can be computed recursively as p ∧ q = α(p, q) = (a_1, …, a_n), where a_i = min{∑_j=1^i p_j, ∑_j=1^i q_j} - ∑_j=1^i-1 a_j. It was shown by <cit.> that any coupling satisfying the marginal constraints given by p and q, i.e., M ∈ C(p, q) = { M = m_ij: ∑_j m_ij = p_i, ∑_i m_ij = q_j}, has entropy H(M) ≥ H(p ∧ q). In particular, this includes the min-entropy one. Since we only need the optimal value of such a coupling and not the actual coupling per se, we can plug the value of H(p ∧ q) into the minimization term (<ref>), which yields an upper bound for max_r ∈Δ_p_1,2,12 I_r({X_1,X_2}; Y), and hence still a valid upper bound on the synergy S itself (a short computational sketch of this p ∧ q construction is given below). § EXPERIMENTAL DETAILS §.§ Verifying lower and upper bounds Synthetically generated datasets: To test our derived bounds on synthetic data, we randomly sampled 100,000 distributions of {X_1, X_2, Y} to calculate their bounds and compare with their actual synergy values. We set X_1, X_2, and Y as random binary values, so each distribution can be represented as a size 8 vector of randomly sampled entries that sum up to 1.
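As referenced above, here is a minimal sketch of the greatest-lower-bound construction p ∧ q and of how its entropy is used: H(p ∧ q) lower-bounds the entropy of any coupling of p and q, so substituting it for the min-entropy term keeps the expression a valid upper bound. Function names are illustrative.

```python
import numpy as np

def entropy(t):
    t = np.asarray(t, dtype=float).ravel()
    t = t[t > 0]
    return float(-(t * np.log2(t)).sum())

def greatest_lower_bound(p, q):
    """p ∧ q in the majorization lattice: sort both distributions in
    non-increasing order (padding the shorter with zeros), take the pointwise
    minimum of the partial sums, and difference the result."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    n = max(len(p), len(q))
    p = np.pad(p, (0, n - len(p)))
    q = np.pad(q, (0, n - len(q)))
    cum = np.minimum(np.cumsum(p), np.cumsum(q))
    return np.diff(np.concatenate(([0.0], cum)))

# Usage sketch: treat (X1, X2) as a single variable X and couple it with Y.
# h_coupling_lb = entropy(greatest_lower_bound(p12.ravel(), py))
# Using h_coupling_lb in place of min_r H_r(X1, X2, Y) keeps the upper bound valid.
```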
Results: We calculated the lower bound via redundancy, lower bound via disagreement, and upper bound of all distributions and plotted them with actual synergy value (Figure <ref>). We define a distribution to be on the boundary if its lower/upper bound is within 10% difference from its actual synergy value. We conducted the least mean-square-error fitting on these distributions close to the boundary. We plot actual synergy against in Figure <ref> (left), and find that it again tracks a lower bound of synergy. In fact, we can do better and fit a linear trend y=1.095x on the distributions along the margin (RMSE =0.0013). We also plot actual synergy against computed in Figure <ref> (middle). As expected, the lower bound closely tracks actual synergy. Similarly, we can again fit a linear model on the points along the boundary, obtaining y=1.098x with a RMSE of 0.0075 (see this line in Figure <ref> (middle)). Finally, we plot actual synergy against estimated in Figure <ref> (right). Again, we find that the upper bound consistently tracks the highest attainable synergy - we can fit a single constant y=x-0.2 to obtain an RMSE of 0.0022 (see this line in Figure <ref> (right)). This implies that our bound enables both accurate comparative analysis of relative synergy across different datasets, and precise estimation of absolute synergy. Real-world datasets: We also use the large collection of real-world datasets in MultiBench <cit.>: (1) MOSI: video-based sentiment analysis <cit.>, (2) MOSEI: video-based sentiment and emotion analysis <cit.>, (3) MUStARD: video-based sarcasm detection <cit.>, (5) MIMIC: mortality and disease prediction from tabular patient data and medical sensors <cit.>, and (6) ENRICO: classification of mobile user interfaces and screenshots <cit.>. While the previous bitwise datasets with small and discrete support yield exact lower and upper bounds, this new setting with high-dimensional continuous modalities requires the approximation of disagreement and information-theoretic quantities: we train unimodal neural network classifiers f̂_θ(y|x_1) and f̂_θ(y|x_2) to estimate disagreement, and we cluster representations of X_i to approximate the continuous modalities by discrete distributions with finite support to compute lower and upper bounds. Implementation details: We first apply PCA to reduce the dimension of multimodal data. For the test split, we use unsupervised clustering to generate 20 clusters. We obtain a clustered version of the original dataset 𝒟={(x_1,x_2,y)} as 𝒟_cluster={(c_1,c_2,y)} where c_i∈{1,…,20} is the ID of the cluster that x_i belongs to. In our experiments, where 𝒴 is typically a classification task, we set the unimodal classifiers f_1 = p̂(y|x_1) and f_2 = p̂(y|x_2) as the Bayes optimal classifiers for multiclass classification tasks. For classification, 𝒴 is the set of k-dimensional 1-hot vectors. Given two logits ŷ_1, ŷ_2 obtained from x_1, x_2 respectively, define d(ŷ_1, ŷ_2) = (ŷ_1-ŷ_2)^2. We have that c_d=1, and ϵ_1 = |L(f_1) - L(f_1^*)|^2 = 0 and ϵ_2 = |L(f_2) - L(f_2^*)|^2 = 0 for well-trained neural network unimodal classifiers f_1 and f_2 for Theorem <ref>. For datasets with 3 modalities, we perform the experiments separately for each of the 3 modality pairs, before taking an average over the 3 modality pairs. Extending the definitions of redundancy, uniqueness, and synergy, as well as our derived bounds on synergy for 3 or more modalities is an important open question for future work. 
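A minimal sketch of the discretization step described above follows; the PCA dimensionality and the use of scikit-learn's KMeans are our assumptions (the text only specifies PCA followed by 20 unsupervised clusters on the test split), and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def discretize_modality(features, n_components=32, n_clusters=20, seed=0):
    """Reduce one modality's features with PCA, then assign each test example
    one of n_clusters cluster IDs (the discrete surrogate for x_i)."""
    n_components = min(n_components, features.shape[1])
    z = PCA(n_components=n_components).fit_transform(features)
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(z)

def joint_table(a, b, ka, kb):
    """Empirical joint distribution over two discrete variables."""
    t = np.zeros((ka, kb))
    for i, j in zip(a, b):
        t[i, j] += 1
    return t / t.sum()

# c1 = discretize_modality(X1_test); c2 = discretize_modality(X2_test)
# p1y = joint_table(c1, y_test, 20, n_classes)
# p2y = joint_table(c2, y_test, 20, n_classes)
# p12 = joint_table(c1, c2, 20, 20)   # inputs to the bound estimators above
```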
§.§ Relationships between agreement, disagreement, and interactions 1. The relationship between redundancy and synergy: We give some example distributions to analyze when the lower bound based on redundancy is high or low. The bound is high for distributions where X_1 and X_2 are independent, but Y=1 sets X_1 ≠ X_2 to increase their dependence (i.e., agreement XOR distribution in Table <ref>). Since X_1 and X_2 are independent but become dependent given Y, I(X_1;X_2;Y) is negative, and the bound is tight = 1 ≤ 1=S. Visual Question Answering 2.0 <cit.> falls under this category, with S = 4.92,R=0.79, where the image and question are independent (some questions like `what is the color of the object' or `how many people are there' can be asked for many images), but the answer connects the two modalities, resulting in dependence given the label. As expected, the estimated lower bound for agreement synergy: = 4.03 ≤ 4.92=S. Conversely, the bound is low for Table <ref> with the probability mass distributed uniformly only when y=x_1=x_2 and 0 elsewhere. As a result, X_1 is always equal to X_2 (perfect dependence), and yet Y perfectly explains away the dependence between X_1 and X_2 so I(X_1;X_2|Y) = 0: = 0 ≤ 0=S. Note that this is an example of perfect redundancy and zero synergy - for an example with synergy, refer back to disagreement XOR in Table <ref> - due to disagreement there is non-zero I(X_1;X_2) but the label explains some of the relationships between X_1 and X_2 so I(X_1;X_2|Y) < I(X_1;X_2): = -0.3 ≤ 1=S. A real-world example is multimodal sentiment analysis from text, video, and audio of monologue videos on MOSEI, R=0.26 and S=0.04, and as expected the lower bound is small = 0.01 ≤ 0.04=S. 2. The relationship between disagreement and synergy: To give an intuition of the relationship between disagreement, uniqueness, and synergy, we use one illustrative example shown in Table <ref>, which we call disagreement XOR. We observe that there is maximum disagreement between marginals p(y|x_1) and p(y|x_2): the likelihood for y is high when y is the same bit as x_1, but reversed for x_2. Given both x_1 and x_2: y seems to take a `disagreement' XOR of the individual marginals, i.e. p(y|x_1,x_2) = p(y|x_1) XOR p(y|x_2), which indicates synergy (note that an exact XOR would imply perfect agreement and high synergy). The actual disagreement is 0.15, synergy is 0.16, and uniqueness is 0.02, indicating a very strong lower bound =0.13 ≤ 0.16=S. A real-world equivalent dataset is MUStARD for sarcasm detection from video, audio, and text <cit.>, where the presence of sarcasm is often due to a contradiction between what is expressed in language and speech, so disagreement α=0.12 is the highest out of all the video datasets, giving a lower bound =0.11 ≤ 0.44 = S. On the contrary, the lower bound is low when all disagreement is explained by uniqueness (e.g., y=x_1, Table <ref>), which results in = 0 ≤ 0 = S (α and U cancel each other out). A real-world equivalent is MIMIC involving mortality and disease prediction from tabular patient data and time-series medical sensors <cit.>. Disagreement is high α=0.13 due to unique information U_1=0.25, so the lower bound informs us about the lack of synergy = -0.12 ≤ 0.02 = S. Finally, the lower bound is loose when there is synergy without disagreement, such as agreement XOR (y=x_1 XOR x_2, Table <ref>) where the marginals p(y|x_i) are both uniform, but there is full synergy: = 0 ≤ 1 = S. 
Real-world datasets which fall into agreement synergy include UR-FUNNY, where there is low disagreement in predicting humor (α=0.03) and relatively high synergy (S=0.18), which results in a loose disagreement-based lower bound of 0.01 ≤ 0.18=S. The accompanying figure compares the qualities of the bounds when there is agreement and disagreement synergy: during agreement synergy, the redundancy-based lower bound is tight, the disagreement-based lower bound is loose, and the upper bound is tight; for disagreement synergy, the redundancy-based lower bound is loose, the disagreement-based lower bound is tight, and the upper bound is loose with respect to the true S. 3. On upper bounds for synergy: We also run experiments to obtain estimated upper bounds on synthetic and MultiBench datasets. The quality of the upper bound shows some intriguing relationships with that of the lower bounds. For distributions with perfect agreement synergy such as y = x_1 XOR x_2 (Table <ref>), the upper bound S̅ = 1 ≥ 1 = S is really close to the true synergy, the redundancy-based lower bound 1 ≤ 1 = S is also tight, but the disagreement-based lower bound 0 ≤ 1 = S is loose. For distributions with disagreement synergy (Table <ref>), the upper bound S̅ = 0.52 ≥ 0.13 = S far exceeds the actual synergy, the redundancy-based lower bound -0.3 ≤ 1=S is much lower than the actual synergy, but the disagreement-based lower bound 0.13 ≤ 0.16=S is tight (see relationships in Figure <ref>). Finally, while some upper bounds (e.g., MUStARD, MIMIC) are close to the true S, some of the other examples in Table <ref> show bounds that are quite weak. This could be because (i) there indeed exist high-synergy distributions that match 𝒟_i and 𝒟_M, but these are rare in the real world, or (ii) our approximation used in Theorem <ref> is mathematically loose. We leave these as open directions for future work. § APPLICATION 1: ESTIMATING MULTIMODAL PERFORMANCE FOR FUSION Formally, we estimate performance via a combination of the bound from <cit.> and Fano's inequality <cit.>, which together yield tight bounds on performance as a function of the total information I_p({X_1,X_2}; Y). We restate Theorem <ref> from the main text: Let P_acc(f_M^*) = 𝔼_p [ 1[ f_M^*(x_1,x_2) = y ] ] denote the accuracy of the Bayes' optimal multimodal model f_M^* (i.e., P_acc (f_M^*) ≥ P_acc (f'_M) for all f'_M ∈ℱ_M). We have that 2^(I_p({X_1,X_2}; Y)-H(Y)) ≤ P_acc(f_M^*) ≤ (I_p({X_1,X_2}; Y) + 1)/log |𝒴|, where we can plug in R + U_1 + U_2 + S̲ ≤ I_p({X_1,X_2}; Y) ≤ R + U_1 + U_2 + S̅ to obtain lower and upper bounds on optimal multimodal performance. We use the bound from <cit.>, where the Bayes' optimal classifier f_M^*, given x_1,x_2, outputs the y that maximizes p(Y=y|x_1,x_2) over all y ∈𝒴. The probability that this classifier succeeds is max_y p(Y=y|x_1,x_2), which is 2^-H_∞(Y|X_1=x_1,X_2=x_2), where H_∞(Y|X_1=x_1,X_2=x_2) is the min-entropy of the random variable Y conditioned on X_1=x_1, X_2=x_2. Over all inputs (x_1,x_2), the accuracy is P_acc(f_M^*) = 𝔼_x_1,x_2[ 2^-H_∞(Y|X_1=x_1,X_2=x_2)] ≥ 2^-𝔼_x_1,x_2[ H_∞(Y|X_1=x_1,X_2=x_2) ] ≥ 2^-𝔼_x_1,x_2[ H_p(Y|X_1=x_1,X_2=x_2) ] ≥ 2^-H_p(Y|X_1,X_2) = 2^(I_p({X_1,X_2}; Y)-H(Y)). The upper bound is based on Fano's inequality <cit.>. Starting with H_p(Y|X_1,X_2) ≤ H(P_err) + P_err (log |𝒴| -1) and assuming that Y is uniform over 𝒴, we rearrange the inequality to obtain P_acc(f_M^*) ≤ (H(Y) - H_p(Y|X_1,X_2) + log 2)/log |𝒴| = (I_p({X_1,X_2}; Y) + 1)/log |𝒴|. Finally, we summarize estimated multimodal performance as the average of the estimated lower and upper bounds on performance: P̂_M = (P_acc^lower(f_M^*) + P_acc^upper(f_M^*))/2.
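The two inequalities above translate directly into an estimator of multimodal performance. The sketch below assumes all information quantities are measured in bits and that the inputs come from plugging the synergy bounds into the consistency equations; names are illustrative.

```python
import numpy as np

def performance_bounds(I_lower, I_upper, H_y, n_classes):
    """Lower/upper bounds on Bayes-optimal multimodal accuracy from bounds on
    I_p({X1,X2}; Y), plus their midpoint as the summary estimate P_hat."""
    p_low = 2.0 ** (I_lower - H_y)                              # 2^(I - H(Y))
    p_high = min((I_upper + 1.0) / np.log2(n_classes), 1.0)     # Fano-based bound
    return p_low, p_high, 0.5 * (p_low + p_high)

# Example usage with the bounds derived earlier:
# p_low, p_high, p_hat = performance_bounds(R + U1 + U2 + S_lower,
#                                           R + U1 + U2 + S_upper,
#                                           H_y, n_classes)
```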
Unimodal and multimodal performance: Table <ref> summarizes all final performance results for each dataset, spanning unimodal models and simple or complex multimodal fusion paradigms, where each type of model is represented by the most recent state-of-the-art method found in the literature. § APPLICATION 2: SELF-SUPERVISED MULTIMODAL LEARNING VIA DISAGREEMENT §.§ Training procedure We continuously pretrain MERLOT Reserve Base on the datasets before finetuning. The continuous pretraining procedure is similar to Contrastive Span Training, with the difference that we add extra loss terms that correspond to modality disagreement. The pretraining procedure of MERLOT Reserve minimizes a sum of 3 component losses, ℒ=ℒ_𝓉ℯ𝓍𝓉 + ℒ_𝒶𝓊𝒹𝒾ℴ + ℒ_𝒻𝓇𝒶𝓂ℯ where each of the component losses is a contrastive objective. Each of the objectives aims to match an independent encoding of masked tokens of the corresponding modality with the output of a Joint Encoder, which takes as input the other modalities and, possibly, unmasked tokens of the target modality. We modify the procedure by adding disagreement losses between modalities to the objective. This is done by replacing the tokens of a modality with padding tokens before passing them to the Joint Encoder, and then calculating the disagreement between representations obtained when replacing different modalities. For example, ℒ_frame uses a representation of video frames found by passing audio and text into the Joint Encoder. Excluding one of the modalities and passing the other one into the Encoder separately leads to two different representations, f̂_t for prediction using only text and f̂_a for prediction using only audio. The distance between the representations is added to the loss. Thus, the modified component loss is ℒ_disagreement, frame = ℒ_frame + d_λ_text, audio( f̂_t, f̂_a ) where d_λ_text, audio(x, y)=max(0, d(x, y) - λ_text, audio), and d(x, y) is the cosine difference: d(x, y)=1 - x·y/|x||y| Similarly, we modify the other component losses by removing one modality at a time, and obtain the new training objective ℒ_disagreement=ℒ_𝒹𝒾𝓈𝒶ℊ𝓇ℯℯ𝓂ℯ𝓃𝓉, 𝓉ℯ𝓍𝓉 + ℒ_𝒹𝒾𝓈𝒶ℊ𝓇ℯℯ𝓂ℯ𝓃𝓉, 𝒶𝓊𝒹𝒾ℴ + ℒ_𝒹𝒾𝓈𝒶ℊ𝓇ℯℯ𝓂ℯ𝓃𝓉, 𝒻𝓇𝒶𝓂ℯ §.§ Training details We continuously pretrain and then finetune a pretrained MERLOT Reserve Base model on the datasets with a batch size of 8. During pretraining, we train the model for 960 steps with a learning rate of 0.0001, and no warm-up steps, and use the defaults for other hyperparameters. For every dataset, we fix two of {λ_text, audio, λ_vision, audio, λ_text, vision} to be +∞ and change the third one, which characterizes the most meaningful disagreement. This allows us to reduce the number of masked modalities required from 3 to 2 and thus reduce the memory overhead of the method. For Social-IQ, we set λ_text, vision to be 0. For UR-FUNNY, we set λ_text, vision to be 0.5. For MUStARD, we set λ_vision, audio to be 0. All training is done on TPU v2-8 accelerators, with continuous pretraining taking 30 minutes and using up to 9GB of memory. §.§ Dataset level analysis We visualize the impact of pairwise modality disagreement on model performance by fixing two modalities M_1, M_2 and a threshold t, and setting the modality pair-specific disagreement slack terms λ according to the rule λ_a, b= t, a=M_1, b=M_2 +∞, else This allows us to isolate d_λ_M_1, M_2 while ensuring that the other disagreement loss terms are 0. We also modify the algorithm to subtract d_λ_M_1, M_2 from the loss rather than adding it (see Section <ref>). 
By decreasing t, we encourage higher disagreement between the target modalities. In Figure <ref>, we plot the relationship between model accuracy and t for the MUStARD dataset to visualize how pairwise disagreement between modalities impacts model performance. §.§ Datapoint level analysis After continuously pretraining the model, we fix a pair of modalities (text and video) and find the disagreement in these modalities for each datapoint. We show examples of disagreement due to uniqueness and synergy in Figure <ref>. The first example shows a speaker using descriptive slides, leading to less unique information being present in the text and higher agreement between modalities. In the second example, the facial expression of the person shown does not match the text being spoken, indicating sarcasm and leading to disagreement synergy. §.§ Alternative training procedure We also explore an alternative training procedure, which involves subtracting the disagreements d_λ_a, b from the loss rather than adding them. This achieves the opposite effect of pushing modalities further away from each other if they disagree significantly. The reasoning behind this is that in some settings, such as sarcasm prediction in MUStARD, we expect modalities not just to disagree, but to store contradicting information, and disagreement between them should be encouraged. However, we find that the results obtained using this method are not as good as the ones obtained using the procedure outlined in Section <ref>.
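For reference, the hinged cosine-distance disagreement term used in the modified training objective above can be written compactly. The sketch below is a hedged PyTorch rendering, where z_a and z_b stand for the joint encoder's predictions of the masked target obtained when only one of the two modalities is left unmasked; the MERLOT Reserve internals are assumed, not reproduced.

```python
import torch
import torch.nn.functional as F

def hinged_cosine_disagreement(z_a, z_b, slack):
    """d_lambda(x, y) = max(0, (1 - cos(x, y)) - lambda), averaged over the batch.
    `slack` plays the role of lambda_{a,b}; a very large value effectively
    disables the term for that modality pair."""
    d = 1.0 - F.cosine_similarity(z_a, z_b, dim=-1)
    return torch.clamp(d - slack, min=0.0).mean()

# total_frame_loss = contrastive_frame_loss + hinged_cosine_disagreement(
#     frame_pred_from_text, frame_pred_from_audio, slack=lambda_text_audio)
# (the alternative procedure discussed above subtracts this term instead of adding it)
```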
http://arxiv.org/abs/2306.07786v2
20230613140752
A Cloud-based Machine Learning Pipeline for the Efficient Extraction of Insights from Customer Reviews
[ "Robert Lakatos", "Gergo Bogacsovics", "Balazs Harangi", "Istvan Lakatos", "Attila Tiba", "Janos Toth", "Marianna Szabo", "Andras Hajdu" ]
cs.CL
[ "cs.CL", "cs.AI" ]
The efficiency of natural language processing has improved dramatically with the advent of machine learning models, particularly neural network-based solutions. However, some tasks are still challenging, especially when considering specific domains. In this paper, we present a cloud-based system that can extract insights from customer reviews using machine learning methods integrated into a pipeline. For topic modeling, our composite model uses transformer-based neural networks designed for natural language processing, vector embedding-based keyword extraction, and clustering. The elements of our model have been integrated and further developed to better meet the requirements of efficient information extraction, topic modeling of the extracted information, and user needs. Furthermore, our system can achieve better results than existing topic modeling and keyword extraction solutions for this task. Our approach is validated and compared with other state-of-the-art methods using publicly available datasets for benchmarking. Natural language processing; machine learning; neural networks; unsupervised learning; clustering; keyphrase extraction; topic modeling § INTRODUCTION Users of social platforms, forums, and online stores generate a significant amount of textual data. One of the most useful applications of machine learning-based text processing is to find words and phrases that describe the content of these texts. In e-commerce, the knowledge contained in data such as customer reviews can be of great value and provide a tangible and measurable financial return. However, it is impossible to efficiently extract information from such large amounts of data using human labor alone. The difficulty in solving this problem effectively with automated methods is that human-generated texts often contain a lot of noise in addition to substantive details. Filtering the relevant information is further complicated by the fact that different texts can have different characteristics. For example, the document to be analyzed may contain words too common to be distinctive, or information irrelevant to the analysis objective. In fact, different parts of the text may be considered noise, depending on how we view the data and what we think is relevant.
This in turn makes it difficult to solve this task: it is not enough to find some specific information in texts, but we also have to decide what information we need based on the texts. Our aim is to extract information from textual data in the field of e-commerce. Our application is an end-to-end system that runs on a cloud-based infrastructure and can be used as a service by small and medium-sized businesses. Our system uses machine learning tools developed for natural language processing and can identify those sets of words and phrases in customer reviews that characterize their opinion. We have built a system based on machine learning solutions that effectively handles such text-processing tasks and, in some aspects, outperforms currently available approaches. To build an application that can be used in an e-commerce environment, we needed a model that could identify topics in texts and provide a way to determine which topics are relevant, given our analysis goals. Therefore, before developing our system, we investigated the N-gram model <cit.>, dependency parsing <cit.> and embedded vector space-based keyword extraction solutions, and various distance or density-based and hierarchical clustering <cit.> <cit.> techniques. In addition, we tested the LDA <cit.>, Top2Vec <cit.>, and BERTopic <cit.> complex topic modeling methods. We focused on these tools because we found them to be the most appropriate for our goals based on the available literature. It was also important to us that these methods and complex models have a stable implementation, are verifiably executable in our cloud-based environment, and can be adequately integrated. Extracting information from customer feedback and reviews will achieve the desired result if we can identify the words and phrases that describe the customers' opinions and thus help to further improve a product or service, both from a sales and technical point of view. In other words, it can help increase revenue or avoid potential loss. However, it is not enough to know the frequency of certain words and phrases; it is also necessary to provide the possibility of grouping the phrases according to different criteria. For example, different sentiment features can affect the information value of a frequent phrase. Furthermore, highly negative or positive reviews should be excluded from the analysis due to their bias. To produce good-quality results, it is important to identify the text passages that we consider noise. Certain parts of texts are generally considered noise. These include stop words <cit.>, special characters, and punctuation. Removing these words is often one of the first steps in text preprocessing. However, noise is often more difficult to identify. In the case of stop words, for example, a complicated problem is the removal of elements that express negation. While this may seem trivial, it may result in a loss of information, depending on the model's behavior. In addition, removing certain punctuation marks can distort the semantics and lead to information loss. Therefore, it is not possible to address these high-level issues with the general word-level methods. As a result, we needed an adaptive approach to deal with these issues. We run into further difficulties if we take a higher perspective and look at this problem at the sentence level. Namely, customer reviews can consist of several separate parts describing different problems. These topics are easier to capture in more complex situations at the sentence or sub-sentence level. 
Therefore, solving the aforementioned problems requires a specialized approach. Suppose we identify separate text parts, phrases, sentences, or sub-sentences with different meanings. In this case, they can be grouped into useful and useless texts according to their meaning. This is another level of abstraction with its own difficulties in information extraction. One of the critical problems with organizing text into topics is that the number of topics is unknown in advance, and this number changes as the amount of text increases. Finding these semantically distinct sets of text belongs to the topic modeling and text clustering subfields of natural language processing. This is an unsupervised machine learning problem, which is particularly difficult because it requires finding the optimal clusters. Optimal results are obtained when we can create sufficiently dissimilar clusters in the useful customer reviews about a given product. The corpus should be clustered according to the product's substantive information, not word frequency or other properties common to textual data. In our experience, among the topic modelers, distance or density-based and hierarchical clustering methods, and keyword extraction solutions we have investigated, LDA, Top2Vec, and BERTopic could meet our requirements. However, none of them offered a comprehensive solution for all problems. The clustering and keyword extraction solutions cannot be considered complex enough. The topic modeling tools did not provide us with adequate results without requiring significant modifications in their implementation to adjust their functionality. Therefore, we decided to build our own model. Of course, our solution draws on the experience gained with the models and tools listed above. However, we have taken the building blocks of these approaches and reworked them to better address the problems described. In the end-to-end system we designed, we used a semantic embedding approach for keyword extraction and applied recursive hierarchical clustering to find relevant topics. It enabled our system to perform parameterizable content-dependent clustering using cosine distance for semantic similarity measurement. With this architecture, we could achieve that our system could adapt to the specific structure of the text. As a result, we created a model integrated into a pipeline to group the extracted sets based on their semantic meaning. Furthermore, we could influence the density of the resulting sets and remove outliers. Our model can address all these issues by making the extracted words and phrases retain as much information content as possible. To validate this claim, the words and phrases extracted by our model were compared with those extracted by the topic modeling methods LDA, Top2Vec, and BERTopic. We tested their loss of information during the comparison process using a text classification task. As a result, we were able to build a solution better suited to our needs and, based on our measurements, more sophisticated and usable in terms of the resulting topic/keyword groups. Moreover, after removing the irrelevant text passages according to the topic modeling, the extracted text yielded better classification results using a regression model than the text extracted by the topic modeling methods in the literature. The rest of the paper is organized as follows. In section <ref>, we overview the recent related work. 
Our methodology, including the dataset used during development and our topic modeling pipeline details, is presented in section <ref>. Then, in section <ref>, we provide the results of our experiments regarding the performance of the keyword and phrase extraction methods as well as the performance of our model compared to the state-of-the-art. Finally, some conclusions are drawn in section <ref>. § RELATED WORK §.§ Keyphrase extraction There are several approaches to extracting keywords from natural language texts. On the one hand, this problem can be approached by deciding which words and phrases are relevant to us depending on the frequency of words. Approaches based on word frequency <cit.> can be an effective solution for a comprehensive analysis of large texts. However, this approach is less efficient for shorter texts, unless we have some general information about the frequency of words in that language. Furthermore, such approaches can be sensitive to the quality of the text cleaning, depending on the nature of the text. Dependency parsing <cit.> is another approach that can be used to extract information from text. This technique attempts to identify specific elements of a text based on the grammatical rules of the language. It can be particularly useful if we have prior knowledge about the type of words that carry the information we are looking for. In our experience, dependency parsing-based solutions tend to work better for smaller texts and sentence fragments compared to frequency-based approaches. When dealing with large amounts of text, it is often helpful to break it down into smaller parts, e.g., into sentences. This approach can improve the accuracy of information extraction. One potential drawback of using dependency parsing is that it can be sensitive to the preprocessing of the text. Semantic approaches based on text embedding <cit.> can also be used to identify keywords. Such an approach involves identifying the relationship between words and parts of the text. This can be done by vectorizing the elements of the text and their larger units, such as sentences, using embedding techniques, and measuring the similarity between them using some metric. The advantage of methods based on this approach is that they are less sensitive to the lack of text cleaning. Their disadvantage is that the quality of the vector space required for similarity measurement largely determines the model's functionality. Furthermore, unlike the previous two approaches, it currently imposes a higher computational burden and works better on smaller texts than on larger texts. However, because of the semantic approach, if the choice of text splitting rules, similarity metrics, and vector space are well chosen, better results can be obtained than with approaches based on frequency or dependency parsing. In the case of frequency-based techniques, dependency parsing, or semantic embedding, it can generally be said that, although they offer the possibility of finding the essential elements of a text, none of them provides a clear answer to the question of the relationship between the words found and the topics of the text. If we need to find the main terms of the text but also group them according to their content to answer higher-level questions, we need to use clustering or topic modeling. §.§ Clustering The effectiveness of text clustering is determined by what can be done to transform the text into an embedded vector space that best represents the documents regarding the target task. 
There are several ways to vectorize a text. There are frequency-based techniques, such as one-hot encoding or count vectorization <cit.>. However, there are solutions using more complex statistical methods, such as term frequency <cit.> and inverse document frequency-based <cit.> (TF-IDF) techniques or the transformation mechanism of LDA <cit.>. In addition, we can use semantic embedding like GloVe <cit.> as a first pioneer, which generates vectors for each word on a statistical basis. However, statistical models were soon replaced by neural networks-based solutions with the rise of word2vec <cit.>, and fastText <cit.>. With the advent of transformers <cit.>, neural networks with special architectures have emerged that can create semantically superior embedded vector spaces. Several clustering techniques are available to group the entities in the embedded vector space. In our work, we investigated the k-means <cit.>, agglomerative <cit.>, DBSCAN <cit.>, and HDBSCAN <cit.> clustering. The source of our problem was that we found that embedded textual entities do not usually form dense regions in the vector space, even for terms with the same meaning. For this reason, either centroid, hierarchical, or density-based methods, in our experience, would have difficulty handling vector spaces considered for text and word embeddings. Another problem of working with text-based embedded vector spaces is that, in order to achieve good semantic quality, textual data is currently converted into high-dimensional vectors, which is resource intensive to develop. Although the resource demand can be reduced by various dimension reduction techniques such as PCA <cit.>, UMAP <cit.> or T-SNE <cit.>, it results in a loss of information, which in turn can lead to distortions due to the particular structure of the embedded vector space generated from the text. §.§ LDA Latent Dirichlet Allocation (LDA) is a popular model for extracting text topics. It has the advantage of having efficient and reliable implementations, making it easy to integrate into machine learning systems. To adapt it to a specific context, Batch <cit.> or Online Variational Bayes <cit.> methods have become popular. To measure the quality of the topics generated by LDA and thus find the right number of topics, the perplexity <cit.> values of each topic are computed. For dictionary construction, the bag-of-words <cit.> technique is a common choice, with lemmatization and stop word removal. Considering our system, a drawback of this model is that it is a probabilistic statistical method that ignores word order and semantics and is less sensitive to finding smaller topics. It means that it finds fewer topics and cannot isolate finer details. In practice, when using LDA, we tend to focus on e.g., the top 10 words, which narrows down the list of key terms, especially when there are few topics. Although these properties of the model are parameterizable, our experience shows that the more words we select from a given topic, the more the quality of the words or phrases associated with that topic deteriorates. This leads to greater overlap between the words of the topics, which requires further corrections. §.§ Top2Vec Unlike LDA, Top2Vec uses semantic embedding and does not require stop word removal or lemmatization. Such pre-processing steps can even be detrimental to the model's performance, as they can distort the semantic meaning of the text in the documents. 
The quality of semantic embedding-based methods is determined by the embedded vector space they use. This makes Top2Vec sensitive to the vector space of the model it applies to the target corpus. It has the advantage of offering a built-in method for determining the number of topics and is therefore able to search automatically for topics that it considers related. In our tests, its main disadvantage was that, like LDA, it proved less capable of finding smaller topics. Although in our experience there have been cases where Top2Vec performed better than LDA, its performance is not consistently better than that of LDA. In addition, the available implementation was not always stable in our development environment. §.§ BERTopic BERTopic is also a topic modeling technique that uses semantic embedding. It forms dense clusters using c-TF-IDF <cit.> for easier clustering. The model is based on the BERT transformer-based neural network <cit.> and uses the internal representation of BERT to generate the embedded vector space. Like Top2Vec, it does not require stop word removal or lemmatization. As with other techniques based on semantic embedding, the model's effectiveness is highly dependent on the quality of the embedded vector space it uses; the use of BERT can certainly be considered an effective tool to achieve this goal. Based on our research, the main drawback lies in the restrictions hard-coded into the model's implementation, which was an unnecessary inconvenience despite the open-source release. The number of words that can be extracted from the topics found is limited by the developers to a total of 30 words, which made targeted information extraction impaired and inflexible for us. In fact, the low number of retrievable words causes a phenomenon similar to the one observed for LDA, where we only focus on the top 10 words per topic. § METHODOLOGY We now present the background of our methodology, including the dataset used during development and the details of our topic modeling pipeline. §.§ Dataset We used data from the Amazon Review (2018) <cit.> dataset to evaluate our system. The Amazon Review dataset contains 233.1 million Amazon product reviews (review text, ratings, and auxiliary votes), along with corresponding product metadata and links. For our purposes, we selected the electronics subset of this dataset, which contains 7,824,482 reviews, and created a dataset (hereafter Electronics_50K) as follows. First, we tokenized the review text using the vocabulary and tokenization method of the BERT neural network architecture. The vocabulary used for BERT tokenization is built with the WordPiece <cit.> subword tokenization algorithm. After tokenization, we kept only those reviews with lengths between 16 and 128 tokens, since BERT was used for sentence and word embedding. The maximum input size of BERT is 512 tokens, but we fixed the input size at 128 to fit our development environment for efficiency reasons. We then performed uniform sampling to obtain 10,000 reviews for each rating (1 to 5, without fractions). This resulted in a dataset of 50,000 reviews covering 8281, 8267, 8388, 8219, and 8063 different products for ratings 1 to 5, respectively. For a quick impression of what the dataset looks like, see Figures <ref> and <ref> for its 2D visualizations obtained by PCA and TSNE, respectively.
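For concreteness, the filtering and sampling procedure described above can be sketched as follows. This is a minimal illustration rather than our exact implementation: the file name, the column names (reviewText, overall), and the tokenizer checkpoint are assumptions.

import pandas as pd
from transformers import BertTokenizerFast

# Hypothetical reconstruction of the Electronics_50K sampling step.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")  # assumed checkpoint

def token_length(text: str) -> int:
    # Number of WordPiece tokens (including [CLS] and [SEP]).
    return len(tokenizer(text, truncation=True, max_length=512)["input_ids"])

reviews = pd.read_json("Electronics.json.gz", lines=True)  # raw Amazon dump (assumed path)
reviews["n_tokens"] = reviews["reviewText"].astype(str).map(token_length)

# Keep only reviews whose BERT-tokenized length lies between 16 and 128 tokens.
filtered = reviews[(reviews["n_tokens"] >= 16) & (reviews["n_tokens"] <= 128)]

# Uniform sampling: 10,000 reviews per rating (1 to 5).
electronics_50k = (
    filtered.groupby("overall", group_keys=False)
            .sample(n=10_000, random_state=42)
)
electronics_50k.to_csv("electronics_50k.csv", index=False)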
§.§ Keyphrase extraction §.§.§ N-gram-based keyphrase extraction An established method for extracting the semantic meaning of texts based on the N-gram approach is to examine keywords and the text fragments containing them. In the case of N-grams, "N" denotes the length of the text fragments (e.g., 2-grams, 3-grams); see also Figure <ref>. This method captures the narrow, sentence-level context of our keywords, potentially allowing us to examine reviews consisting of only a few words. The method can be further improved by stop word removal, where the most commonly used words of a given language are omitted from the analysis so that we focus on words with stronger meanings. The following steps summarise the extraction of keywords using classical language processing tools. First, the text is pre-processed and cleaned, and stop words are removed using an appropriate dictionary. This is followed by extracting noun phrases, from which a dictionary can be defined. The last step is the extraction of keywords and keyphrases, taking the noun phrases into account. This is the contextual extraction of 1, 2, or 3 words, depending on the position of the noun phrases in the sentence. §.§.§ Dependency parsing based keyphrase extraction Once we had broken down the reviews into sentences and identified their respective keywords, we began looking for the contexts of these keywords. We achieved this by applying dependency parsing, an example of which is shown in Figure <ref>. During this process, we parsed the whole sentence to obtain its grammatical structure and identify "head" words and the words that are grammatically connected to them. After this, we identified the keyword kw and its "head" word h, then looked for words connected to h while keeping the original order of the words in the sentence. We looked for words connected to h instead of kw because most of the time kw was an adjective, adverb, or some other word expressing emotions, sentiments, or qualities, with no particular meaning by itself. So instead of looking for words connected to this term, we focused on the word that is grammatically connected to kw (e.g., a noun). During the search procedure, we excluded common words, such as prepositional modifiers and words representing coordinating conjunctions. This way, the resulting phrase contained the most important parts (nouns, adjectives, etc.) while still being relatively short. Nevertheless, this procedure sometimes resulted in very long phrases that were not much shorter than the original sentence. To decrease their length, we integrated a thresholding mechanism into the search procedure to decide which keywords to keep and which to discard. This thresholding was based solely on the sentiment of the keyword: positive keywords were kept if their sentiment score <cit.> was above 0.89, while negative and neutral ones were kept if their score was above 0.79. The exact threshold levels were calculated on our training set and optimized with respect to the average length of the resulting contexts and their interpretability. Optimization was a step-by-step procedure with expert reinforcement: starting from a still meaningful lower value of 0.49 and using a step size of 0.1, we produced all possible outputs for a randomly sampled 100-item validation set, based on which the optimal parameters were defined relying on evaluation by human experts. This step-by-step optimization was performed for both positive and negative keywords.
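The head-word search described above can be sketched with spaCy as follows. This is a simplified illustration, not our exact implementation: the exclusion of prepositional modifiers (prep) and coordinating conjunctions (cc) follows the text, while the remaining excluded labels, the model name, and the helper function are illustrative assumptions.

import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model; must be downloaded beforehand
EXCLUDED_DEPS = {"prep", "cc", "punct"}  # prepositional modifiers, coordinating conjunctions, punctuation

def keyword_context(sentence: str, keyword: str) -> str:
    """Build the phrase around the grammatical head of the given keyword."""
    doc = nlp(sentence)
    kw_tokens = [t for t in doc if t.text.lower() == keyword.lower()]
    if not kw_tokens:
        return ""
    kw = kw_tokens[0]
    head = kw.head  # the word that kw is grammatically attached to
    # Keep the head, the keyword, and the words attached to the head,
    # preserving the original word order of the sentence.
    selected = {head.i, kw.i}
    selected |= {child.i for child in head.children if child.dep_ not in EXCLUDED_DEPS}
    return " ".join(doc[i].text for i in sorted(selected))

print(keyword_context("The battery life of this phone is surprisingly good.", "good"))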
§.§.§ Cosine similarity based keyphrase extraction It is a complex problem to extract the words and phrases of a text that contain the most relevant information with respect to a given analysis goal. This problem has several approaches; each may perform better for different text types. Next, we describe the methods tested on the review dataset detailed above. One commonly used metric for measuring the similarity of two words or even phrases is the cosine similarity. Mathematically, it is the cosine of the angle between two vectors x=(x_1,…,x_n), y=(y_1,…,y_n)∈ℝ^n of the embedding vector space: cos(x,y) = x·y/(‖x‖ ‖y‖) = ∑_i=1^n x_i y_i / (√(∑_i=1^n x_i^2) √(∑_i=1^n y_i^2)). The smaller the angle, the higher the cosine similarity. The similarity value lies between -1 and 1: full similarity (parallel vectors) is described by 1, full dissimilarity by -1, and neutral behavior (orthogonality) by 0. Its main advantage is that it measures the similarity of documents irrespective of their size. In contrast, counting the number of common words between two documents can give a misleading result: if a document grows in size, the number of common words tends to increase even if the two documents do not share the same topic. KeyBERT <cit.> is a simple-to-use method for keyword extraction based on BERT embeddings. An example of a KeyBERT output is shown in Figure <ref>. §.§ Pre-trained models §.§.§ BERT+R architecture The dataset labels are formed by user ratings, which means that users rated its elements on a scale of 1 to 5. However, we wanted to build an unsupervised machine learning system that could separate multiple details of the dataset even if it only had a simpler (e.g., binary) rating scale. We designed our system so that, in theory, any other method can be incorporated into the architecture to improve the detection capability. In practice, however, we built on our own needs and created a sentiment analysis component at this level. We believe a more fine-grained sentiment-based rating scale allows for a more efficient sentiment-based evaluation of texts. We therefore created a model that can predict the sentiment of a sentence containing a statement on a scale of 1 to 5 with high accuracy. For this, we used a pre-trained BERT network and modified its architecture by adding a regression layer to the output instead of a classification layer. We then fine-tuned this BERT+R model on our dataset. The resulting model, depicted in Figure <ref>, allows us to predict sentiment values with finer granularity. Furthermore, our system has become more sensitive to the differences between negated and affirmative sentences. §.§.§ Sentence BERT Sentence BERT <cit.> is a special transformer-based architecture designed for sentence embedding. The network is designed to assign a vector to each sentence such that the similarity of sentences is measured by the cosine similarity of their vectors. Basically, Sentence-BERT (SBERT) is a modification of the pre-trained BERT network that uses Siamese and triplet network structures to derive semantically meaningful sentence embeddings. §.§ The modules of the pipeline For later integration into a commercial application, the information extraction process is implemented as a single pipeline. This pipeline was designed to ensure that the applied model can evolve with the growth and changes of the dataset. In addition, it needs to be stable and easy to maintain so that it can be made available as a service to other applications.
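Before describing the individual modules, the BERT+R architecture introduced above can be sketched as follows. This is a minimal sketch under stated assumptions (checkpoint name, use of the pooled [CLS] representation, and mean-squared-error loss are our choices for illustration), not the exact implementation.

import torch
import torch.nn as nn
from transformers import BertModel

class BertRegressor(nn.Module):
    """Pre-trained BERT encoder with a single regression head (BERT+R)."""
    def __init__(self, checkpoint: str = "bert-base-uncased"):  # assumed checkpoint
        super().__init__()
        self.bert = BertModel.from_pretrained(checkpoint)
        self.head = nn.Linear(self.bert.config.hidden_size, 1)  # regression layer instead of a classifier

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.pooler_output              # sentence-level representation
        return self.head(pooled).squeeze(-1)    # predicted sentiment on the 1-5 scale

# Fine-tuning would use a plain regression loss against the user ratings, e.g.:
model = BertRegressor()
loss_fn = nn.MSELoss()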
§.§.§ Text cleaning The purpose of text cleaning is to remove characters and text elements from the raw data that do not contain relevant information and may distort the results obtained by the model. This is the first level of our pipeline, where these text transformations are performed to ensure that the text meets the tokenization requirements of the model. As described in more detail in section <ref>, the tokenization must work properly with the specialized BERT+R neural network. Text cleaning removes punctuation and unnecessary spaces that we think might cause noise. In addition, the entire text is lower-cased, and language-specific abbreviations are removed to meet the tokenization requirements of the BERT+R model used. §.§.§ Splitting the text into sentences A single review may contain multiple relevant topics that express the opinion of the customer. We therefore needed a way to separate the different topics. One sentence or sub-sentence usually contains one or more statements about the same topic; less frequently, users make several statements within a sentence without using punctuation. Our implementation at this level divides the text along the sentence- and sub-sentence-ending punctuation used in English. Thus, the information carried by each statement is extracted sentence by sentence. §.§.§ Generating regression-based sentiment values After decomposing the user reviews into sentences, we predicted a sentiment score for each sentence using our BERT+R model. As mentioned earlier, this pipeline uses a modular approach, which means that the module implementing a step can be replaced by any other solution if it improves the performance of the pipeline, i.e., in this case, if it leads to better sentiment scores. §.§.§ Sentiment-based classification Based on the sentiment scores predicted by the regression model, each sentence was classified with one of 3 labels (negative, neutral, positive). Sentences with sentiment scores less than 2 were classified as negative, and sentences with sentiment scores greater than 4 were classified as positive. Consequently, sentences with a sentiment score between 2 and 4 were classified as neutral. Our research focused mainly on negative statements, so we considered these 3 groups sufficient for further investigation. Of course, the regression values we generate also allow for a finer resolution; this can be seen as a parameter of the pipeline that can be chosen depending on the application. §.§ Keyphrase extraction We apply semantic embedding to extract the keywords. For this, the text is split into sentences, and the words belonging to each sentence, together with their bigram and trigram combinations, are embedded separately in a 768-dimensional vector space using SBERT. We then measure the cosine distance of the given words, bigrams, and trigrams from the sentence that contains them. In the next step, we select from each sentence the top 3 expressions whose semantic meaning is closest to that of the sentence. This gives us a triplet of items for each sentence, which may contain words and phrases or their negation auxiliaries. Unigrams, bigrams, and trigrams were used because these are the N-gram combinations that still make sense in natural languages; by setting a maximum value for N in this way, we eliminate redundant combinations.
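A minimal sketch of this per-sentence keyphrase selection step is given below. It is illustrative only: the specific SBERT checkpoint (chosen here because it produces 768-dimensional embeddings), the whitespace tokenization, and the helper names are assumptions rather than our exact implementation.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")  # assumed 768-dimensional SBERT model

def ngrams(words, n):
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def top_keyphrases(sentence: str, k: int = 3):
    words = sentence.split()
    candidates = words + ngrams(words, 2) + ngrams(words, 3)
    cand_emb = model.encode(candidates, convert_to_tensor=True)
    sent_emb = model.encode(sentence, convert_to_tensor=True)
    scores = util.cos_sim(sent_emb, cand_emb)[0]   # cosine similarity to the whole sentence
    best = scores.argsort(descending=True)[:k]     # top-k semantically closest expressions
    return [candidates[int(i)] for i in best]

print(top_keyphrases("the battery drains far too quickly and the charger feels cheap"))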
§.§ Keyphrase embedding After cleaning, classifying, and extracting key terms from the text, to group words and phrases according to their meaning, we need to vectorize them. In other words, we need to represent them so that words and phrases describing similar topics can be identified. To do this, we used the text embedding technique SBERT. It generated a 768-dimensional embedded vector from each extracted key expression. In the resulting embedded vector space, we could apply our own clustering approach for topic search. §.§ Hierarchical and density-based recursive clustering Embedded vectors were used to group terms with similar meanings. A special property of embedded vector spaces generated from text data is that they are difficult to cluster, as the distances between adjacent vectors are usually similar. However, slightly dense clusters appear in the vector space for similar contents. To extract these slightly dense clusters, we need to remove outliers. To achieve this, we used our own solution. This consists of using hierarchical clustering with recursive execution, as shown in Figure <ref>. Based on our tests, the elements of slightly dense clusters describe the same topics. Our experience shows that the cosine similarity between these clusters is above 0.7. These slightly denser clusters could be extracted by recursively applying hierarchical clustering. This method re-clusters the clusters with densities below 0.7 in the next clustering cycle. This is repeated until the minimum number of elements (in our case, chosen as 5) or the density value 0.7 is reached. Of course, these hyperparameters of the pipeline can be freely adjusted to improve the generalization ability of our system concerning the nature of the text. The resulting clusters have sufficiently good descriptive properties. §.§ Computational and development environment During development and research, we worked in a cloud environment. For data storage, text data was stored in JSON and CSV files in a Hadoop-based storage system <cit.>. We also used two different clusters for computations for cost efficiency. We worked with a CPU-optimized virtual environment for operations requiring more CPU computations. This environment was configured with a 16-core AMD EPYC 7452 processor, 128 GB RAM and 400 GB of storage. For operations with transformer-based neural networks, we used a GPU-optimized virtual environment that was configured with a 12-core Intel Xeon E5-2690 processor, 224 GB RAM, 1474 GB (SSD) cache, and two NVIDIA Tesla P100 (32GB) GPUs. § EXPERIMENTAL RESULTS In this section, we present the methods and results of our experiments. First, we present our experimental results that informed our choice of keyword and phrase extraction methods for our pipeline. Then, in a supervised classification problem, we compare the efficiency of our complex model with that of LDA, Top2Vec, and BERTopic topic modeling solutions. §.§ Evaluation of keyword extraction solutions Three solutions were compared to find the best keyword extraction method. 7 independent human experts evaluated their results. To be able to compare and evaluate each model by independent experts, we needed to produce a dataset that human experts could process. For our experiments, we used the Cell Phones and Accessories (CPA) subset of the Amazon Review dataset, from which we randomly sampled 100 sentences (see Supplementary Material 1). We randomly selected 100,000 products from this subset and chose the 4 products with the most reviews. 
This step was introduced to ensure that there would be a sufficient number of reviews for the same product, including a sufficient number of explicitly positive and negative samples. The 4 products with the most reviews already provided a sufficient sample for further narrowing. In the next step, we removed the neutral reviews with a rating of 3. This step was introduced because our experience has shown that extracting information from extreme emotional expressions is more difficult. Words with a strong emotional charge and negative sentences can obscure the information, making it difficult to extract. The remaining reviews have been split into sentences, and grammatical abbreviations have been corrected. We then removed sentences containing fewer than 6 or more than 14 words. In our experience, sentences shorter than 6 words generally do not contain meaningful information, and the 14-word upper limit fits well with the length of an average English sentence. From the remaining sentences, we randomly selected 100. Finally, N-gram, dependency parsing, and semantic embedding-based keyword extraction methods were applied to each of the 100 sentences. The keywords generated by the tested models were evaluated by independent experts, who voted for each sentence which method best extracted the meaning of that sentence. The experts could vote for multiple models or choose the "none of them" category. The evaluation's result (see Table <ref>) showed that the embedding-based technique dominated in 61% of the cases. Therefore we implemented this method into our framework. In Figure <ref>, the different models' correlation can also be checked regarding this task. §.§ Topic modeling evaluation §.§.§ Approach To assess our own and referenced models' information retention capabilities, we trained unsupervised keyword retrieval on a supervised text classification task. Evaluating unsupervised learning models often involves relating them to a supervised classification task, as seen in the case of LDA where binary classification is used to measure its effectiveness. We hypothesized that if the models can extract meaningful keywords, the accuracy of an independent classification model using only those extracted keywords should be correlated with the information value of the words. This assumption is supported by the observation that text cleaning, which removes noise, improves classification accuracy. However, this improvement is only seen when elements considered noise are removed. Consequently, if words with information value for classification are eliminated, the accuracy of classification models will decrease. While keyword extraction does not aim to enhance classification accuracy, it is plausible that the quality of the extracted words can impact classification outcomes. If the classification accuracy significantly drops compared to using all text elements, it suggests that the topics and their associated words might not have been correctly extracted. Conversely, when relevant words are extracted, classification accuracy should ideally increase. It is important to note that a keyword extraction system will not always improve classification accuracy in every case. Nevertheless, this evaluation method allows for comparing the relevance of words extracted by different models for information retention. §.§.§ Dataset For the evaluation, we used the 20newsgroups and Amazon Reviews databases. 20newsgroups is a widely used benchmarking dataset that provides training and test subsets. 
Furthermore, software packages allow quick and easy use of this dataset without pre-processing steps. From the 20newsgroups dataset, binary classification tasks were created as follows. The dataset contains 20 classes in 6 categories, so a single class from each category was chosen to obtain sufficiently different elements for binary classification. Finally, from the 6 selected classes (comp.graphics, rec.autos, sci.med, misc.forsale, talk.politics.guns, soc.religion.christian), we created all possible binary combinations as classification tasks, comparing each pair of classes. For an impression of this dataset, see Figures <ref> and <ref> for its 2D visualizations obtained by PCA and TSNE, respectively. We also created several classification tasks from the Amazon Reviews dataset. Due to its relatively large size compared to the available resources, a reduced set was used in this case, as in the development phase. Here the CPA subset was used, as in the evaluation of the keyword extraction models. From the dataset, we selected products with a uniform set of 100 reviews for each class (ratings from 1 to 5). Thus, we created a classification task with 6000 train and 1500 test records, which we used for a 5-label multi-class classification. §.§.§ Setup Because of the different approaches of the models, each model was set up to maximize its performance. BERTopic has limited configuration options, so we maximized the keyword extraction capability of the model; this means that the maximum number of words that can be defined for a topic is 30. For Top2Vec, the mincount parameter of the model is responsible for the number of words to extract, so we tried values from 10 to 500, where 500 was the largest value for which the model remained stable in our development environment. Automatic determination of the number of topics is not built into LDA by default, so a more complex optimization was performed for this model. In LDA, a separate parameter can be used to set the number of topics to search for; this parameter was varied between 2 and 50. Within this range, we tried both online and batch optimization techniques to find the best number of topics. The optimal topic number for LDA is always the one with the best average perplexity among the selected topics. In the case of our own model, we left the parameters at the default values established during development. This means that each of the topics selected by our model had more than 5 elements, and the cosine similarity value between the elements of each topic was greater than 0.7. Sets outside this range were treated as outliers and irrelevant topics. The topic words extracted by each model were used as a dictionary, and for each evaluation only these words were retained from the datasets for the classification tasks. The texts filtered by the model dictionaries were vectorized using one-hot, count vectorization, and TF-IDF techniques. We then used a simple regression model for each evaluation. The advantage of the regression model is that it is simple, and its operation can be clearly explained using its weights. In each case, the models were trained on the training dataset filtered by the dictionaries generated by the topic models, and their accuracy was measured on the test set.
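The evaluation protocol of this subsection can be summarized by the following sketch (scikit-learn based). The use of logistic regression as the simple, weight-interpretable model, the TF-IDF variant of the vectorization, and all variable names are assumptions made for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def evaluate_topic_vocabulary(topic_words, train_texts, y_train, test_texts, y_test):
    """Keep only the words extracted by a topic model and measure how much
    class-relevant information that vocabulary retains."""
    vectorizer = TfidfVectorizer(vocabulary=sorted(set(topic_words)))
    X_train = vectorizer.fit_transform(train_texts)
    X_test = vectorizer.transform(test_texts)

    clf = LogisticRegression(max_iter=1000)  # simple linear model with interpretable weights
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))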
§.§.§ Results Since we tested several parameters for the LDA and Top2Vec models, we considered their average performances. On the binary classification tasks, we measured an average accuracy of 89.82% for our own model, 82.75% for LDA with batch optimization, 82.38% for LDA with online optimization, 76.21% for Top2Vec, and 79.46% for BERTopic (see Table <ref>). In terms of the number of topics found, our model found 58 topics on average, LDA with batch optimization 7 topics, LDA with online optimization 7 topics, Top2Vec 3 topics, and BERTopic 7 topics, as also enclosed in Table <ref>. The detailed results for each binary classification task and topic modeling approach are enclosed in Supplementary Material 2. For the multi-class classification task, our model provided an average accuracy of 45.24%, LDA with batch optimization 35.73%, LDA with online optimization 35.48%, Top2Vec 39.41%, and BERTopic 42.8%; see Table <ref>. In terms of the number of topics found, our model found 64 topics on average, LDA with batch optimization 2 topics, LDA with online optimization 2 topics, Top2Vec 17 topics, and BERTopic 94 topics; see Table <ref>. § CONCLUSIONS This paper outlines the difficulties of applying information retrieval to texts when market considerations have to be taken into account. In doing so, we have pointed out that the popular methods from the natural language processing literature that we studied and tested cannot comprehensively address these considerations. Indeed, in order to solve a target task effectively, we need to focus on aspects for which the solutions we tested did not work satisfactorily. Our experience, therefore, suggests that there is currently no general solution to the problems of information retrieval and topic modeling. In our development work, we have reviewed the various possible solutions and combined and refined them to meet our needs. Through the necessary research, we were able to put these elements together in a uniform pipeline. As a result, we have created a complex pipeline model that performs the minimum necessary data cleansing steps, finds key terms, and organizes them into coherent topics. The process is consistent and produces results that prove more effective than the models available in the literature. The complex pipeline we propose is a cloud-based computational system that, based on our measurements, has good information retention and uses semantic embedding to find key terms and the topics around which they cluster. This provides more relevant and usable results than the other solutions. In addition, the steps of the overall process have been designed and implemented in a well-separated way, so that additional elements needed for other target tasks can easily be inserted between the existing steps. § ACKNOWLEDGEMENT This work is partly funded by the projects GINOP-2.3.2-15-2016-00005 supported by the European Union, co-financed by the European Social Fund, and by the project TKP2021-NKTA-34, implemented with the support provided by the National Research, Development, and Innovation Fund of Hungary under the TKP2021-NKTA funding scheme. Further funding was provided by the Project no. KDP-2021 that has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the C1774095 funding scheme.
§ SUPPLEMENTARY MATERIAL 1 RESULT OF THE HUMAN EVALUATION OF THE PERFORMANCE OF DIFFERENT KEYWORD EXTRACTION APPROACHES Here, we summarize all the human expert evaluations of the performance of the different keyword extraction approaches we studied. The Amazon Review (2018) <cit.> dataset was used for this evaluation. Next, we present the extracted keywords and the corresponding votes for 100 reviews from this dataset. § SUPPLEMENTARY MATERIAL 2 AVERAGE NUMBER OF TOPICS AND VOCABULARY SIZES FOUND BY THE DIFFERENT MODELS AND THEIR CLASSIFICATION RESULTS Here we summarize all the models and their outputs in terms of the number of topics and vocabulary sizes they found for the 20newsgroups <cit.> dataset. We also present their results for a classification task using different embedding strategies. The results for each model are presented in separate tables below. The columns Topic number and Vocabulary size contain the averages of the different runs for each model. It can be seen that our approach outperforms the others with respect to the classification problem investigated.
http://arxiv.org/abs/2306.04413v1
20230607131357
Global convergence towards pushed travelling fronts for parabolic gradient systems
[ "Ramon Oliver-Bonafoux", "Emmanuel Risler" ]
math.AP
[ "math.AP", "35B38, 35B40, 35K57" ]
2020 Mathematics Subject Classification: 35B38, 35B40, 35K57. Key words and phrases: parabolic gradient system, maximal linear invasion speed (linear spreading speed), maximal nonlinear invasion speed, variational speed, pushed travelling front, Poincaré inequality, global convergence. This article addresses the issue of global convergence towards pushed travelling fronts for solutions of parabolic systems of the form u_t = - ∇ V(u) + u_xx , where the potential V is coercive at infinity. It is proved that, if an initial condition x↦ u(x,t=0) approaches, rapidly enough, a critical point e of V to the right end of space, and if, for some speed c_0 greater than the linear spreading speed associated with e, the energy of this initial condition in a frame travelling at the speed c_0 is negative — with symbols, ∫_ℝ e^c_0 x(1/2 u_x(x,0)^2 + V(u(x,0))- V(e)) dx < 0 , then the corresponding solution invades e at a speed c greater than c_0, and approaches, around the leading edge and as time goes to +∞, profiles of pushed fronts (in most cases a single one) travelling at the speed c. A necessary and sufficient condition for the existence of pushed fronts invading a critical point at a speed greater than its linear spreading speed follows as a corollary. In the absence of a maximum principle, the arguments are purely variational. The key ingredient is a Poincaré inequality showing that, in frames travelling at speeds exceeding the linear spreading speed, the variational landscape does not differ much from the case where the invaded equilibrium e is stable. The proof is notably inspired by ideas and techniques introduced by Th. Gallay and R. Joly, and subsequently used by C.
Luo, in the setting of nonlinear damped wave equations. empty empty plain § INTRODUCTION AND STATEMENTS OF THE MAIN RESULTS §.§ System, semi-flow Let us consider the nonlinear parabolic system u_t=-∇ V (u) + u_xx , where the time variable t and the space variable x are real, the spatial domain is the whole real line, the function (x,t)↦ u(x,t) takes its values in ^d with d a positive integer, and the nonlinearity is the gradient of a potential function V:^d→, of class ^2, and coercive at infinity in the following sense: H_coerclim_R→+∞ inf_u≥ R u·∇ V(u)/u^2 >0 . The uniformly local Sobolev space (see <ref> for references) provides a natural framework for the study of system <ref> on the whole real line. This system defines a local semi-flow in this space, and according the assumption <ref>, this semi-flow is actually global (<ref> below). Let us denote by (S_t)_t≥0 this semi-flow. In the following, a solution of system <ref> will refer to a function ×[0,+∞)→^d , (x,t)↦ u(x,t) , such that the function u_0:x↦ u(x,t=0) (initial condition) is in and u(·,t) equals (S_t u_0)(·) for every nonnegative time t. §.§ Invaded critical point According to assumption <ref>, we may consider the quantity defined as = min_u∈^dV(u) . Let us consider a point e of ^d, and let us assume that: H_crit, e∇ V(e)=0 and V(e)=0 and <0 . In other words, e is assumed to be a critical point which is not a global minimum of V, and V is normalized so that it takes the value 0 at e. The aim of this paper is mainly to address the case where e is not a nondegenerate minimum point of V; that is, if D^2V(u) denotes the Hessian matrix of V at some point u of ^d and σ(D^2V(u)) denotes the spectrum of this Hessian matrix, the case where min(σ(D^2V(e)))≤ 0 . Indeed, if e is a local minimum point of V, global convergence towards travelling fronts invading e can be addressed by differing techniques leading to stronger results <cit.>. As a consequence, the reader may assume that, in addition to assumption <ref>, inequality <ref> holds, even if this inequality will nowhere be formally required. §.§ Travelling waves/fronts §.§.§ Definition Let c be a positive real quantity. A wave travelling at the speed c for system <ref> is a function of the form (x,t)↦ϕ(x-ct), where ϕ is a solution (with values in ^d) of the second order differential system ϕ”=-cϕ'+∇ V(ϕ) , which is equivalent to the first order differential system [ ϕ'; ψ' ] = [ ψ; - c ψ + ∇ V(ϕ) ] . The function ϕ is called the profile of the travelling wave. Notice that the solutions of systems <ref> may blow up in finite time, so that a travelling wave is, formally, not necessarily a solution of system <ref> as defined in <ref>; its profile ϕ is a function (T_-,T_+)→^d, where (T_-,T_+) is the (maximal) time of existence of ϕ as a solution of systems <ref> (thus T_- is in {-∞}∪ and T_+ is in ∪{+∞}, and the travelling wave (x,t)↦ϕ(x-ct) itself is defined accordingly). Let (V) denote the set of critical points of V; with symbols, (V) = {u∈^d: ∇ V(u) = 0} . Let us call travelling wave invading e at the speed c a wave travelling at the speed c such that its profile ϕ is nonconstant, defined on a maximal interval of the form (T_-,+∞), and satisfies: ϕ(ξ) e ; and let us call front this wave if ϕ is defined up to -∞ and if there exists a negative quantity V_-∞ such that the following limit holds: (ϕ(ξ),(V)∩ V^-1({V_-∞})) 0 . Finally, let us call travelling wave (front) invading e a travelling wave (front) invading e at some positive speed. 
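As a side observation, added here for illustration and not stated at this point in the text, the profile equation <ref> has a standard mechanical interpretation which also sheds light on the remark below:
\[
\phi'' + c\,\phi' = \nabla V(\phi),
\qquad
\frac{d}{d\xi}\Bigl(\tfrac12\,|\phi'(\xi)|^2 - V\bigl(\phi(\xi)\bigr)\Bigr)
= \phi'\cdot\phi'' - \nabla V(\phi)\cdot\phi'
= -c\,|\phi'(\xi)|^2 \le 0 ,
\]
so that ξ↦ϕ(ξ) may be viewed as the trajectory of a unit-mass particle moving in the inverted potential -V with friction coefficient c, along which the mechanical energy 1/2|ϕ'|^2 - V(ϕ) is non-increasing. This is one way to see why a profile which is defined up to -∞ and bounded must approach the set of critical points of V at the left end of space.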
* As stated in conclusion <ref> of <ref>, in order a travelling wave (invading e at the speed c) to be a travelling front in the sense of <ref>, it is sufficient that its profile ϕ(·) be defined up to -∞ and bounded on . * Voluntarily, this definition a travelling front slightly differs from the usual one, since it does not require that ϕ(ξ) approach a single critical point of V as ξ goes to -∞. However, in most cases (at least if the critical points of V are isolated — which is true for a generic potential V, or if V is analytic, see for instance <cit.>), then the profile of a travelling front in the sense of <ref> does approach a single critical point of V to the left end of space. §.§.§ Linearization at the invaded critical point The linearization of the (equivalent) differential systems <ref> at the point (e,0_^d) reads: ϕ” = -cϕ' + D^2V(e)·ϕ and d/dξ[ ϕ; ψ ] = [ 0 I_d; D^2V(e) -c I_d ]·[ ϕ; ψ ] . For every real quantity μ, let us consider the quantities λ_c,±(μ) defined as λ_c,±(μ) = { -c/2±√(c^2/4+μ) if -c^2/4≤μ , -c/2± i√(-c^2/4-μ) if μ≤ -c^2/4 , . see <ref>; these two quantities are the roots of the polynomial equation λ^2 = -cλ +μ. Let μ_1,…,μ_d denote the (real) eigenvalues of D^2V(e), counted with algebraic multiplicity, and ordered, so that μ_1≤…≤μ_d . These quantities will also be referred to as the curvatures of the potential V at the critical point e. No assumption is made concerning their signs (see the remark following inequality <ref>). According to this notation, the eigenvalues of the linearized differential systems <ref> (counted with algebraic multiplicity) are the 2d quantities: λ_c,-(μ_1), λ_c,+(μ_1), …, λ_c,-(μ_d), λ_c,+(μ_d) , see <ref>. Let (u_1,…,u_d) denote an orthonormal basis of ^d such that, for every j in {1,…,d}, u_j is an eigenvector of D^2V(e) for the eigenvalue μ_j. Then, for every j in {1,…,d}, the vectors [ u_j; λ_c,-(μ_j) u_j ] and [ u_j; λ_c,+(μ_j) u_j ] are eigenvectors of the linearized systems <ref>, for the eigenvalues λ_c,-(μ_j) and λ_c,+(μ_j), respectively. §.§.§ Maximal linear invasion speed For every real quantity (curvature) μ, let us consider the (nonnegative) quantity (μ) defined as (μ) = { 2√(-μ) = 2√(μ) if μ<0 , 0 if μ≥ 0 , . see <ref>, and, for every j in {1,…,d}, let us denote by j the quantity (μ_j). Let us call maximal linear invasion speed (associated with the critical point e) the (nonnegative) quantity defined as = max(1,…,d) , or equivalently = 1 . Accordingly, the quantities 1,…,d may be called the linear invasion speeds associated with the eigenvalues μ_1,…,μ_d of D^2V(e), but only the maximal linear invasion speed will play a significant role in the following. This quantity is usually called linear spreading speed in the literature, see for instance <cit.>, and is referred as such in the abstract of this article. In the following only the denomination “maximal linear invasion speed” will be used to denote this quantity: the adjective “maximal” is relevant for systems, and the terms “invasion/invaded/invading”, which fit with the phenomenon of propagation (to the right) into the state e considered here, are ubiquitous. In addition, a “maximal nonlinear invasion speed” will be introduced in <ref>. According to the expression <ref> of (·), this maximal linear invasion speed is nonnegative (but might vanish). 
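To make this definition concrete, here is a short illustration in the scalar case, added here for the reader's convenience and not part of the surrounding statements; we write c_lin for the maximal linear invasion speed defined above:
\[
d=1,\qquad u_t = u_{xx} - V'(u) = u_{xx} + f(u),\qquad f := -V',\qquad f(e)=0,\qquad \mu_1 = V''(e).
\]
If \(\mu_1 < 0\), then \(f'(e) = -\mu_1 > 0\) and
\[
c_{\mathrm{lin}} \;=\; 2\sqrt{-\mu_1} \;=\; 2\sqrt{f'(e)} ,
\]
which is the classical linear spreading speed of the Fisher-KPP equation linearized at the invaded state; if instead \(\mu_1 \ge 0\), then \(c_{\mathrm{lin}} = 0\).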
It follows from the expression <ref> of λ_c,±(·) that, for every j in {1,…,d}, if c>j , or equivalently if -c^2/4 < μ_j , then the eigenvalues λ_c,-(μ_j) and λ_c,+(μ_j) of the linear system <ref> are real and distinct, and the corresponding eigenvectors <ref> are real (see <ref>). §.§.§ Pushed travelling waves and fronts invading a critical point Let us keep the previous notation, and let ϕ denote the profile of a wave invading the critical point e at the speed c. According to the previous considerations, there must exist some nonpositive eigenvalue λ among the quantities <ref> such that lnϕ(ξ)-e/ξ→λ as ξ→+∞ . Let us call steepness of the wave under consideration the quantity λ defined by the limit <ref>. A travelling wave (front) invading the critical point e at some positive speed c is said to be pushed if its steepness λ (<ref>) satisfies the inequality λ<-c/2 , or equivalently c/2<λ , or equivalently if the following limit holds: ϕ(ξ) -e = o(e^-1/2cξ) as ξ→+∞ . As shown on <ref>, if the least eigenvalue μ_1 of D^2V(e) is negative then the pushed (or “steep”) character is non-generic among the solutions of the differential system <ref> (with c greater than ) approaching e at the right end of space. However, when pushed travelling fronts exist, these fronts are approached by solutions of the parabolic system <ref> for a “large and relevant” set of initial conditions, at least if their speed is greater than the maximal linear invasion speed (see for instance <cit.> in the scalar case d equals 1, and <ref> below — the main result of this paper — in the vector case d larger than 1). On the other hand, if μ_1 is nonnegative (in particular if e is a local minimum point of V), then all waves (fronts) invading e at a positive speed are “pushed” in the sense of <ref>, although usually not qualified as such. §.§.§ Genericity/counting arguments related to the profiles of pushed travelling fronts The following statements are proved in <cit.> (see also <cit.>): for a generic potential V, * the set of profiles of pushed travelling fronts is discrete, * and, for every pushed travelling front, * the speed of the front is greater than the maximal linear invasion speed of the critical point invaded by the front, * the profile of the front approaches a single “invading” critical point at the left end of space (compare with <ref>), this invading critical point is a nondegenerate local minimum point of V, * the profile of the front approaches the invaded critical point at the right end of space (the invading critical point at the left end of space) tangentially to the least eigenvalue of the Hessian D^2V(·) at this invaded (invading) critical point. A rough justification of these statements is provided by the following counting arguments. Let us recall that the Morse index of a nondegenerate critical point of V is the number of negative eigenvalues of the Hessian D^2V at this critical point. If e_- and e_+ are two nondegenerate critical points of V and c is a positive speed, * the dimension of the unstable manifold of the equilibrium (e_-,0_^d) for the differential system <ref> is equal to d-m(e_-), * and if denotes the smallest index j in {1,…,d} such that the jth eigenvalue μ_j of D^2V(e_+) (by increasing order as in <ref>) is greater than -c^2/4, then the dimension of the “steep stable” manifold of (e_+,0_^d) is equal to d-+1 (see <ref>). 
The profiles of pushed fronts connecting e_- (the invading critical point) to e_+ (the invaded one) live in the intersection between these two manifolds, which is at least one-dimensional if nonempty. As a consequence, with the additional freedom provided by the speed parameter c, a transverse intersection between these two manifolds cannot occur unless m(e_-) is equal to 0 and is equal to 1, and in this case this (transverse) intersection is made of isolated trajectories; and a similar counting argument supports conclusion <ref> (see <cit.> for details). These statements are not directly involved in the results of this paper nor in their proofs, but they will be called upon in some comments, and they shed light on the general picture. In particular, they show that, among pushed travelling fronts, the most relevant ones are those with a speed exceeding the maximal linear invasion speed of the invaded critical point (the others do generically not exist). It is precisely the convergence towards these pushed travelling fronts that the variational techniques used in this paper most easily apply to, and, as a matter of fact, only this convergence will be addressed. §.§.§ Parametrization of pushed waves and fronts invading a critical point and travelling at a speed exceeding its maximal linear invasion speed Let us keep the notation introduced before the previous <ref>. For every speed c in (,+∞) and for every small enough positive quantity δ, there exists a map eδc:B_^d(e,δ)→^d such that, for every (u,v) in B_^d(e,δ)×^d, the following two conditions are equivalent: * v = eδc(u), * the solution ξ↦(ϕ(ξ),ψ(ξ)) of system <ref> with initial condition (u,v) at ξ equals 0 remains in B_^d(e,δ)×^d for all ξ in [0,+∞) and is the profile of a pushed travelling wave invading e. In other words, the set eδc defined as the graph, over B_^d(e,δ), of the map eδc(·): eδc = {(u,eδc(u)):u∈B_^d(e,δ)} can be seen as a “steep” local stable manifold of the equilibrium (e,0_^d) for the differential system <ref>, where “steep” means “at an exponential rate which is greater than c/2”. For every u in ∂ B_^d(e,δ), let us denote by ϕ_c,u(·) the (maximal) solution of the differential system <ref> for the initial condition (ϕ_c,u(0),ϕ_c,u'(0)) = (u,eδc(u)) (so that ϕ_c,u(·) is the profile of a pushed travelling wave invading e at the speed c), and let us consider the set eδc = { u∈∂ B_^d(e,δ): ϕ_c,u is the profile of a pushed front invading e at the speed c} . Depending on the value of the speed c, an explicit value of a quantity δ ensuring that the conclusions of <ref> hold is provided by prop:extension_local_steep_stable_manifold. §.§ Invasion through profiles of pushed fronts Let us keep the previous notation, and in particular let us still consider a speed c which is greater than the maximal linear invasion speed . A solution (x,t)↦ u(x,t) of the parabolic system <ref> is said to invade e at the speed c through profiles of pushed fronts if there exist a positive quantity δ and a function t↦x̃(t) in ^1([0,+∞),) such that the following conclusions hold: * the conclusions of <ref> hold for the speed c and the parameter δ; * the set eδc is nonempty; * for t large enough positive, u(x̃(t),t) is in ∂ B_^d(e,δ); * the following three limits hold as t goes to +∞: * x̃'(t)→ c, * (u(x̃(t),t),eδc) → 0, * for every positive quantity L, sup_y∈[-L,+∞)u(x̃(t)+y,t) - ϕ_c,u(x̃(t),t)(y)→ 0 . 
* The profiles ϕ_c,u(x̃(t),t)(·) involved in the limit <ref> are profiles of pushed waves invading e, but not necessarily of pushed fronts (these profiles are therefore not necessarily defined up to -∞). However, due to the limit provided by conclusion <ref>, these profiles get closer and closer to profiles of actual pushed fronts as time goes to +∞. <Ref> is actually unchanged if, in the limit <ref>, the vector u(x̃(t),t) parametrizing the profile ϕ_c,u(x̃(t),t)(·) is replaced with a “close” vector in the set eδc, so that this profile is replaced with the profile of an actual pushed travelling front (this alternative approach is used in Theorem 1 of <cit.>). * Whatever the choice made for its formulation, <ref> does not ensure the convergence towards a single pushed travelling front. More precisely, the ω-limit set of the function t↦ u(x̃(t),t), defined as = ⋂_t>0⋃_s>t{u(x̃(s),s)} , is a nonempty compact connected subset of ∂ B_^d(e,δ), and is, according to condition <ref>, included in the set eδc. If this set is reduced to a singleton {}, then the limit <ref> can be replaced with the simpler and more precise limit sup_y∈[-L,+∞)u(x̃(t)+y,t) - ϕ_c,(y)→ 0 as t→+∞ , involving the profile ϕ_c, of a single pushed travelling front. Notice that the set is necessarily reduced to such a singleton if the set eδc is discrete, which, according to the statements of <ref>, is true for a generic potential V. * <Ref> does not provide any information concerning the behaviour of the solution in the wake (to the left) of the invasion. Let us however briefly mention that, if a solution invading e at the speed c through the profiles of pushed fronts is, in addition, close to a nondegenerate minimum point of V at the left end of space, then, under generic assumptions on the potential V, the global behaviour of this solution is rather well understood <cit.>. §.§ Energy in a travelling frame Let (x,t)↦ u(x,t) denote a solution of system <ref>, let c denote a positive quantity, and let us consider the function (ξ,t)↦ v(ξ,t) defined as: v(ξ,t) = u(x,t) for x = ct + ξ . This function v(·,·) is a solution of the system v_t - cv_ξ = - ∇ V(v) + v_ξξ . Multiplying this equation by e^cξ v_t and integrating over leads us to introduce the energy below (<ref>), and for that purpose the following weighted Sobolev spaces: H^1_c(,^d) = { w ∈(,^d) :  the functions ξ↦ e^1/2cξ w(ξ) and ξ↦ e^1/2cξ w'(ξ) are in L^2(,^d)} , and ce = { w ∈(,^d) : the function ξ↦ w(ξ) - e is in H^1_c(,^d) } . For every w in H^1_c,e(,^d), let us call energy (Lagrangian) of w in the frame travelling as speed c, and let us denote by _c[w] the quantity defined by the integral: _c[w] = ∫_e^cξ(1/2 w'(ξ)^2 + V(w(ξ))) dξ . Everywhere in the paper (including in the definition above), if u is a vector of ^d, the square of the euclidean norm u^2 of u is simply denoted by u^2. For w in H^1_c,e(,^d), the integral to the right hand of <ref> converges at the right end of , and according to assumption <ref>, it either converges to the left end of or is equal to +∞. Thus the expression <ref> defines a functional: H^1_c,e(,^d)→∪{+∞}. For w in H^1_c,e(,^d)∩ (in this case w is bounded), the same integral also converges at the left end of and _c[w] is in . The following proposition, proved in <ref>, shows that this energy <ref> defines a Lyapunov function for the solutions of system <ref> that belong to H^1_c,e(,^d). Recall that (S_t)_t≥0 denotes the semi-flow of this system in . 
For every positive quantity c and every solution (x,t)↦ u(x,t) of the parabolic system <ref>, if the initial condition x↦ u(x,t=0) is in ce (in addition to being in ), then the same is true for the profile x↦ u(x,t) of the solution at every nonnegative time t. In this case, the function t↦_c[v(·,t)] (for the function v(·,·) defined in <ref> and the functional _c[·] defined in <ref>) is continuous on [0,+∞), differentiable on (0,+∞), and, for every positive time t, the integral D_c(t) = ∫_e^cξ v_t(ξ,t)^2 dξ is finite, and the following equality holds: d/dt_c[v(·,t)] = - D_c(t) . In addition, for every nonnegative time t, the restriction of the map S_t (the semi-flow at time t) to the space H^1_c,e(,^d)∩ defines a continuous map from this space to itself, for the sum of the H^1_c,e(,^d)-norm and the -norm. §.§ Variational structure in travelling frames §.§.§ Infimum of the energy in a travelling frame For every positive quantity c, let us consider the quantity ℑ(c) = inf_w∈ce_c[w] ; and, for every function w in ce, and every real quantity ξ_0, let us consider the function T_ξ_0w defined as T_ξ_0w(ξ) = w(ξ-ξ_0) . It follows from the equality _c[T_ξ_0w] = e^cξ_0_c[w] that the quantity ℑ(c) is equal to either 0 or -∞. In other words, the subsets _-∞ and _0 of (0,+∞), defined as _-∞ = {c∈(0,+∞): ℑ(c) = -∞} and _0 = {c∈(0,+∞): ℑ(c) = 0 } are complementary subsets of (0,+∞); with symbols, _-∞⊔_0 = (0,+∞) . §.§.§ Lower quadratic hull of the potential at the invaded critical point According to assumptions <ref> and <ref>, the set {μ∈: for every u in ^d, V(u)≥1/2μ (u-e)^2 } is nonempty, and bounded from above by μ_1. Let us denote by the supremum of this set (in this notation, the index “quad-hull” refers to “the curvature of the lower quadratic hull” of the graph of V, centred at (e,0)), see <ref>. Then, according to this definition, -∞<≤μ_1 and, for every u in ^d, V(u)≥1/2 (u-e)^2 , and since according to <ref> the quantity is negative, it follows that < 0 . Let us write = 2√(-) , so that = - ^2/4 and ≥ . §.§.§ Maximal nonlinear invasion speed Some basic properties of the sets _-∞ and _0 are stated in <ref> (<ref>). Three of these properties are: _-∞≠∅ and (0,) ⊂_-∞ and [,+∞) ⊂_0 ; notice that the quantity may vanish, so that the first of these properties is not a consequence of the second. These properties set the ground for the next definition. Let us call maximal nonlinear invasion speed of the critical point e the quantity defined as = sup(_-∞) (this quantity is noted as c^† in <cit.>). It follows from this definition and from the properties <ref> that 0≤≤≤ and 0< . It will turn out, as a consequence of <ref> below (the main results of this paper) that the two sets _-∞ and _0 are actually nothing but the intervals (0,) and [,+∞) (see <ref> below); this additional information will not be used until then. §.§.§ Decay speed and variational invasion speed of a solution Let (x,t)↦ u(x,t) denote a solution of system <ref> (in the space ). For every nonnegative time t, let us consider (following <cit.>) the sets (t) and (t) defined as (t) = {c∈(0,+∞): u(·,t)∈ce} and (t) = {c∈(0,+∞): u(·,t)∈ce and _c[u(·,t)]<0} , so that (t) is a subset of (t). According to the definition <ref> of the maximal nonlinear invasion speed , (t) = {c∈(0,): u(·,t)∈ce and _c[u(·,t)]<0}⊂ (0,) . Let us consider the suprema sup((t)) and sup((t)) of the two sets (t) and (t), with the convention that each supremum equals 0 if the corresponding set is empty. 
According to <ref>, both sets are non-decreasing for inclusion with respect to t, so that both suprema are non-decreasing. This leads to the following definition. Let us call, respectively, decay speed and variational invasion speed of the solution u the quantities [u] and [u] defined as [u] = lim_t→+∞sup((t)) and [u] = lim_t→+∞sup((t)) . It follows from this definition that 0≤[u]≤ and [u]≤[u] ≤ +∞ . Similarly, if w is a function in let us denote by [w] and [w] the quantities [u] and [u] defined by the solution u of system <ref> for the initial condition w at time 0. The following proposition, proved in <ref>, is a direct consequence of the definition of the variational invasion speed. For every c in (,+∞), the map ∩ce→ [0,] , w↦[w] is lower semi-continuous with respect to the topology induced by the sum of the -norm and the H^1_c(,^d)-norm. For a stronger version of this result (lower semi-continuity of the invasion speed with respect to a wider class of functions) when the invaded critical point e is a nondegenerate local minimum point of V, see <cit.>. §.§ Main results Let V denote a potential function in ^2(^d,) and e denote a critical point of V, and let us assume that assumptions <ref> and <ref> hold. The following two statements call upon: * the notation (<ref>) and (<ref>) denoting, respectively, the maximal linear invasion speed and the maximal nonlinear invasion speed of the critical point e, * the notation [u] and [u] (<ref>) denoting, respectively, the decay speed and the variational speed of a solution u of system <ref>, * <Ref> of a pushed travelling front invading e, and <ref> of invasion through profiles of such pushed fronts. The first of these two statements (<ref>) is the main result of this paper, and the second (<ref>, proved in <ref>), is mainly a consequence of <ref>. Every solution u of the parabolic system (1.1) satisfying the condition < [u] < [u] invades the critical point e at its variational speed [u] through profiles of pushed fronts. The following two conditions are equivalent: * <; * there exists a pushed front invading e at a speed which is greater than . Moreover, if these conditions hold then the following two additional conclusions also hold: * there exists a pushed front invading e at the speed ; * the profile of every pushed front invading e at the speed is a global minimizer of the energy _[·] in e. * A sufficient condition for the condition <ref> of <ref> to hold is: there exist speeds c and c' such that ≤ c and < c' and _c[u(·,0)]< 0 and u(·,0)∈c'e . This condition is, however, more demanding that condition <ref>, especially concerning the rate at which the initial condition u(·,0) approaches e at the right end of space. * The key (and costly, if μ_1 is negative or equivalently is positive) assumption of <ref> is the inequality < [u] of <ref>, which ensures that invasion occurs at a speed which is greater than the maximal linear invasion speed , and implicitly requires the strict inequality < (see <ref>). * By contrast, if μ_1 is nonnegative (or equivalently if equals 0), then it follows from <ref> that the condition <ref> is satisfied for a large set of solutions, and it follows from <ref> that there exists (at least) one (pushed) travelling front invading e at the (positive) speed . This generalizes (in particular) <cit.>, <cit.>, and <cit.>. * The second inequality of <ref> is required for the variational arguments involved in the proof. 
This condition is stronger than the one required in the setting of parabolic scalar equations <cit.>, where the speed c of the (unique) pushed front is a priori known, and global convergence towards this pushed front only requires that the initial condition approach the critical point at an exponential rate which is larger than λ_c,+(μ_1), <cit.>. Unfortunately <ref> says nothing concerning the behaviour of solutions for which the first inequality of <ref> is fulfilled, but the profile is say only in λ_[u],+(μ_1)e. To the best knowledge of the authors, this is an open question. * <Ref> only deals with convergence towards pushed front travelling at speeds that are greater than the maximal linear invasion speed , and <ref> deals with the existence of those pushed fronts only. Pushed fronts travelling at speeds not greater than may exist for certain potentials (an easy way to build such an example is to consider two uncoupled scalar equations, see for instance <cit.>), but as already mentioned in <ref>, they do not exist for a generic potential, <cit.>. * <Ref> is the analogue, in the simpler setting of a spatial domain equal to considered here, of <cit.> which is concerned with gradient systems in infinite cylinders (see also <cit.> for scalar parabolic equations and <cit.> for scalar equations in cylinders). The parameter ν_0 of <cit.> is the “cylinder” analogue of the quantity denoted here by μ_1. §.§ Implications on the variational structure in travelling frames The following corollary, proved in <ref>, is a direct consequence of <ref>. The following equalities hold: _-∞ = (0,) , or equivalently _0 = [,+∞) . In addition, if is larger than (in particular if equals 0), then < , and in this case is the only speed c in (0,+∞) for which the energy _c[·] has a global minimizer in ce which is not identically equal to e. Thus there are exactly four possible configurations for the respective positions of the quantities 0, , , and on the real line (see <ref>): * 0 = < <, * 0 < = =, * 0 < = <, * 0 < < <, and, according to <ref>, there exists a pushed front invading e at a speed larger than if and only if is larger than , that is in cases <ref>. §.§ Short historical review Global convergence towards pulled travelling fronts for scalar equations was first established in the celebrated work of Kolmogorov, Petrovskii, and Piskunov <cit.>. The adjectives “pulled” and “pushed” were introduced by Stokes in <cit.>. Concerning global convergence towards pushed travelling fronts for scalar equations, the first results were obtained by Kanel <cit.> in the “combustion” (“ignition”) case. Fife and McLeod proved the global stability of fronts propagating into stable equilibria (“bistable fronts”, which can be seen as a particular class of pushed fronts) <cit.> and of stacked families of such bistable fronts <cit.>. Still in the scalar case, global convergence towards general pushed fronts was proved by Stokes <cit.> and Rothe <cit.>, and extended to the setting of cylinders by Roquejoffre <cit.>. In all those references, proofs of global convergence rely on comparison principles, and the main result of this paper (<ref> above) can be seen as an extension to systems of <cit.>, where maximum principles are replaced with variational arguments. Still in the scalar case, a minmax expression of the minimal speed of monotone fronts invading a critical point, and therefore a characterization of the nature (pushed or pulled) of the corresponding front was provided by Hadeler and Rothe in <cit.>. 
For a broader picture and a thorough review of experimental observations of invasion processes across the sciences, see <cit.>, and for a deeper explanation of the difference between pushed and pulled travelling fronts, see <cit.>. For parabolic gradient systems as those considered in this article (when the dimension d exceeds 1), maximum principles do not hold any more in general. However, many of the global stability results known in the scalar case can still be recovered by variational methods for such systems. The fundamental observation underlying the proofs of such extensions is the fact that a variational structure (an energy decreasing with time, at least formally) exists not only in standing frames, but also in frames travelling at any constant velocity. This fact is known for a long time, and was used for instance by Fife and McLeod in their proof of the global stability of bistable fronts in the scalar case <cit.> and by Roquejoffre <cit.>. However, attempts to fully embrace the implications of this rich variational structure are more recent, and originated with the works of Heinze <cit.>, and especially of Muratov and his collaborators <cit.> (by the way, as mentioned in remark <ref> above, <ref> is essentially contained in <cit.>, and conclusion <ref> of prop:basic_properties_variational_structure is a reformulation of <cit.>). Pushing further these ideas, Gallay and the second author proved global convergence results towards travelling fronts invading stable equilibria for parabolic systems of the form <ref> <cit.>, and a rather comprehensive description of the asymptotic behaviour of solutions that are stable at both ends of (“bistable” solutions) was obtained by the second author, for parabolic systems <cit.>, for their hyperbolic analogues <cit.>, and for radially symmetric solutions of parabolic systems in higher space dimension <cit.>, under generic assumptions on the potential V <cit.>. In the meanwhile, the same variational structure has been successfully applied to a broader range of settings: harmonic heat flow <cit.>, heterogeneous environments <cit.>, FitzHugh–Nagumo system <cit.>, two-dimensional heteroclinic travelling waves <cit.>. In the setting of scalar hyperbolic equations, a set of new ideas and techniques was introduced by Gallay and Joly to derive from the same gradient structure the global stability of bistable travelling waves <cit.>. Their approach turns out to be especially relevant to prove global convergence towards pushed travelling fronts, as was shown by Luo <cit.> still in the same setting of scalar hyperbolic equations. It is the same set of ideas and techniques, adapted to parabolic systems, that are the main building blocks of the proof of <ref> provided here. The initial motivation for this work is a recent result of the first author <cit.> about the existence of travelling waves connecting degenerate minimum sets of the potential V for system <ref>, proved by a completely different approach, which extends the method introduced by Alikakos and Katzourakis in <cit.> to curves taking values in an infinite-dimensional Hilbert space, in the spirit of earlier works by Monteil and Santambrogio <cit.> and Smyrnelis <cit.>. As a matter of fact, as already mentioned in remark <ref> above, <ref> extends the existence part of <cit.>; more refined results in the specific setting of propagation into degenerate minimum sets will be provided in the forthcoming work <cit.>. 
§ PRELIMINARIES Let us consider a potential function V in ^2(^d,) and a critical point e of V, satisfying assumptions <ref> and <ref>. §.§ Global existence of solutions and regularization Among various possible choices for the functional space where the semi-flow of system <ref> can be considered, the space (see for instance <cit.>) fits well with the purpose of this article and the variational methods involved in the proofs: it contains bounded solutions (among which travelling fronts) and the regularity of its functions allows to consider the energy of a solution from time zero (<ref>). The following proposition is standard (for a proof see for instance <cit.>). For every function u_0 in , system <ref> has a unique globally defined solution t↦ S_t u_0 in ^0([0,+∞),) with initial condition u_0. In addition, the quantity lim sup_t→+∞x↦(S_t u_0)(x)_ is bounded from above by a quantity depending only on V. In addition, the parabolic system <ref> has smoothing properties (Henry <cit.>). Due to these properties, since V is of class ^2 and thus the nonlinearity ∇ V is of class ^1, for every quantity α in the interval (0,1), every solution t↦ S_t u_0 in ^0([0,+∞),) actually belongs to ^0((0,+∞),2,α)∩^1((0,+∞),0,α), and, for every positive quantity ε, the quantities sup_t≥εS_t u_0_2,α and sup_t≥εd(S_t u_0)/dt(t)_0,α are finite. In addition, there exists a quantity (radius of an attracting ball for the ^1-norm), depending only on V, such that, for every large enough positive time t, S_t u_0_^1(,^d)≤ . §.§ Asymptotic compactness The next lemma follows from the bounds <ref> above. For every solution (x,t)↦ u(x,t) of system <ref>, and for every sequence (x_n,t_n)_n∈ in ×[0,+∞) such that t_n→+∞ as n→+∞, there exists a entire solution u_∞ of system <ref> in ^0(,2)∩^1(,0) , such that, up to replacing the sequence (x_n,t_n)_n∈ by a subsequence, D^2,1u(x_n+·,t_n+·)→ D^2,1u_∞ as n→+∞ , uniformly on every compact subset of ^2, where the symbol D^2,1v stands for (v,v_x,v_xx,v_t) (for v equal to u or u_∞). See <cit.> or the proof of <cit.>. §.§ Invaded critical point at the origin of Rd For convenience, it will be assumed all along the current <ref> (and along most of the next <ref>) that e = 0_^d . This assumption amounts to replacing the initial potential function u↦ V(u) by the “new” potential function u↦ V(e+u), and it can be made without loss of generality; indeed, even if assumption <ref> is not necessarily satisfied by this new potential, this assumption will not be directly used in the proof: only its consequences (the global existence of solutions and their asymptotic compactness stated in the two subsec:compactness above) will, and these consequences still hold after a translation in the state variable u. §.§ Time derivative of energy in a travelling frame The aim of this subsec:proof_time_derivative_energy_travelling_frame is to prove <ref>. For every positive quantity c, following the notation H^1_c(,^d) introduced in <ref>, let us introduce the following weighted Sobolev spaces: L^2_c(,^d) = {w∈(,^d):  the function ξ↦ e^1/2cξ w(ξ) is in L^2(,^d) } , H^2_c(,^d) = {w∈ H^1_c(,^d):  the function ξ↦ e^1/2cξ w”(ξ) is in L^2(,^d)} . The proof follows from standard results of analytic semi-group theory, see <cit.>, and similar statements in related settings can be found in the literature, see for instance <cit.>, <cit.>, and <cit.>. 
The setting considered in these two last references (scalar equations in cylindrical domains) differs from the one considered here, however the semi-group arguments proving the result are unchanged. Here are some elements of the proof. The operator ∂_xx:H^2_c(,^d)→ L^2_c(,^d) is a densely defined sectorial operator of L^2_c(,^d) and since the values u(x,t) taken by the solution are bounded (uniformly with respect to x in and t in [0,+∞)), the nonlinearity w↦∇ V(w) can be considered as a globally Lipschitz map of L^2_c(,^d) onto itself (up to changing the values of ∇ V outside of a large ball of ^d containing all the values taken by the solution). Thus, the last conclusion of <ref> (semi-flow in H^1_c(,^d)∩ and continuity with respect to the initial condition in this space) follows from <cit.>, and it follows from <cit.> that, for every α in (0,1), the solution t↦ u(·,t) belongs to the space ^0([0,+∞),H^1_c(,^d)) ∩ ^α((0,+∞),H^2_c(,^d)) ∩ ^1,α((0,+∞),L^2_c(,^d)) ; in particular, its time derivative t↦ u_t(·,t) belongs to the space ^α((0,+∞),L^2_c(,^d)) , which means that D_c is uniformly continuous on (0,+∞). It follows from the first among these two conclusions that the function t↦_c[v(·,t)] is continuous on [0,+∞). Regarding its differentiability, a formal derivation under the integral sign yields: d/dt_c[v(·,t)] = ∫_ e^cξ(v_ξ· v_tξ + ∇ V(v)· v_t) dξ , which provides the intended conclusion after integrating by parts the term e^cξv_ξ· v_tξ. However, this computation is not rigorously justified since the function v_tξ may not belong to L^2_c(,^d), or even exist. A way to circumvent this issue is to work with the discrete time derivative of v_ξ. Notice that, due to local parabolic estimates and since ∇ V is of class ^1, the function v is of class ^1 in time and ^2 in space on ×(0,+∞). As a consequence, for all positive quantities h and L, introducing the functions v^h and v^h_ξ defined as v^h(ξ,t) = v(ξ,t+h/2) - v(ξ,t-h/2)/h and v^h_ξ(ξ,t) = v_ξ(ξ,t+h/2) - v_ξ(ξ,t-h/2)/h , the following integration by parts holds: ∫_-L^L e^cξ v^h_ξ(ξ,t)· v_ξ(ξ,t) dξ = e^cLv^h(L,t)· v_ξ(L,t)-e^-cLv^h(-L,t)· v_ξ(-L,t) -∫_-L^L e^cξ v^h(ξ,t) ·(v_ξξ(ξ,t)+cv_ξ(ξ,t)) dξ . It follows from the regularity of v that ∫_-L^L e^cξ v^h_ξ(ξ,t)· v_ξ(ξ,t) dξ = e^cLv_t(L,t)· v_ξ(L,t)-e^-cLv_t(-L,t)· v_ξ(-L,t) -∫_-L^L e^cξ v_t(ξ,t) ·(v_ξξ(ξ,t)+cv_ξ(ξ,t)) dξ + O_h→ 0(h) . For all positive quantities T_1 and T_2 satisfying the inequalities h/2<T_1<T_2, integrating this equality on the interval [T_1,T_2] and applying Fubini's Theorem yields: 1/h∫_T_2-h/2^T_2∫_-L^L e^cξ v_ξ(ξ,t+h/2)· v_ξ(ξ,t) dξ dt - 1/h∫_T_1-h/2^T_1∫_-L^L e^cξ v_ξ(ξ,t+h/2)· v_ξ(ξ,t) dξ dt =∫_T_1^T_2( e^cL v_t(L,t)· v_ξ(L,t) - e^-cL v_t(-L,t)· v_ξ(-L,t)) dt -∫_-L^L ∫_T_1^T_2 e^cξ v_t(ξ,t) ·(v_ξξ(ξ,t)+cv_ξ(ξ,t)) dt dξ + O_h→ 0(h) ; according to the continuity of v_ξ, passing to the limit as h goes to 0 yields: 1/2∫_-L^L e^cξ v_ξ^2(ξ,T_2) dξ -1/2∫_-L^L e^cξ v_ξ^2(ξ,T_1) dξ =∫_T_1^T_2( e^cL v_t(L,t)· v_ξ(L,t) - e^-cL v_t(-L,t)· v_ξ(-L,t)) dt -∫_-L^L ∫_T_1^T_2 e^cξ v_t(ξ,t) ·(v_ξξ(ξ,t)+cv_ξ(ξ,t)) dt dξ ; and, since t ↦ u(·,t) belongs to the space <ref>, passing to the limit as L goes to +∞ along a suitable subsequence yields, after another application of Fubini's Theorem, 1/2∫_ e^cξ v_ξ^2(ξ,T_2) dξ - 1/2∫_ e^cξ v_ξ^2(ξ,T_1) dξ = -∫_T_1^T_2∫_ e^cξ v_t(ξ,t) ·(v_ξξ(ξ,t) + cv_ξ(ξ,t)) dξ dt . Another consequence of Fubini's Theorem is the identity ∫_ e^cξ V(v(ξ,T_2)) dξ - ∫_ e^cξ V(v(ξ,T_1)) dξ = ∫_T_1^T_2∫_ e^cξ∇ V(v(ξ,t)) · v_t(ξ,t) dξ dt . 
It follows from <ref> that _c[v(·,T_2)]-_c[v(·,T_1)] = -∫_T_1^T_2 D_c(t) dt , which, by the Fundamental Theorem of Calculus, implies that t↦_c[v(·,t)] is differentiable on (0,+∞) and that its derivative at every positive time t is equal to -D_c(t), which completes the proof. §.§ Energy of a pushed front in a travelling frame Let ϕ denote the profile of a pushed front travelling at some (positive) speed c and invading 0_^d. According to the notation <ref> for the eigenvalues of the linearized differential system <ref> at 0_^d and to the <ref> of a pushed travelling front, there exists an integer j in {1,…,d} such that, for k in {0,1,2} and if D^kϕ denotes the k-th derivative of ϕ, D^k ϕ(ξ) = O(e^λ_c,-(μ_j)ξ) as ξ→+∞ . According to the expression <ref> of λ_c,-(μ) and to the <ref> of a pushed travelling front, 2λ_c,-(μ_j) < -c . The following result (see <ref>) was first established by Muratov <cit.> in the setting of gradient parabolic systems in cylinders. For the sake of completeness, a proof in the present setting is provided below. For every speed c' in the interval (0,-2λ_c,-(μ_j)), the following equality holds: _c'[ϕ] = (1-c/c') ∫_ e^c'ξϕ'(ξ)^2  dξ ; in particular, the energy of a pushed front in the frame travelling at its own speed vanishes: _c[ϕ] = 0 . Multiplying the differential system <ref> governing the profile of ϕ by e^c'ξϕ'(ξ) and integrating over leads to (omitting the argument ξ of ϕ and its derivatives in the integrand): ∫_ e^c'ξ(ϕ'·ϕ” + c (ϕ')^2 - ∇ V(ϕ)·ϕ') dξ = 0 , or equivalently, (c'-c) ∫_ e^c'ξ (ϕ')^2  dξ = ∫_ e^c'ξ(ϕ'·ϕ” + c' (ϕ')^2 - ∇ V(ϕ)·ϕ') dξ = ∫_ e^c'ξ(-1/2c'(ϕ')^2 + c' (ϕ')^2 +c' V(ϕ)) dξ = c'∫_ e^c'ξ(1/2(ϕ')^2 + V(ϕ)) dξ , which is the intended equality <ref>. Choosing c' equal to c in this equality <ref> yields the second equality <ref>. §.§ Poincaré inequalities in weighted Sobolev spaces As was already observed by Muratov <cit.>, Poincaré inequalities in the weighted Sobolev spaces H^1_c(,^d) are a key ingredient for exploiting the variational structure in travelling frames, in that they provide lower bounds on the energy _c[·]. The following lemma is a variant of <cit.>, <cit.>, and <cit.>. It will be used in the proof of <ref> in the next subsec:lower_bound_energy_trav_frame, and furthermore all along the proof of <ref>. For every positive quantity c and every function v in H^1_c(,^d), the following conclusions hold. * The following limits hold: v(ξ) = o(e^-1/2cξ) as ξ→±∞. * For every real quantity ξ_0, every real quantity ξ_1 greater than ξ_0, and every positive quantity λ, the following inequalities hold: ∫_ξ_0^ξ_1 e^cξv'(ξ)^2 dξ ≥λ e^cξ_0v(ξ_0)^2 - λ e^cξ_1v(ξ_1)^2 + λ(c-λ)∫_ξ_0^ξ_1 e^cξv(ξ)^2 dξ , ∫_ξ_0^+∞ e^cξv'(ξ)^2 dξ ≥λ e^cξ_0v(ξ_0)^2 + λ(c-λ)∫_ξ_0^+∞ e^cξv(ξ)^2 dξ , ∫_ξ_0^+∞ e^cξv'(ξ)^2 dξ ≥c/2e^cξ_0v(ξ_0)^2 + c^2/4∫_ξ_0^+∞e^cξv(ξ)^2 dξ , ∫_-∞^+∞ e^cξv'(ξ)^2 dξ ≥c^2/4∫_-∞^+∞e^cξv(ξ)^2 dξ . In addition, if v is not identically equal to 0_^d on then inequality <ref> is actually strict, and so is inequality <ref> if v is not identically equal to 0_^d on [ξ_0,+∞). Inequality <ref> is the limit of inequality <ref> as ξ_1 goes to +∞, and inequality <ref> is nothing but inequality <ref> for λ equal to c/2. This choice of λ is optimal to maximize the term involving the integral of v(ξ)^2 on the right-hand side of these inequalities; in particular, it is the best possible choice if the integration domain is the whole real line (inequality <ref>).
In inequality <ref>, choosing a quantity λ which is larger than c/2 (say between c/2 and c) increases the size of the term involving v(ξ_0)^2 at the expense of the integral (see statement <ref> of <ref>) — and choosing λ smaller than c/2 does not make sense. In inequality <ref> by contrast, choosing λ smaller than c/2 can make sense since this decreases the size of the negative term on the right-hand side (see the proof of <ref>). For every quantity ξ_1 greater than ξ_0, e^cξ_1v(ξ_1)^2 - e^cξ_0v(ξ_0)^2 = ∫_ξ_0^ξ_1 e^cξ(c v(ξ)^2 + 2 v(ξ)· v'(ξ) ) dξ . Since v is in H^1_c(,^d), the right-hand side of this inequality converges to a finite limit as ξ_1 goes to +∞; thus the same is true for the quantity e^cξ_1v(ξ_1)^2, and since the function ξ↦ e^cξv(ξ)^2 is in L^1(,^d), this limit is necessarily 0. The same argument shows that the quantity e^cξ_0v(ξ_0)^2 must also go to 0 as ξ_0 goes to -∞. This proves conclusion <ref>. For every positive quantity λ, using the polar identity 2 v(ξ)· v'(ξ) = - λ^-1 v'(ξ)^2 - λ v(ξ)^2 + (λ^-1/2v'(ξ)+λ^1/2v(ξ))^2 and multiplying the previous equality by λ, it follows that ∫_ξ_0^ξ_1 e^cξv'(ξ)^2 dξ = λ e^cξ_0v(ξ_0)^2 - λ e^cξ_1v(ξ_1)^2 + λ(c-λ) ∫_ξ_0^ξ_1 e^cξv(ξ)^2 dξ + λ∫_ξ_0^ξ_1e^cξ(λ^-1/2v'(ξ)+λ^1/2v(ξ))^2 dξ , and dropping the last (nonnegative) integral gives inequality <ref>. According to conclusion <ref>, passing to the limit as ξ_1 goes to +∞ in equality <ref> gives ∫_ξ_0^+∞ e^cξv'(ξ)^2 dξ = λ e^cξ_0v(ξ_0)^2 + λ(c-λ) ∫_ξ_0^+∞ e^cξv(ξ)^2 dξ + λ∫_ξ_0^+∞e^cξ(λ^-1/2v'(ξ)+λ^1/2v(ξ))^2 dξ , and dropping the last (nonnegative) integral gives inequality <ref> and inequality <ref> for λ equal to c/2, and finally inequality <ref> by passing to the limit as ξ_0 goes to +∞. To prove the “strict” version of inequalities <ref>, observe that, if the quantity ∫_ξ_0^+∞e^cξ(λ^-1/2v'(ξ)+λ^1/2v(ξ))^2 dξ vanishes, then there must exists some vector w of ^d such that, for every ξ in [ξ_0,+∞), v(ξ) = e^-λξ w , and if in addition λ is equal to c/2, then according to conclusion <ref> this vector w must be equal to 0_^d. In other words, if v is not identically equal to 0_^d on [ξ_0,+∞) then the integral <ref> is positive, and the same is true for the same integral over if v is not identically equal to 0_^d on . As can be seen on equality <ref>, the fact that the quantity e^cξ_1v(ξ_1)^2 goes to 0 as ξ_1 goes to +∞ is crucial to obtain a meaningful lower bounds on the integrals of e^cξv'(ξ)^2 at the left-hand side of inequalities <ref>. §.§ Lower bound on energy in a travelling frame Let us consider a positive quantity c_0 and a negative quantity μ_0 such that μ_0 = -c_0^2/4 , or equivalently c_0 = 2√(-μ_0) , and μ_0 ≤μ_1 , or equivalently ≤ c_0 , see <ref>. Let us consider a quantity c in [c_0,+∞) and a function v in H^1_c(,^d), and, in accordance with the notation λ_c,±(·) introduced in <ref>, let us consider the quantities λ_c,±(μ_0) = -c/2±√(c^2/4+μ_0) , see <ref>. The next lemma will rely on the assumption that the inequality V(v(ξ))≥1/2μ_0 v(ξ)^2 holds for ξ in or for ξ in some interval [ξ,+∞) of (see <ref>). Conclusion <ref> of this lemma is similar to <cit.> and conclusion <ref> is similar to <cit.>, <cit.>, and <cit.>. The following two statements hold. * If v is not identically equal to 0_^d and inequality <ref> holds for every ξ in , then _c[v]>0 . 
* If there exists ξ in such that inequality <ref> holds for every ξ in [ξ,+∞), then _c[v] ≥ e^cξ(-/c+1/2λ_c,-(μ_0)v(ξ)^2) , and if in addition c is greater than c_0, then there exists a positive quantity α, depending on c and c_0 (only) such that _c[v] ≥ -e^cξ/c+α∫_ξ^+∞ e^cξ(v'(ξ)^2+v(ξ)^2) dξ . It follows from the expression <ref> of _c[v] that _c[v] = ∫_e^cξ(1/2 v'(ξ)^2 + V(v(ξ))) dξ = ∫_e^cξ( 1/2(v'(ξ)^2 -c^2/4v(ξ)^2) + c^2-c_0^2/8v(ξ)^2 + (V(v(ξ))-μ_0/2v(ξ)^2) ) dξ , so that, since c is greater than or equal to c_0, if inequality <ref> holds for every ξ in , then _c[v] ≥1/2∫_e^cξ(v'(ξ)^2 -c^2/4v(ξ)^2) dξ , and if in addition v≢0_^d, inequality <ref> follows from inequality <ref> of <ref>. Statement <ref> is proved. Now, let us assume that there exists ξ in such that inequality <ref> holds for every ξ in [ξ,+∞). It again follows from the expression <ref> of _c[v] that _c[v] = ∫_-∞^ξ e^cξ(1/2v'(ξ)^2 + V(v(ξ))) dξ + ∫_ξ^+∞ e^cξ(1/2v'(ξ)^2 + V(v(ξ))) dξ ≥∫_-∞^ξ e^cξ V_min dξ + 1/2∫_ξ^+∞ e^cξ(v'(ξ)^2 + μ_0 v(ξ)^2) dξ = -e^cξ/c + 1/2∫_ξ^+∞ e^cξ(v'(ξ)^2 + μ_0 v(ξ)^2) dξ . Thus, if we consider a quantity λ satisfying λ(c-λ) = -μ_0 λ^2 - cλ - μ_0 = 0 λ = c/2±√(c^2/4+μ_0) = λ_c,±(μ_0) , then it follows from the lower bound <ref> on _c[v] and from inequality <ref> of <ref> that _c[v] ≥ -e^cξ/c + 1/2λ e^cξ v(ξ)^2 , so that, if λ is chosen equal to λ_c,-(μ_0) (which provides a better lower bound than if it is chosen equal to λ_c,+(μ_0)), then inequality <ref> follows. Inequality <ref> of statement <ref> is proved. Let α denote a positive quantity to be chosen below, and let us introduce the quantity Q defined as Q = _c[v] + e^cξ/c - α∫_ξ^+∞ e^cξ(v'(ξ)^2+v(ξ)^2) dξ ; proving the second inequality <ref> of statement <ref> amounts to prove that Q is nonnegative. It follows from the lower bound <ref> on _c[v] that Q ≥∫_ξ^+∞ e^cξ((1/2-α)v'(ξ)^2) - (c_0^2/8+α)v(ξ)^2) dξ . Thus, if α is smaller than or equal to 1/2, it follows from inequality <ref> of <ref> that Q ≥((1/2-α)c^2/4 - (c_0^2/4+α)) ∫_ξ^+∞ e^cξv(ξ)^2 dξ = 1/8(c^2 - c_0^2 - α(8+2c^2)) ∫_ξ^+∞ e^cξv(ξ)^2 dξ , so that, if α is chosen as α = min(c^2-c_0^2/8+2c^2,1/2) , then α is positive and the quantity Q is nonnegative. This proves inequality <ref>, and therefore completes the proof of statement <ref>. In the proof of statement <ref>, using Poincaré inequality <ref> (that is, choosing λ equal to c/2 instead of λ_c,-(μ_0)) would have led to the (slightly weaker) inequality _c[v] ≥ e^cξ(-/c+c/4v(ξ)^2) , which would actually have fulfilled the same needs as the stronger inequality <ref>, in the remaining of the paper. §.§ Basic properties of the variational structure §.§.§ Basic properties of the sets C-infty and C0 Let us recall the notation _-∞ and _0 introduced in <ref>. The following conclusions hold. * The set _0 contains the interval [,+∞); in addition, if c is the speed of a pushed travelling front invading 0_^d, then c< . * The set _-∞ contains the interval (0,). * The set _-∞ is open; equivalently, the set _0 is closed. Conclusion <ref> is close to <cit.>; the quantities and are denoted by μ_- and c_max in this reference, see <cit.>. Let us us consider a speed c in the interval [,+∞) and a function v in H^1_c(,^d) which is not identically equal to 0_^d. According to the last inequality of <ref>, the assumptions of statement <ref> of <ref> hold when the parameter μ_0 involved in this lemma is replaced with . According to this statement, the quantity _c[v] must be positive. This shows that the interval [,+∞) is included in the set _0. 
In addition, since the energy of a pushed travelling front in the frame travelling at its own speed vanishes (equality <ref> of <ref>), the quantity c cannot be the speed of a pushed travelling front invading 0_^d. Conclusion <ref> is proved. Let us prove conclusion <ref>. If the maximal linear invasion speed is zero (that is, if the least eigenvalue μ_1 of D^2V(0_^d) is nonnegative) there is nothing to prove. Let us assume that is positive, or equivalently that μ_1 is negative, and let c denote a quantity (speed) in the interval (0,). Let u_1 denote a normalized eigenvector of D^2V(0_^d) for the eigenvalue μ_1 and let χ:→ denote a smooth cutoff function satisfying χ(x) = { 1 if x≤ 0 , 0 if 1≤ x , . and, for all x in , 0≤χ(x)≤ 1 . Let ε denote a positive quantity, small enough so that c+ε < , and let us consider the function w:→^d, defined as: w(ξ) = χ(1-ξ) e^-c+ε/2ξ u_1 , and which belongs to H^1_c(,^d). It follows from this expression that, for all ξ in [1,+∞), 1/2 w'(ξ)^2 = 1/8 (c+ε)^2 e^-(c+ε)ξ , and that V(w(ξ)) ∼1/2μ_1 e^-(c+ε)ξ = - 1/8^2 e^-(c+ε)ξ as ξ→+∞ , so that e^cξ(1/2 w'(ξ)^2 + V(w(ξ))) ∼ - 1/8(^2-(c+ε)^2) e^-εξ as ξ→+∞ . It follows that _c[w]→ -∞ as ε→0 , ε >0 . This shows that c belongs to _-∞, and therefore proves conclusion <ref>. To prove conclusion <ref>, let us consider a quantity c in the set _-∞ (this quantity c is therefore positive). According to the definition <ref> of _-∞, there exists a function w in H^1_c(,^d) such that the energy _c[w] is negative. Let us consider again the smooth cutoff function χ satisfying the conditions <ref>. Let denote a (large) positive quantity to be chosen below and let us consider the function w̃ defined as w̃(x) = χ(x-) w(x) . Since w is in H^1_c(,^d), the quantity _c[w̃] goes to _c[w] as goes to +∞; thus, if is large enough positive, the quantity _c[w̃] is (also) negative; it follows that, for c' close enough to c, the quantity _c'[w̃] is again negative, which shows that c' belongs to _-∞ and yields the intended conclusion. §.§.§ A sufficient condition for invasion to occur It follows from conclusion <ref> of <ref> above that, if the maximal nonlinear invasion speed is positive (that is, if the least eigenvalue μ_1 of D^2V(0_^d) is negative), then the set _-∞ (which according to conclusion <ref> of <ref> contains the interval (0,)) is nonempty. The next proposition (which extends <cit.>) sets the ground for the upcoming <ref> which states that the the set _-∞ is actually always nonempty. For every positive quantity and every w in H^1_(,^d), if lim sup_L→+∞∫_-L^+∞(1/2w'(x)^2 + V(w(x))) dx < 0 , then, for every sufficiently small speed c (in the interval (0,]), the energy _c[w] is negative. Let denote a positive quantity and w denote a function in H^1_(,^d). Let us consider the function 𝔢:→ defined as 𝔢(x) = 1/2w'(x)^2 + V(w(x)) , and let us assume that assumption <ref> above holds, that is lim sup_x→-∞∫_x^+∞𝔢(y) dy < 0 . It follows from this assumption that there exists a (small) positive quantity ε and a (large) negative quantity such that x≤∫_x^+∞𝔢(y) dy ≤ -ε , see <ref>. Since w belongs to H^1_(,^d), there exists a (large, positive) quantity such that the following conclusions hold: x≥ V(w(x))≥ 0 , and thus 𝔢(x)≥ 0 , and ∫_^+∞ e^ x𝔢(x) dx ≤ε/2 , see again <ref>. Let c denote a quantity in (0,]. It follows from the implication <ref> that inequality <ref> still holds if is replaced with c, and it follows that _c[w] ≤[w] + ε/2 , where [w] = ∫_-∞^ e^cx𝔢(x) dx . 
Let us consider the function F:→ defined as: F(x) = ∫_x^𝔢(y) dy , so that F() = 0 and F'(x) = - 𝔢(x) . Integrating by parts the expression of [w] yields: [w] = [-e^cxF(x)]_-∞^ + ∫_-∞^ c e^cxF(x) dx = ∫_-∞^ c e^cxF(x) dx = ∫_-∞^ c e^cxF(x) dx + ∫_^ c e^cxF(x) dx . Since 𝔢(x) is nonnegative for x not smaller than , it follows from inequality <ref> that x≤ F(x)≤-ε . Thus it follows from the expression of [w] above that [w] ≤ -ε e^c + (e^c-e^c)max_x∈[,]F(x) . As a consequence, if the positive quantity c is small enough, the following inequality holds: [w] <-ε/2 , and the intended conclusion follows from inequality <ref>. <Ref> is proved. There exists a positive quantity ε such that (0,ε)⊂_-∞ . Let u_- denote a point of ^d such that V(u_-) is negative (the existence of such a point u_- follows from the negativity of stated in <ref>). Let us consider the cutoff function χ introduced in <ref>, and let us consider the function w defined as w(x) = χ(x)u_- . This function w fulfils the assumptions of <ref> so that, according to its conclusion, the intended conclusion <ref> follows. §.§.§ Lower semi-continuity of the variational invasion speed Let c denote a quantity (speed) in (,+∞), let w denote a function in the space ∩ H^1_c(,^d) (recall that the critical point e is assumed to be equal to 0_^d in this section), and let u denote the solution of the parabolic system <ref> for the initial condition w at time 0. According to the definition of the variational speed <ref>, [w] is equal to [u], and, for every positive quantity ε, there exists a nonnegative time t_0 such that _[w]-ε[u(·,t_0)] < 0 . According to the continuity of the semi-flow of system <ref> (restricted to ∩ H^1_c(,^d)) with respect to initial conditions (last assertion of <ref>), for every function w̃ in ∩ H^1_c(,^d) close enough to w for the -norm and the H^1_c(,^d)-norm, if ũ denotes the solution of system <ref> for the initial condition w̃ at time 0, then _[w]-ε[ũ(·,t_0)] < 0 ; it follows that [w̃] > [w]-ε , which is the intended conclusion. §.§ Expression of the dissipation in a travelling frame The expression of the dissipation in <ref> leads us to consider, for v in the weighted Sobolev space H^2_c(,^d) (defined in <ref>), the dissipation functional _c[v] defined as _c[v] = ∫_ e^cξ(- ∇ V(v) + c v' + v”)^2  dξ . According to this expression (omitting the argument ξ of v in the integrand), _c[v] = ∫_ e^cξ((∇ V(v)^2 + (c v' + v”)·(- 2∇ V(v) + c v' + v”)) dξ = ∫_ e^cξ(∇ V(v)^2 + (c v' + v”)·(- 2∇ V(v) + c v') + cv'· v” + (v”)^2) dξ , and according to the equality, (e^cξv')' = e^cξ(cv' + v”) , it follows from an integration by parts of the middle term of the integrand that _c[v] = ∫_ e^cξ(∇ V(v)^2 + 2 D^2V(v)· v'· v' - cv'· v” + cv'· v” + (v”)^2) dξ = ∫_ e^cξ(∇ V(v)^2 + 2 D^2V(v)· v'· v' + (v”)^2) dξ , see <cit.> for an identical expression in the scalar case. This expression will not be used as such, but it will justify the introduction, in <ref>, of another function F_c(t) with the purpose of controlling the amount of energy to the right of the invasion point, in a frame travelling at a speed close to the invasion speed. § PROOF OF THE MAIN RESULTS §.§ Set-up The proof closely follows the arguments of <cit.>. Let us consider: * a potential function V in ^2(^d,) and a critical point e of V satisfying assumptions <ref> and <ref>; * a solution (x,t)↦ u(x,t) of the parabolic system <ref> satisfying the condition <ref> of <ref>, that is: < [u] < [u] . Let c_0 and denote two quantities (speeds) satisfying: < c_0 < [u] < < [u] . 
According to the definition <ref> of [u] and [u] (<ref>), it may be assumed that, up to changing the origin of times, _c_0[u(·,0)] < 0 and u(·,0) ∈ H^1_(,^d) . Let us consider the (negative) quantity μ_0 defined as μ_0 = - c_0^2/4 ; It follows from inequalities <ref> that c_0 is less than ; thus, < c_0 < , or equivalently, < μ_0 < μ_1 . §.§.§ Maximal radius of stability for pushed invasion at the speed c0 Let us call maximal radius of stability for pushed invasion at the speed c_0 the quantity (c_0) defined as: (c_0) = inf{u:u∈^d and V(u)<1/2μ_0(u-e)^2} . According to this definition and to inequalities <ref>, 0 < (c_0) < +∞ , and for every u in B_^d(e,(c_0)), V(u)≥1/2μ_0 (u-e)^2 ; in addition, (c_0) is the largest positive quantity satisfying this property <ref>, see <ref>. §.§.§ Invaded critical point at the origin of Rd and upper bound on the solution For convenience, it will be assumed, until the end of <ref>, that e is equal to the origin of ^d. Let us recall the quantity , depending only on V, introduced in inequality <ref>. According to this inequality, up to changing the origin of times (and without loss of generality), it may be assumed that, for every nonnegative time t, sup_x∈u(x,t) + u_x(x,t)≤ . Likewise, it may be assumed that the conclusion <ref> of <ref> holds for every nonnegative time t (and not only for every positive time t). §.§ Invasion point For every nonnegative time t and every quantity δ in (0,(c_0)], let us consider the set δ(t) = {x∈: u(x,t)>δ} . It follows from the properties <ref> of u(·,0) and from <ref> that the quantity _c_0[u(·,t)] is negative, so that the set δ(t) is: * according to inequality <ref> of statement <ref> of <ref>, nonempty, * and according to <ref>, bounded from above. Let us call invasion point in the laboratory frame (at time t) the quantity x(t) defined as x(t) = sup((c_0)(t)) (according to the remark above this quantity is finite), and, for every real quantity c, let us call invasion point in the frame travelling at the speed c the quantity ξ_c(t) defined as ξ_c(t) = x(t) - ct , see <ref>. * The point labelled as “invasion point” in this article is often called “leading edge” in the literature, see for instance <cit.>. * In most places, c will be assumed to be positive; however allowing c to be nonpositive in the notation <ref> above is more convenient for the presentation of <ref> and <ref> in <ref>. According to this notation, v(ξ_c(t),t) = (c_0) and, for all ξ in [ξ_c(t),+∞), v(ξ,t)≤(c_0) , and both quantities x(t) and ξ_c(t) are lower semi-continuous (but not necessarily continuous) with respect to t. This lower semi-continuity will not be used as such (more quantitative estimates on these quantities will be obtained in the next subsec:invasion_speed). §.§ Invasion speed The content of this subsec:invasion_speed owe much to the arguments of <cit.>, <cit.>, and <cit.>. §.§.§ Lower bound on energy in a travelling frame Let c denote a quantity in the interval (c_0,], and let us consider the solution (ξ,t)↦ v(ξ,t) of system <ref> defined in <ref> (for the speed c). For every nonnegative time t, it follows from the second assertion of <ref> and from <ref> that the quantity E_c(t) defined as E_c(t) = _c[v(·,t)] is finite. In addition, it follows from inequality <ref> of <ref> that there exists a positive quantity α, depending on c and c_0 (only) such that, for every nonnegative time t, E_c(t)≥ - e^cξ_c(t)V_min/c + α∫_ξ_c(t)^+∞ e^cξ(v_ξ(ξ,t)^2 + v(ξ,t)^2 ) dξ . 
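To make the objects of this subsection concrete, here is a rough numerical experiment (an illustration only, in the scalar case d = 1, with the same model potential as in the earlier sketch): the radius (c_0) is computed from its definition, the parabolic system is integrated by explicit finite differences, and the invasion point x(t) together with the ratio x(t)/t is monitored. The choice of c_0, the initial condition, and all discretization parameters are assumptions made for this sketch; for this particular cubic nonlinearity the invasion is classically expected to be pushed, at a speed close to 2.12, larger than the maximal linear invasion speed 2 of the model.

import numpy as np

def V(u):
    return -0.5 * u**2 - u**3 + u**4      # same model potential as before

def dV(u):
    return -u - 3.0 * u**2 + 4.0 * u**3   # V'(u) = -u(1-u)(1+4u)

# Radius from the definition inf { |u| : V(u) < mu_0 |u|^2 / 2 }, with mu_0 = -c_0^2/4,
# for a speed c_0 chosen (for this model) slightly above the linear invasion speed 2.
c0 = 2.05
mu0 = -c0**2 / 4.0
uu = np.linspace(-2.0, 2.0, 400001)
bad = V(uu) < 0.5 * mu0 * uu**2
r_c0 = np.min(np.abs(uu[bad]))

# Explicit finite differences for u_t = u_xx - V'(u), steep initial condition near the left end.
L, N = 400.0, 8000
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.2 * dx**2
u = np.where(x < 20.0, 1.0, 0.0)

t, samples = 0.0, []
for step in range(int(round(60.0 / dt))):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u = u + dt * (lap - dV(u))
    u[0], u[-1] = u[1], u[-2]             # crude Neumann-type boundary conditions
    t += dt
    if step % 10000 == 0 and t > 10.0:
        above = np.nonzero(np.abs(u) > r_c0)[0]
        if above.size:
            samples.append((t, x[above[-1]]))   # invasion point x(t)

for t_s, x_s in samples[-3:]:
    print(t_s, x_s, x_s / t_s)            # x(t)/t should settle near the invasion speed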
§.§.§ Bounds on invasion point For every c in (c_0,[u]), there exists a positive quantity K such that, for every large enough positive time t, K + ct ≤x(t) . According to the definition <ref> of ξ_c(t), the intended inequality <ref> is equivalent to K ≤ξ_c(t) . Since c is assumed to be smaller than [u], it follows from the definition <ref> of [u] that there exists a nonnegative time t' such that E_c(t') is negative. Then, according to <ref> and inequality <ref>, for every time t greater than or equal to t', 0> E_c(t') ≥ E_c(t) ≥ - e^cξ_c(t)V_min/c , so that 0 < E_c(t')≤ e^cξ_c(t)V_min/c , and so that, if we consider the quantity K = 1/cln(E_c(t')c/V_min) , then inequality <ref> follows. Besides <ref> below, which provides more control on the asymptotic behaviour of the invasion point, the following elementary lemma will be convenient in some of the upcoming arguments. For every positive time T, the quantity sup_t∈[0,T]x(t) < +∞ . Let us proceed by contradiction and assume that, for some positive time T, the converse holds. Then there exists a sequence (t_n)_n∈ of nonnegative times, converging to some limit t_∞ in [0,T], such that x(t_n) goes to +∞ as n goes to +∞. Since the function x↦ u(x,t_∞) is in H^1_(,^d), it converges to 0_^d at the right end of space. Thus, since the solution varies continuously in with respect to time, u(x,t) is arbitrarily close to 0_^d if x is large enough positive and t is close enough to t_∞, a contradiction with the fact that u(x(t_n),t_n) equals (c_0) for all n in . §.§.§ Upper control on invasion point Let us consider the quantities c_- = lim inf_t→+∞x(t)/t and c_+ = lim sup_t→+∞x(t)/t . It follows from <ref> and from these expressions that [u] ≤c_- ≤c_+ ≤ +∞ . The following equality holds: [u] = c_- = c_+ . Let us proceed by contradiction and assume that [u] < c_+ , and let us consider a sequence (t_n)_n∈ of nonnegative times going to +∞, such that x(t_n)/t_n→c_+ . By compactness (<ref>), up to replacing the sequence (t_n)_n∈ by a subsequence, there exists an entire solution u_∞ of system <ref> such that, with the notation of <ref>, D^2,1u(x(t_n)+·,t_n+·)→ D^2,1u_∞ as n→+∞ , uniformly on every compact subset of ^2. Recall that, according to the definition <ref> of x(·), u(x(t_n),t_n) equals (c_0) (for every nonnegative integer n), so that the same is true for u_∞(0,0) (this property will be called upon at the end of the proof). Let us pick a quantity c in the interval ([u],min(c_+,)). Since c is less than c_+, ξ_c(t_n)→+∞ , and since c is greater than [u] but less than , the quantity E_c(t) is nonnegative for every nonnegative time t. It follows that, for every positive quantity T, the nonnegative quantity E_c(t_n-T)-E_c(t_n+T) , which, according to equality <ref>, equals ∫_t_n-T^t_n+TD_c(t)dt , goes to 0 as n goes to +∞. Let us consider the function v(ξ,t) defined as in <ref>. For every positive quantity L, the substitutions ξ = x-ct and t = t_n + s and x = x(t_n) + y lead to: ∫_t_n-T^t_n+TD_c(t)dt = ∫_t_n-T^t_n+T(∫_e^cξv_t(ξ,t)^2 dξ) dt = ∫_t_n-T^t_n+T(∫_e^c(x-ct)(u_t+cu_x)^2(x,t) dx) dt = ∫_-T^Te^c(x(t_n)-ct_n-cs)(∫_e^cy(u_t+cu_x)^2(x(t_n)+y,t_n+s) dy) ds ≥∫_-T^Te^c(x(t_n)-ct_n-cs)(∫_-L^Le^cy(u_t+cu_x)^2(x(t_n)+y,t_n+s) dy) ds . Let us consider the integrals _n = ∫_-T^T(∫_-L^L(u_t+cu_x)^2(x(t_n)+y,t_n+s) dy) ds and _∞ = ∫_-T^T(∫_-L^L(∂_t u_∞+c∂_x u_∞)^2(y,s) dy) ds .
It follows from inequality <ref> that ∫_t_n-T^t_n+TD_c(t)dt ≥ e^c(x(t_n)-ct_n-cT-L)_n = e^c(ξ(t_n)-cT-L)_n , and according to the limit <ref> the exponential factor of _n on the right-hand side of this inequality goes to +∞ as n goes to +∞. Thus the nonnegative quantity _n must go to 0 as n goes to +∞. On the other hand, according to the limits <ref>, _n goes to _∞ as n goes to +∞, so that _∞ must be equal to 0. Since this holds for all positive quantities T and L, the function ∂_t u_∞+c∂_x u_∞ must be identically equal to 0_^d on ^2. At this stage, the key observation is that, while the entire solution u_∞ defined by the limits <ref> does not depend on c, the previous conclusion must hold not only for one particular value of c, but for every c in the interval (c_-,c_+). It thus follows that both functions ∂_t u_∞ and ∂_x u_∞ must actually be identically equal to 0_^d on ^2. In other words, u_∞ must be identically equal to some point of ^d, which, according to the remark made at the beginning of the proof, must be at distance (c_0) from 0_^d. Besides, it follows from inequality <ref> that, for every nonnegative integer n, E_c(0) ≥ E_c(t_n) ≥ -e^cξ_c(t_n)V_min/c + α∫_ξ_c(t_n)^+∞ e^cξ(v_ξ(ξ,t_n)^2+v(ξ,t_n)^2) dξ . Let us consider the integrals _n = ∫_ξ_c(t_n)^+∞ e^c(ξ - ξ_c(t_n))(v_ξ(ξ,t_n)^2+v(ξ,t_n)^2) dξ and _∞ = ∫_0^+∞ e^c x(∂_x u_∞(x,0)^2 + u_∞(x,0)^2) dx . It follows from inequality <ref> that 1/α(e^-cξ_c(t_n) E_c(0) + V_min/c)≥_n . On the other hand, _n = ∫_x(t_n)^+∞e^c(x-x(t_n))(u_x(x,t_n)^2 + u(x,t_n)^2) dx = ∫_0^+∞ e^cy(u_x^2+u^2)(x(t_n)+y,t_n) dy , so that, in view of the limits <ref> and according to Fatou Lemma, _∞≤lim inf_n→+∞_n , and since according to the limit <ref> ξ_c(t_n) goes to +∞ as n goes to +∞, it follows from inequality <ref> that lim sup_n→+∞_n < +∞ , so that the integral _∞ is finite, a contradiction with the fact that u_∞(·,0) must be identically equal to (c_0). <Ref> is proved. In the following, the positive quantity equal to [u] and to c_- and to c_+ will simply be denoted as c; with symbols, c = [u] = c_- = c_+ . §.§ Scheme of the end of the proof To complete the proof, the crux is to prove that the dissipation goes to 0, on every compact interval around the invasion point, in the frame travelling at the speed c (prop:relaxation); from this stage, the convergence readily follows (subsec:convergence). If the converse holds (if that dissipation does not go to 0), then there exists a sequence of times t_n, going to +∞, at which “some” dissipation occurs. If, up to replacing the sequence (t_n) by a subsequence, the quantities ξ_c(t_n) are bounded from below, then reaching a contradiction is rather straightforward (see <cit.>): for c slightly greater than c, the energy E_c(t) remains bounded from below in spite of an arbitrarily large “amount” of dissipation, which is impossible. Unfortunately, if ξ_c(t_n) goes to -∞ as n goes to +∞, this argument fails: in a frame travelling at a constant speed, adjusted so that the invasion point is (say) at the origin ξ=0 at two times t_n and t_n+p (for some positive integer p), the energy is indeed bounded from below at time t_n+p, but, while the “dissipation bursts” at times t_n and t_n+p contribute to a non-negligible amount of dissipation, the other dissipation bursts occurring in between (at times t_n+q for the integers q that are positive and smaller than p) may be very small, since they may occur far to the left of the origin of this travelling frame, <cit.>. 
To circumvent this difficulty, the strategy proposed by Gallay and Joly in <cit.> is to follow the invasion point between consecutive dissipation bursts: the benefit of this setting is that each dissipation burst contributes significantly to the decrease of the energy, and that the energy remains bounded from below, but the price to be paid is that the speed of the travelling frame must be adjusted between each dissipation burst. These speed changes induce changes in the value of the energy in the corresponding travelling frames, and these latter changes must be, asymptotically, arbitrarily small if the intended contradiction is to be reached. The crucial step is thus to obtain some control on the variation of the energy with respect to the speed (cor:Lipschitz_cont_energy). This control will in turn follow from the key observation, made for the first time by Gallay and Joly in <cit.>, that, for some speed c^* greater than c but close enough to c, the energy at the invasion point in a frame travelling at the speed c^* (<ref>) remains bounded from above (prop:upper_bound_energy_at_invasion_point). Finally, the proof of this <ref> follows from two arguments: * this energy (at the invasion point) has (due to Poincaré inequality <ref>) the same magnitude as its “kinetic” part (lem:frame_Ec_with_Fc), and, due to the parabolic system <ref> satisfied by the solution, this kinetic part decreases with time, up to some “pollution” issued from the half space to the left of the invasion point, lem:lin_decrease_up_to_pollution_Fc; * the “drift to the left” of the invasion point (which induces an increase of this energy at the invasion point as soon as it occurs, equality def_Ec_of_xi_and_t), is controlled, cor:control_moves_to_left_invasion_point_trav_frame. In order to state the “linear decrease up to pollution” <ref> mentioned above, another invasion point is introduced in the next subsec:invasion_point_defined_by_smaller_radius. §.§ Invasion point defined by a smaller radius Observe that the set {w : w∈^d and σ(D^2V(w))⊄[μ_0,+∞) } is nonempty; indeed, if this set was empty, then we would have, for every w in ^d, V(w)≥1/2μ_0 w^2 , a contradiction with the fact that μ_0 is greater than the quantity (inequalities <ref>). Let us denote by (c_0) the infimum of this set <ref>. Since σ(D^2V(0_^d)) is included in [μ_1,+∞), this quantity (c_0) must be positive. In addition, for every w in B_^d(0_^d,(c_0)), σ(D^2V(w))⊂[μ_0,+∞) , see <ref>, and inequality <ref> holds. According to the definition of (c_0) (in <ref>), it follows that 0 < (c_0) ≤(c_0) . For every nonnegative time t, let us consider the set (c_0)(t) (see definition <ref>). It follows from the second inequality of <ref> that this set contains the set (c_0)(t); it is therefore nonempty, and, for the same reason as for the set (c_0)(t) (namely, according to <ref>), it is bounded from above; let x̂(t) denote its supremum and, as in <ref>, let us write (for a real quantity c) ξ̂_c(t) = x̂(t) - ct , see <ref>. According to these definitions, x(t)≤x̂(t)<+∞ and ξ_c(t)≤ξ̂_c(t)<+∞ . This new “invasion point” x̂(t) (and ξ̂_c(t) in a travelling frame), defined by the smaller radius (c_0), will be used in the next two subsec:control_energy_to_the_right_of_invasion_point. The next lemma shows that this new invasion point and the previous one do not behave much differently. The (nonnegative) quantity x̂(t)-x(t) is bounded, uniformly with respect to t in [0,+∞). In particular, x̂(t)/t→c as t→+∞ . 
Take a speed c in (c_0,] and let us define the function v(ξ,t) as in <ref>. For every nonnegative time t (omitting the argument (ξ,t) of v and v_ξ), E_c(t) = ∫_-∞^ξ_c(t)e^cξ(1/2v_ξ^2+V(v)) dξ + ∫_ξ_c(t)^+∞e^cξ(1/2v_ξ^2+V(v)) dξ ≥ -/ce^cξ_c(t) + ∫_ξ_c(t)^ξ̂_c(t)e^cξ(1/2v_ξ^2+1/2μ_0 v^2) dξ + ∫_ξ̂_c(t)^+∞e^cξ(1/2v_ξ^2+1/2μ_0 v^2) dξ , so that, applying Poincaré inequality <ref> with λ equals c_0/2 on the interval [ξ_c(t),ξ̂_c(t)] and Poincaré inequality <ref> (that is, <ref> with λ equals c/2) on the interval [ξ̂_c(t),+∞), it follows that E_c(t) ≥ -/ce^cξ_c(t) + c_0/4(c_0)^2 e^cξ_c(t) - c_0/4(c_0)^2 e^cξ̂_c(t) + 1/2(c_0(2c-c_0)/4+μ_0) ∫_ξ_c(t)^ξ̂_c(t)e^cξ v^2 dξ + c/4(c_0)^2 e^cξ̂_c(t) . Since 2c-c_0 is greater than c_0 the factor of the remaining integral on the right-hand side of this inequality is nonnegative, so that E_c(t) ≥ -/ce^cξ_c(t) + c_0/4(c_0)^2 e^cξ_c(t) + c-c_0/4(c_0)^2 e^cξ̂_c(t) Let us assume that t is large enough (positive) so that x(t)/t is in (c_0,] and let us choose c equal to x(t)/t. It follows that ξ_c(t) = 0 thus ξ̂_c(t) = ξ̂_c(t) - ξ_c(t) = x̂(t) - x(t) , so that the previous inequality reads E_c(t) ≥ -/c + c_0/4(c_0)^2 + c-c_0/4(c_0)^2 e^c(x̂(t) - x(t)) . Equivalently, x̂(t) - x(t) ≤1/cln(4/(c-c_0)(c_0)^2(E_c(t) + /c - c_0/4(c_0)^2 )) , and the argument of the logarithm must be positive. Since E_c(t) is less than or equal to E_c(0) and since x(t)/t goes to c as t goes to +∞, it follows that lim sup_t→+∞x̂(t) - x(t) ≤1/cln(4/(c-c_0)(c_0)^2(E_c(0) + /c)) < +∞ . Thus there exists a positive time T such that the quantity x̂(t) - x(t) is bounded, uniformly with respect to t in [T,+∞). On the other hand, the function t↦x̂(t) is bounded from above on the bounded interval [0,T] (the reason is the same as for t↦x(t), see the proof of <ref>). Since according to <ref> the function t↦x(t) is bounded from below on [0,T], the same is true for the difference x̂(t) - x(t), and the conclusion follows. §.§ Delayed control of the energy to the right of the invasion point For c in (0,], let us consider the function t↦ F_c(t), defined on [0,+∞) as F_c(t) = ∫_ e^cξ1/2 v_ξ(ξ,t)^2 dξ , where the function v(ξ,t) is defined as in <ref>. According to the second assumption of <ref> and to <ref>, the function x↦ u(x,t) (and thus the function ξ↦ v(ξ,t)) is in H^1_c(,^d) for every nonnegative time t, so that F_c(t) is well defined (and finite). The reason for introducing this function F_c(·) is that it satisfies the “linear decrease up to pollution” property <ref> stated by the next <ref>, which as a consequence will provide some control over the energy function E_c(t), as stated in <ref> below. It would be more straightforward if this “linear decrease up to pollution” held directly for E_c(t), as it happens when the invaded equilibrium is stable, see <cit.>. Unfortunately this does not seem to hold in the present context where the invaded equilibrium is not stable, see the remark in the proof of <ref> below. Introducing the function F_c(t) is thus a way to circumvent this difficulty. Let us recall that, according to inequality <ref>, the speed c_0 is smaller than the invasion speed c, and let us consider quantities c_0' and μ_0' satisfying: c_0' ∈(c_0,c) and μ_0' = -(c_0')^2/4 , so that μ_0' < μ_0 and μ_0' > μ_0 , see fig:speeds. The value of c_0' does not matter much, provided that it is in the interval (c_0,c); for instance c_0' can be chosen as the mean of c_0 and c. 
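As a further numerical aside (same illustrative scalar model as in the previous sketches, with all choices of parameters being assumptions of the illustration), the radius defined from the lower quadratic bound on V and the smaller radius defined from the spectrum of D^2V can both be computed directly from their definitions, since in the scalar case the spectrum of D^2V(u) reduces to the single value V''(u). The sketch only illustrates the inequality asserting that the Hessian radius is positive and not larger than the quadratic-bound radius.

import numpy as np

def V(u):
    return -0.5 * u**2 - u**3 + u**4

def d2V(u):
    return -1.0 - 6.0 * u + 12.0 * u**2    # scalar case: sigma(D^2 V(u)) = { V''(u) }

c0 = 2.05                                   # an admissible speed for this model (assumption)
mu0 = -c0**2 / 4.0

uu = np.linspace(-2.0, 2.0, 800001)
uu = uu[np.abs(uu) > 1e-9]                  # exclude the critical point u = 0 itself

# radius from the quadratic lower bound:  inf { |u| : V(u) < mu_0 |u|^2 / 2 }
bad_energy = V(uu) < 0.5 * mu0 * uu**2
# smaller radius from the Hessian:        inf { |u| : V''(u) < mu_0 }
bad_hessian = d2V(uu) < mu0

r_energy = np.min(np.abs(uu[bad_energy])) if bad_energy.any() else np.inf
r_hessian = np.min(np.abs(uu[bad_hessian])) if bad_hessian.any() else np.inf

print(mu0, r_hessian, r_energy)             # expect 0 < r_hessian <= r_energy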
The following lemma is the “parabolic” analogue of <cit.>, and of <cit.>, and the “unstable invaded equilibrium” version of <cit.>. There exist positive quantities ν and K_F (depending only on V and c_0 and c_0') such that, for every c in [c_0',] and for every nonnegative time t, the following inequality holds: F_c'(t) ≤ -ν F_c(t) + K_F e^cξ̂_c(t) . It follows from the expression of F_c(t) that, for every nonnegative time t (omitting the arguments (ξ,t) of v and its partial derivatives and proceeding as in <ref>, and according to the assumptions made in <ref> on the origin of times), F_c'(t) = ∫_ e^cξ v_ξ· v_ξ t dξ = -∫_ e^cξ (cv_ξ + v_ξξ) · v_t dξ = -∫_ e^cξ (cv_ξ + v_ξξ) ·(cv_ξ + v_ξξ - ∇ V(v)) dξ = -∫_ e^cξ((cv_ξ + v_ξξ)·(cv_ξ - ∇ V(v) ) + (c v_ξ + v_ξξ)· v_ξξ) dξ = -∫_ e^cξ(v_ξ·(-cv_ξξ+D^2V(v)· v_ξ) + (c v_ξ + v_ξξ)· v_ξξ) dξ = -∫_ e^cξ(v_ξξ^2 + D^2V(v)· v_ξ· v_ξ) dξ . According to <ref>, the function v(·,t) is in H_c^2(,^d), so that Poincaré inequality <ref> applies to the function v_ξ(·,t), leading to F_c'(t) ≤ -∫_ e^cξ(c^2/4v_ξ^2 + D^2V(v)· v_ξ· v_ξ) dξ ≤ -∫_ e^cξ(μ_0'v_ξ^2 + D^2V(v)· v_ξ· v_ξ) dξ . Observe that this last expression is easier to handle than expression <ref> of _c[·] obtained in <ref>: here the “bad” term D^2V(v)· v_ξ· v_ξ (“bad” since it may be as negative as roughly μ_1 v_ξ^2 when v is small) is, fortunately, not strengthened by a “2” factor as in the expression of _c[·]. According to the property <ref> defining (c_0), it follows from the previous inequality that F_c'(t) ≤ - ∫_-∞^ξ̂_c(t) e^cξ(μ_0'v_ξ^2 + D^2V(v)· v_ξ· v_ξ) dξ - ∫_ξ̂_c(t)^+∞e^cξ(μ_0' -μ_0)v_ξ^2 dξ , so that, for every positive quantity ν, F_c'(t) + ν F_c(t) ≤ - ∫_-∞^ξ̂_c(t)e^cξ((μ_0' - ν/2)v_ξ^2 + D^2V(v)· v_ξ· v_ξ) dξ - ∫_ξ̂_c(t)^+∞e^cξ(μ_0' -μ_0 - ν/2)v_ξ^2 dξ . Thus, if the (positive) quantity ν is chosen as ν = 2(μ_0' -μ_0) , so that ν/2 = μ_0' -μ_0 < μ_0' , then it follows from inequality <ref> that that F_c'(t) + ν F_c(t) ≤ - ∫_-∞^ξ̂_c(t)e^cξ D^2V(v)· v_ξ· v_ξ dξ , and inequality <ref> follows from the bound <ref> on the solution. There exist positive quantities C_1 and C_2 (depending only on V and c_0 and c_0') such that, for every c in [c_0',] and for every nonnegative time t, the following inequality holds: 1/C_1F_c(t) - C_2 e^cξ̂_c(t)≤ E_c(t) ≤ C_1 F_c(t) + C_2 e^cξ̂_c(t) . Let ε denote quantity in (0,1]. For every nonnegative time t (omitting the argument (ξ,t) of v and v_ξ in the integrands and using the Poincaré inequality <ref>), E_c(t) - ε F_c(t) = ∫_-∞^ξ̂_c(t)(1/2(1-ε)v_ξ^2 + V(v)) dξ + ∫_ξ̂_c(t)^+∞(1/2(1-ε)v_ξ^2 + V(v)) dξ ≥ - /c e^cξ̂_c(t) + 1/2∫_ξ̂_c(t)^+∞((1-ε)v_ξ^2 -μ_0 v^2 ) dξ ≥ - /c e^cξ̂_c(t) + 1/2((1-ε)μ_0'-μ_0)∫_ξ̂_c(t)^+∞v^2 dξ , so that if ε is chosen as ε = 1 - μ_0/μ_0' , then the factor of the integral of this last expression vanishes. On the other hand, introducing the quantities μ_0,max and defined as μ_0,max = max_w∈B_^d(0_^d,(c_0))maxσ(D^2V(w)) and = max_w∈^d, w≤ V(w) , it follows from <ref> that E_c(t) ≤/c e^cξ̂_c(t) + ∫_-∞^ξ̂_c(t)1/2v_ξ^2 dξ + ∫_ξ̂_c(t)^+∞(1/2v_ξ^2 + 1/2μ_0,max v^2 ) dξ ≤/c e^cξ̂_c(t) + ∫_-∞^ξ̂_c(t)1/2v_ξ^2 dξ + 1/2(1 + max(μ_0,max,0)4/c^2)∫_ξ̂_c(t)^+∞v_ξ^2 dξ ≤/c e^cξ̂_c(t) + 1/2( 1+ max(μ_0,max,0)/μ_0')F_c(t) . Finally, if C_1 and C_2 are chosen as C_1  = max(μ_0'/μ_0'-μ_0,1/2( 1+ max(μ_0,max,0)/μ_0')) , and C_2 = max(/c_0',max(,0)/c_0') (these quantities depend only on V and c_0 and c_0'), then inequality <ref> holds. The following corollary of <ref> is the analogue of <cit.> and <cit.>. 
There exist positive quantities K and K' (depending only on V and c_0 and c_0') such that, for every c in [c_0',] and for all nonnegative times t and T, the following inequality holds: E_c(t+T) ≤ K e^-ν TE_c(t) + K' e^cξ̂_c,sup(t,T) , where ξ̂_c,sup(t,T) = sup_t≤ s≤ t+Tξ̂_c(s) . For all nonnegative times t and T, with the notation ξ̂_c,sup(t,T) of <ref>, it follows from <ref> that F_c(t+T) ≤ e^-ν TF_c(t) + K_F/ν e^cξ̂_c,sup(t,T) , so that, according to inequality <ref> of <ref>, E_c(t+T) ≤ C_1 F_c(t+T) + C_2 e^cξ̂_c,sup(t,T) ≤ C_1 e^-ν TF_c(t) + (C_1 K_F/ν+ C_2)e^cξ̂_c,sup(t,T) ≤ C_1^2 e^-ν TE_c(t) + (C_1^2C_2 + C_1 K_F/ν + C_2)e^cξ̂_c,sup(t,T) , and so that, choosing the quantities K and K' as K = C_1^2 and K' = C_1^2C_2 + C_1 K_F/ν + C_2 (these quantities depend only on V and c_0 and c_0'), inequality <ref> follows. §.§ Control on the left drift of the invasion point The following notation is the "parabolic" analogue of the one introduced in <cit.> and <cit.>. For every speed c in (0,], every nonnegative time t and every real quantity ξ, if v(·,·) denotes the function introduced in <ref>, let us consider the quantities E_c(ξ,t) = _c[v(ξ+·,t)] = ∫_e^cζ(1/2v_ξ(ξ+ζ,t)^2 + V(v(ξ+ζ,t))) dζ and D_c(ξ,t) = ∫_e^cζ v_t(ξ+ζ,t)^2 dζ . These quantities differ from the energy E_c(t) and the dissipation D_c(t) defined in <ref> only by the fact that their exponential weight is normalized so that it takes the value 1 at the value ξ (rather than 0) of the travelling abscissa, and they are related to these initial energy and dissipation by: E_c(ξ,t) = e^-cξ E_c(0,t) = e^-cξ E_c(t) and D_c(ξ,t) = e^-cξ D_c(0,t) = e^-cξ D_c(t) , so that, according to <ref>, ∂_t E_c(ξ,t) = - D_c(ξ,t) . In addition, the two-variable function (ξ,t)↦ E_c(ξ,t) is: * non-increasing with respect to the time variable t, * and exponentially decreasing, in size, with respect to the space variable ξ, see <ref>. The following lemma is the analogue of <cit.>. For every speed c in (-∞,c), inf_0≤ t≤ t'ξ̂_c(t') - ξ̂_c(t) > -∞ . If inequality <ref> holds for some speed c, then it also holds for every speed in (-∞,c]; indeed, for such speeds c and and for all times t and t' satisfying 0≤ t≤ t', ξ̂_(t') - ξ̂_(t) = x̂(t') - x̂(t) - (t'-t) = ξ̂_c(t') - ξ̂_c(t) + (c-)(t'-t) ≥ξ̂_c(t') - ξ̂_c(t) . Thus, it is sufficient to prove inequality <ref> for c almost equal to c (unsurprisingly, it is for such speeds that this conclusion will be called upon later), and in particular for c greater than or equal to c_0. Let us proceed by contradiction and assume that there exists some speed c in [c_0,c) and sequences of times (t_n)_n∈ and (t_n')_n∈ such that 0≤ t_n≤ t_n' for every nonnegative integer n and ξ̂_c(t_n')-ξ̂_c(t_n)→ -∞ as n→+∞ , see <ref>. According to <ref>, the limit <ref> still holds if ξ̂_c(·) is replaced with ξ_c(·), namely: ξ_c(t_n)-ξ_c(t_n')→ +∞ as n→+∞ . Now it follows from the inequality <ref>, from the limit <ref>, and from <ref> that t_n' must go to +∞ as n goes to +∞. On the other hand, since c is less than c, ξ_c(t) goes to +∞ as t goes to +∞, and it follows that ξ_c(t_n') goes to +∞ as n goes to +∞ (and the same holds for ξ̂_c(t_n')). Thus ξ_c(t_n) also goes to +∞ as n goes to +∞ (and the same holds for ξ̂_c(t_n)), and again due to <ref>, it follows that t_n goes to +∞ as n goes to +∞. In short, the three quantities t_n and ξ̂_c(t_n) and ξ̂_c(t_n)-ξ̂_c(t_n') go to +∞ as n goes to +∞, see <ref>.
By compactness (<ref>), up to replacing the sequence (t_n)_n∈ by a subsequence, there exists an entire solution u_∞ of system <ref> such that, with the notation of <ref>, D^2,1u(x̂(t_n)+·,t_n+·)→ D^2,1u_∞ as n→+∞ , uniformly on every compact subset of ^2. Since E_c(ξ̂_c(t_n),0) = exp(-ξ̂_c(t_n)) E_c(0,0) , it follows that E_c(ξ̂_c(t_n),0)→ 0 as n→+∞ , see <ref>; and since, according to <ref>, the function t↦ E_c(ξ̂_c(t_n),t) is non-increasing, it follows that lim sup_n→+∞ E_c(ξ̂_c(t_n),t_n) ≤ 0 , see <ref>. On the other hand, it follows from inequality <ref> (which holds since c was assumed to be in [c_0,c)) that E_c(ξ̂_c(t_n),t_n') = e^-cξ̂_c(t_n)E_c(0,t_n') ≥ -e^c(ξ_c(t'_n)-ξ̂_c(t_n))/c ≥ -e^c(ξ̂_c(t'_n)-ξ̂_c(t_n))/c , so that, according to the limit <ref>, lim inf_n→+∞ E_c(ξ̂_c(t_n),t_n') ≥ 0 , see <ref>. It follows that the (nonnegative) quantity E_c(ξ̂_c(t_n),t_n) - E_c(ξ̂_c(t_n),t_n') , which, according to <ref>, equals ∫_t_n^t_n' D_c(ξ̂_c(t_n),t) dt , must go to 0 as n goes to +∞, see <ref>. Proceeding as in the proof of <ref>, it follows that the function ∂_t u_∞+c∂_x u_∞ must be identically equal to 0 on ×[0,+∞). The key observation is that, for every speed c' in (c,c), the following limits “still” hold: ξ̂_c'(t_n)→ +∞ and ξ̂_c'(t_n)-ξ̂_c'(t_n')→ +∞ as n→+∞ , so that, repeating the same argument, the function ∂_t u_∞+c'∂_x u_∞ must (still) be identically equal to 0 on ×[0,+∞). It thus follows that both functions ∂_t u_∞ and ∂_x u_∞ must actually vanish on ×[0,+∞), and the same contradiction as in the proof of <ref> follows. <Ref> is proved. The following corollary is the analogue of <cit.>. For every positive quantity γ, there exists a positive quantity (γ) such that, for every speed c in (-∞,c+γ] and for all times t and t' satisfying 0≤ t≤ t', ξ̂_c(t')≥ξ̂_c(t) - 2γ(t'-t) - (γ) . For every positive quantity γ and for all times t and t' satisfying 0≤ t≤ t', arguing as in <ref>, ξ̂_c(t') - ξ̂_c(t) + 2γ(t'-t) = ξ̂_c-2γ(t') - ξ̂_c-2γ(t) ≥ξ̂_c-γ(t') - ξ̂_c-γ(t) , and the intended conclusion follows from the conclusion of <ref> for the speed c-γ. §.§ Lipschitz continuity with respect to speed of the energy at the invasion point §.§.§ The energy at the invasion point For every speed c in (0,] and for every nonnegative time t, let us call energy at the invasion point (at time t, in a frame travelling at the speed c) the quantity Ê_c(t) defined as Ê_c(t) = E_c(ξ̂_c(t),t) = e^-cξ̂_c(t) E_c(0,t) = e^-cξ̂_c(t) E_c(t) . Among the family E_c(ξ,t) of energies, this specific energy Ê_c(t) is characterized by an exponential weight which is normalized so that it takes the value 1 at the invasion point ξ̂_c(t). The aim of this subsec:Lipschitz_continuity is to prove <ref> in <ref> below, which states that this “energy at the invasion point” is Lipschitz continuous with respect to the speed c, uniformly in time. This continuity property is key for the relaxation scheme set up in the next subsec:relaxation. The main step leading to <ref> is <ref>, which states that the energy at the invasion point Ê_c^*(t) is bounded from above, uniformly with respect to t, for a speed c^* slightly greater than c to be chosen below. §.§.§ Choice of a speed slightly greater than the invasion speed Let γ denote a (small) positive quantity to be chosen below, and let us consider the speed c^* defined as c^* = c+γ , see <ref>. 
In order to obtain an upper bound on Ê_c^*(t), the rate ν involved in the exponential decrease of F_c^*(t) (<ref>) and thus (on the long term) of E_c^*(t) (<ref>) must balance the (possible) increase of Ê_c^*(t) due to the (possible) drift to the left of the invasion point ξ̂_c^*(t); according to <ref> this drift to the left does not occur, on the long term, at a speed larger than 2γ provided that c is less than or equal to c^*, and a drift to the left at the speed 2γ induces an increase rate equal to 2γ c^*. This leads us to introduce the positive quantity γ defined as γ = min(-c+√(c^2+ν)/2,-c) . This choice ensures that γ>0 and c<c^* = c+γ≤ and 2γ c^* ≤ν/2 , see <ref>. §.§.§ Factor two shrinkage of the energy at the invasion point According to the last inequality of <ref>, the combination of conclusion <ref> of <ref> and conclusion <ref> of <ref> show that, on the long run and as long as the quantity Ê_c^*(t) is large positive, this quantity decreases at an exponential rate which is at least equal to ν/2, even if the possible drift to the left of the invasion point is taken into account, and provided that the invasion point ξ̂_c^*(t) is not too large positive. Let us consider the quantity (γ), provided by <ref> for the choice <ref> of γ, and let us recall the quantities K and K' involved in the conclusion <ref> of <ref>. In order to formalize these observations in the next <ref>, let us introduce the two parameters and defined as = 2/ν(ln(4K) + c^*(γ)) and = 8K' e^c^*(2γ + (γ)) , so that K e^-ν/2 + c^* (γ)≤1/4 and 2K' e^c^*(2γ + (γ))/≤1/4 . The following lemma is the analogue of the claim in the proof of <cit.>. For every nonnegative time t, if Ê_c^*(t) ≥ , then there exists a time (t) in the interval (t,t+] such that Ê_c^*((t)) ≤1/2Ê_c^*(t) . Let us distinguish two cases, see <ref>. *Case 1. There exists a time t' in the interval (t,t+] such that ξ̂_c^*(t') ≥ξ̂_c^*(t) + ln(2)/c^* . In this case, let us choose (t) = t'; the intended conclusion <ref> follows from the expression <ref> of Ê_c(·). *Case 2. For every time t” in the interval (t,t+], ξ̂_c^*(t”) ≤ξ̂_c^*(t) + ln(2)/c^* . In this case, let us choose (t) = t +. It follows from the expression <ref> of Ê_c(·) and conclusion <ref> of <ref> that Ê_c^*((t)) = e^-c^*ξ̂_c^*((t)) E_c^*(t+) ≤ e^-c^*ξ̂_c^*((t))( K e^-ν E_c^*(t) + K' e^c^*(ξ̂_c^*(t) + ln(2)/c^*)) ≤ e^-c^*(ξ̂_c^*((t))-ξ̂_c^*(t))( K e^-νÊ_c^*(t) + 2K' ) , so that, according to conclusion <ref> of <ref> and the last inequality of <ref>, Ê_c^*((t))/Ê_c^*(t) ≤ e^c^*(2γ + (γ))( K e^-ν + 2K'/) ≤ K e^-ν/2 + c^* (γ) + 2K' e^c^*(2γ + (γ))/ , and it follows from the inequalities <ref> satisfied by and that the right-hand side of this last inequality is smaller than or equal to 1/2, showing that the intended inequality <ref> holds. §.§.§ Uniform upper bound on the energy at the invasion point The following proposition is the analogue of <cit.> and <cit.>. The quantity Ê_c^*(t) is bounded from above, uniformly with respect to t in [0,+∞). Let us consider the sequence (t_n)_n∈ defined as follows: t_0=0, and, for every nonnegative time n, t_n+1 = { t_n + 1 if Ê_c^*(t_n) < , (t_n) if Ê_c^*(t_n)≥ , . where (·) is the time provided by <ref>. The sequence (t_n)_n∈ is thus strictly increasing, and in the second of those cases it follows from the conclusion <ref> of <ref> that Ê_c^*(t_n+1) is less than or equal to Ê_c^*(t_n)/2; as a consequence, the first case where t_n+1 equals t_n+1 must occur for an infinite number of nonnegative integers n, so that t_n goes to +∞ as n goes to +∞. 
Now, it follows from the expression <ref> of Ê_c(·), from the non-increase of energy with respect to time (<ref>), and from the conclusion <ref> of <ref> that, for every nonnegative integer n and for every time t in the interval [t_n,t_n+1], Ê_c^*(t) = e^c^*(ξ̂_c^*(t_n)-ξ̂_c^*(t))Ê_c^*(t_n) ≤ e^c^*(ξ̂_c^*(t_n)-ξ̂_c^*(t))max(Ê_c^*(t_n),0) ≤ e^c^*(2γ (t-t_n) + (γ))max(Ê_c^*(t_n),0) . It follows that, for every nonnegative integer n, Ê_c^*(t_n) ≤max(e^c^*(2γ+(γ)),Ê_c^*(0)) , and that, for every nonnegative time t, Ê_c^*(t) ≤ e^c^*(2γ+(γ))max(e^c^*(2γ+(γ)),Ê_c^*(0)) , which completes the proof. §.§.§ Uniform bound on the H1c*-norm with weight normalized at the invasion point The following corollary is the analogue of <cit.>. The quantity ∫_0^+∞e^c^*y(u^2 + u_x^2)(x̂(t)+ y,t) dy is bounded, uniformly with respect to t in [0,+∞). For every nonnegative time t, it follows from the definition <ref> of E_c(·) that Ê_c^*(t) = ∫_ e^c^*y(1/2u_x^2 + V(u))(x̂(t)+y,t) dy , so that, according to the definitions of (c_0) and x̂(·) (see <ref>), Ê_c^*(t) +/c^*≥1/2∫_0^+∞ e^c^*y(u_x^2 - μ_0u^2)(x̂(t)+y,t) dy . Let us consider the quantity α defined as α = μ_0'+μ_0/2μ_0' , so that both quantities 1-α = μ_0'-μ_0/2μ_0' and αμ_0' - μ_0 = μ_0'-μ_0/2 are positive. It follows from the previous inequality that 2(Ê_c^*(t) + /c^*) ≥∫_0^+∞ e^c^*y((1-α)u_x^2 + α u_x^2 - μ_0 u^2)(x̂(t)+y,t) dy , so that, applying the Poincaré inequality <ref> (with c^* instead of c) to the term α u_x^2 in the integrand and using the inequality (c^*)^2/4>μ_0 it follows that, denoting by β the minimum of the two (positive) quantities <ref>, 2(Ê_c^*(t) + /c^*) ≥β∫_0^+∞ e^c^*y(u_x^2 + u^2)(x̂(t)+y,t) dy , and, in view of <ref>, the intended conclusion <ref> follows. §.§.§ Lipschitz continuity with respect to speed of the energy at the invasion point The next <ref> is the analogue of <cit.>. There exist a (finite, positive) quantity such that, for all speeds c_1 and c_2 in [c_0,c^*] and every time t in [0,+∞), Ê_c_1(t) - Ê_c_2(t)≤c_1 - c_2 . For all speeds c_1 and c_2 in [c_0,c^*] and every time t in [0,+∞), Ê_c_1(t) - Ê_c_2(t) = ∫_(e^c_1 y-e^c_2 y)(1/2u_x(x̂(t)+y,t)^2 + V(u(x̂(t)+y,t))) dy . Besides, for every y in , e^c_1 y-e^c_2 y≤{c_1-c_2 e^max(c_1,c_2)y≤c_1-c_2 e^c^*y if y≥ 0 , c_1-c_2 e^min(c_1,c_2)y≤c_1-c_2 e^c_0 y if y≤ 0 . . It follows that, if c_1 and c_2 differ, Ê_c_1(t) - Ê_c_2(t)/c_1-c_2 ≤∫_-∞^0 e^c_0 y1/2u_x(x̂(t)+y,t)^2 + V(u(x̂(t)+y,t)) dy + ∫_0^+∞ e^c^*y1/2u_x(x̂(t)+y,t)^2 + V(u(x̂(t)+y,t)) dy . It follows from the bound <ref> on the solution that the first among the two integrals of the right-hand side of this inequality is bounded (uniformly with respect to t in [0,+∞)), and it follows from inequality <ref> of <ref> that the same is true for the second integral. Inequality <ref> (for a large enough positive quantity ) is proved. §.§ Relaxation The aim of this subsec:relaxation is to prove the following proposition, which is the analogue of <cit.> and <cit.>. For every positive quantity T, the following limit holds: ∫_t-T^t D_c(ξ̂_c(t),s) ds → 0 as t→+∞ . Let us proceed by contradiction and assume that the converse holds. In this case, there exists a positive quantity and a sequence (t_n)_n∈ of times, going to +∞ such that, for every n in , t_n-T is nonnegative and ∫_t_n-T^t_n D_c(ξ̂_c(t_n),s) ds ≥ . Up to replacing the sequence (t_n)_n∈ by a subsequence, let us assume that, for every n in , t_n+1 is greater than t_n+T. 
For every n in , let us consider the speed c_n defined as c_n = (x̂(t_n+1) - cT) - x̂(t_n)/(t_n+1-T) - t_n = c + ξ̂_c(t_n+1) - ξ̂_c(t_n)/(t_n+1-T) - t_n , see <ref>. Again up to replacing the sequence (t_n)_n∈ by a subsequence, it may be assumed, according to <ref>, that c_nc and, for every n in , c_0≤c_n ≤ c^* . Following the notation of <cit.>, let us introduce, for every n in , the quantities Δ_n = Ê_c(t_n) - Ê_c(t_n+1) , and Δ^1_n = Ê_c(t_n) - Ê_c_n(t_n) , and Δ^2_n = Ê_c_n(t_n) - E_c_n(ξ̂_c_n(t_n),t_n+1-T) = E_c_n(ξ̂_c_n(t_n),t_n) - E_c_n(ξ̂_c_n(t_n),t_n+1-T) , and Δ^3_n = E_c_n(ξ̂_c_n(t_n),t_n+1-T) - E_c(ξ̂_c(t_n+1),t_n+1-T) , and Δ^4_n = E_c(ξ̂_c(t_n+1),t_n+1-T) - Ê_c(t_n+1) = E_c(ξ̂_c(t_n+1),t_n+1-T) - E_c(ξ̂_c(t_n+1),t_n+1) , so that Δ_n = Δ^1_n + Δ^2_n + Δ^3_n + Δ^4_n . According to inequality <ref> of <ref>, Δ^1_n≤c-c_n . According to equality <ref> of <ref>, Δ^2_n = ∫_t_n^t_n+1-T D_c_n(ξ̂_c_n(t_n),s) ds ≥ 0 , and according to assumption <ref>, Δ^4_n = ∫_t_n+1-T^t_n+1 D_c(ξ̂_c(t_n+1),s) ds ≥ . Let us consider the quantity ζ_n defined as ζ_n = ξ̂_c_n(t_n+1-T) - ξ̂_c_n(t_n) . According to the first inequality of <ref>, ζ_n = x̂(t_n+1-T) - x̂(t_n) - c_n (t_n+1-T-t_n) = x̂(t_n+1-T) - x̂(t_n+1) + c T = ξ̂_c(t_n+1-T) - ξ̂_c(t_n+1) , and according to inequality <ref> of <ref>, ζ_n ≤ 2γ T + (γ) . According to the two expressions <ref> of ζ_n, Δ^3_n = e^c_nζ_nÊ_c_n(t_n+1-T) - e^cζ_nÊ_c(t_n+1-T) = (e^c_nζ_n - e^cζ_n) Ê_c_n(t_n+1-T) + e^cζ_n(Ê_c_n(t_n+1-T) - Ê_c(t_n+1-T) ) , so that, according to inequality <ref> of <ref> and inequality <ref>, Δ^3_n≤e^c_nζ_n - e^cζ_nÊ_c_n(t_n+1-T) + e^c(2γ T + (γ))c_n - c . Observe that, for every positive quantities z_max and c sup_z∈(-∞,z_max] z e^cz = max(1/ce,z_max e^c z_max) . Thus it follows from inequality <ref> that e^c_nζ_n - e^cζ_n ≤c_n - cζ_nmax(e^c_nζ_n,e^cζ_n) ≤c_n - cmax(1/c_0 e, (2γ T + (γ))e^c^*(2γ T + (γ))) . Since c_n goes to c as n goes to +∞, it follows from inequalities <ref> and from the expression <ref> of Δ_n that lim inf_n→+∞Δ_n ≥ . According to the definition <ref> of Δ_n, for every positive integer n, Ê_c_0(t_0) - Ê_c_n(t_n) = ∑_k=0^n-1Δ_k , so that, according to inequality <ref>, Ê_c_n(t_n) → -∞ as n→+∞ . On the other hand, for every speed c in [c_0,] and for every nonnegative time t, it follows from inequality <ref> that Ê_c(t) ≥ -V_min/ce^-c(ξ̂_c(t)-ξ_c(t)) ≥ -V_min/c , a contradiction with the limit <ref>. <Ref> is proved. In the proof above, inequality <ref> is intimately related to the choice of the “dissipation interval” (namely [t_n+1-T,t_n+1], rather than, say [t_n,t_n+T]), and is the reason for the choice of the integration interval [t-T,t] (rather than, say, [t,t+T]) in inequality <ref> of <ref>. §.§ Convergence and proof of Theorem <ref> Let (c) denote a positive quantity, small enough so that the “local steep stable manifold” <ref> holds for e equal to 0_^d and c equal to c and δ equal to (c). Let us assume in addition that (c) is smaller than or equal to the quantity (c_0) defined in <ref>. For every nonnegative time t, let us consider the set (c)(t) (see definition <ref>). For the same reasons as for the set (c)(t) (see the beginning of <ref>), this set (c)(t) is altogether nonempty and bounded from above; let x̃(t) denote its supremum and let us write (for a positive quantity c) ξ̃_c(t) = x̃(t) - ct , see <ref>. According to these definitions, x(t)≤x̂(t)≤x̃(t)<+∞ and ξ_c(t)≤ξ̂_c(t)≤ξ̃_c(t)<+∞ , and u(x̃(t),t) = (c) . 
According to <ref>, it turns out that the quantity (c) could actually be chosen equal to the quantity (c_0) introduced in <ref>; with such a choice, the invasion points x̃(t) and ξ̃_c(t) would not differ from x̂(t) and from ξ̂_c(t), respectively. However, this would not significantly simplify the remaining part of the proof; for that reason, this remaining part will be presented for a quantity (c) not necessarily equal to (c_0), in other words without calling upon the conclusions of <ref>. Let us recall the notation 0_^d(c)c introduced in <ref>, and, for every w in ∂ B_^d(0_^d,(c)), the notation ϕ_c,w introduced in <ref>. The following conclusions hold: * lim sup_t→+∞ u(x̃(t),t)· u_x(x̃(t),t)<0. * the set 0_^d(c)c is nonempty; * the following limits hold as t goes to +∞: * (u(x̃(t),t),0_^d(c)c)→ 0; * u_t(x̃(t),t)+cu_x(x̃(t),t)→0; * for every positive quantity L, sup_y∈[-L,+∞)u(x̃(t)+y,t)-ϕ_c,u(x̃(t),t)(y)→ 0 . The proof calls upon some properties of the profiles of pushed travelling waves invading a critical point, namely <ref> stated in the next <ref>. Let us proceed by contradiction and assume that at least one of the conclusions of this lemma does not hold. Then, there exists a sequence (t_n)_n∈ of times going to +∞ such that one of the following properties hold: * the quantity u_t(x̃(t_n),t_n)+cu_x(x̃(t_n),t_n) does not go to 0 as n goes to +∞; * or, either the set 0_^d(c)c is empty or, if it is nonempty, the distance (u(x̃(t_n),t_n),0_^d(c)c) does not go to 0 as n goes to +∞; * or the quantity lim sup_n→+∞ u(x̃(t_n),t_n)· u_x(x̃(t_n),t_n) is nonnegative; * or there exists a positive quantity L_0 such that the quantity sup_y∈[-L_0,+∞)u(x̃(t_n)+y,t_n)-ϕ_c,u(x̃(t_n),t_n)(y) does not go to 0 as n goes to +∞. By compactness (<ref>), up to replacing the sequence (t_n)_n∈ by a subsequence, there exists an entire solution u_∞ of system <ref> such that, with the notation of <ref>, D^2,1u(x̃(t_n)+·,t_n+·) → D^2,1 u_∞ as n→+∞ , uniformly on compact subsets of ^2. Let T denote a positive quantity, and let us consider the quantity _n defined as _n = ∫_t_n-T^t_n D_c(ξ̂_c(t_n),t) dt . According to <ref>, this quantity _n goes to 0 as n goes to +∞. Observe that _n = ∫_t_n-T^t_n e^-cξ̂_c(t_n) D_c(t) dt = ∫_t_n-T^t_n e^-cξ̂_c(t_n)(∫_ e^c(x-ct) (u_t + c u_x)^2(x,t) dx ) dt , so that, substituting t with t_n+s and x with x̃(t_n)+y, _n = ∫_-T^0 e^c(x̃(t_n)-ct_n-ξ̂_c(t_n)-cs)(∫_ e^cy (u_t + cu_x)^2(x̃(t_n)+y,t_n+s) dy ) ds . According to inequality <ref>, x̃(t_n)-ct_n-ξ̂_c(t_n) = ξ̃_c(t_n)-ξ̂_c(t_n) ≥ 0 , and the term -c^2 s in the argument of the exponential factor is nonnegative for s in [-T,0]; it follows that _n ≥∫_-T^0 (∫_ e^cy (u_t + cu_x)^2(x̃(t_n)+y,t_n+s) dy ) ds . Let L denote a positive quantity, and let us consider the integrals _n = ∫_-T^0 (∫_-L^L (u_t + cu_x)^2(x̃(t_n)+y,t_n+s) dy ) ds , and _∞ = ∫_-T^0 (∫_-L^L (∂_t u_∞(y,s) + c∂_x u_∞(y,s))^2 dy ) ds . It follows from the previous inequality that _n≥ e^-cL_n , and since _n goes to 0 as n goes to +∞, the same must therefore be true for the nonnegative quantity _n. On the other hand, it follows from the convergence <ref> that _n goes to _∞ as n goes to +∞. It follows that _∞ must be equal to 0, and since the positive quantity L was any, it follows that the function ∂_t u_∞ + c∂_x u_∞ is identically equal to 0_^d on ×[0,+∞), thus in particular on ×{0}. In view of the convergence <ref>, it follows that the property <ref> above cannot hold. Let us consider the function ϕ_∞:→^d defined as ϕ_∞(ξ) = u_∞(ξ,0) , for all ξ in . 
Then, since u_∞ is a solution of the parabolic system <ref>, it follows that ϕ_∞ is a solution of the differential systems <ref> (for c equal to c) governing the profiles of waves travelling at the speed c. In addition, since according to inequalities <ref> x̃(t) is greater than or equal to x̂(t), it follows from conclusion <ref> of <ref> that the quantity ∫_0^+∞ e^c^*y (u^2 + u_x^2)(x̃(t_n)+y,t_n) dy is bounded, uniformly with respect to n; thus, according to Fatou Lemma, it follows from the convergence <ref> that ϕ_∞ must belong to the space H^1_c^*(,^d). Thus, according to conclusion <ref> of <ref>, the following limit holds: ϕ_∞(ξ) = o(e^-1/2c^*ξ) as ξ→+∞ . This shows that ϕ_∞ is the profile of a pushed travelling wave invading 0_^d (<ref>). And, since according to equality <ref> ϕ_∞(0) is equal to (c), it follows that ϕ_∞ = ϕ_c,ϕ_∞(0) = ϕ_c,u_∞(0,0) (notation <ref>). Finally, since according to the bound <ref> ϕ_∞(ξ) is bounded, uniformly with respect to ξ, it follows from conclusion <ref> of <ref> that ϕ_∞ must be the profile of a pushed front invading 0_^d at the speed c. And since according to equality <ref> the quantity ϕ_∞(0) is equal to (c), the vector ϕ_∞(0) must belong to the set 0_^d(c)c, which is therefore nonempty. In view of the convergence <ref>, this shows that the property <ref> above cannot hold. In addition, since according to conclusion <ref> of <ref> (applied with δ equal to (c_0) and c equal to c) the scalar product ϕ_∞(0)·ϕ_∞'(0) is negative, it follows from the convergence <ref> that the property <ref> above cannot hold either. It remains to derive a contradiction from property <ref>. Observe that, since u(x̃(t_n),t_n) goes to u_∞(0,0) as n goes to +∞, according to the continuity of the solutions of the differential systems <ref> with respect to initial conditions, ϕ_c,u(x̃(t_n),t_n)(·)→ϕ_c,u_∞(0,0) (·) = ϕ_∞(·) = u_∞(·,0) , uniformly on every compact subset of . Thus, it follows from the convergence <ref> that u(x̃(t_n)+·,t_n) - ϕ_c,u(x̃(t_n),t_n)(·) → 0_^d , uniformly on every compact subset of . Therefore, it follows from property <ref> that there must exist a sequence (y_n)_n∈, going to +∞ as n goes to +∞, such that lim inf_n→+∞u(x̃(t_n)+y_n,t_n) - ϕ_c,u(x̃(t_n),t_n)(y_n) >0 . According to the uniform convergence stated in <ref> (for the same parameters and δ and c as the ones chosen above to apply <ref>), ϕ_c,u(x̃(t_n),t_n)(y_n)→ 0_^d as n→+∞ ; thus, it follows from inequality <ref> that lim inf_n→+∞u(x̃(t_n)+y_n,t_n) >0 , a contradiction with the uniform bound on the quantity <ref>. <Ref> is proved. Let us consider the function f:×[0,+∞)→ defined as: f(x,t) = 1/2 u(x,t)^2. Then, in view of the property <ref>, for every (x,t) in ×[0,+∞), f(x̃(t),t) = 1/2(c)^2 and ∂_x f(x,t) = u_x(x,t)· u(x,t) , and ∂_t f(x,t) = u_t(x,t)· u(x,t) . It thus follows from conclusion <ref> of <ref> and from the Implicit Function Theorem that, for t large enough positive, the function t↦x̃(t) is of class C^1 and satisfies:  x̃'(t) = - u_t(x̃(t),t)· u(x̃(t),t)/u_x(x̃(t),t)· u(x̃(t),t) = c - (u_t(x̃(t),t) + cu_x(x̃(t),t))· u(x̃(t),t)/u_x(x̃(t),t)· u(x̃(t),t) , and it follows from this expression, from conclusions <ref> of <ref>, and from the bound <ref> on u(x,t) that x̃'(t)→c as t→+∞ . In view of the property <ref> and of the conclusions of <ref>, all the conclusions of <ref> are proved. §.§ Proof of Theorem <ref> As in the previous subsec:proof_cor_main, let us assume that the critical point e is equal to 0_^d. Let us assume that condition <ref> of <ref> holds. 
Since e is assumed to be equal to 0_^d, this means that there exists a quantity c_0, greater than , and a function w in H^1_c_0(,^d) such that the energy _c_0[w] is negative. Let χ:→ denote a smooth cutoff function satisfying the conditions <ref>, let denote a (large) positive quantity to be chosen below, and let us consider the function w̃ defined as in <ref>: w̃(x) = χ(x-) w(x) . Let us consider the solution (x,t)↦ u(x,t) of the parabolic system <ref> for the initial condition u(·,0) = w̃(·). According to the definition of w̃, the quantity [u] (defined in <ref>) is equal to +∞. In addition, since w is in H^1_c_0(,^d), the quantity _c_0[w̃] goes to _c_0[w] as goes to +∞; thus, if is large enough positive, the quantity _c_0[w̃] is (also) negative, so that, in this case the variational speed [u] (also defined in <ref>) is greater than c_0. It follows that < [u] < [u] , or in other words that the condition <ref> of <ref> holds. According to the conclusion of this theorem, the solution u invades the critical point 0_^d through the profiles of pushed fronts, which ensures the existence of (at least) one pushed front invading 0_^d at the speed [u] (see <ref>); in other words, conclusion <ref> of <ref> holds. Conversely, if conclusion <ref> of <ref> holds (with e equal to 0_^d), then there exists a pushed front invading 0_^d at a speed c greater than ; let us denote by ϕ the profile of this pushed front. According to equality <ref> of <ref>, for every speed c' in the interval (,c), the energy _c'[ϕ] is negative. Since ϕ belongs to H^1_c(,^d), it also belongs to H^1_c'(,^d), thus c' belongs to the set _-∞; this ensures that the quantity is greater than or equal to c', and thus greater than ; in other words, condition <ref> of <ref> holds. Let us proceed by contradiction and assume that no pushed front invades 0_^d at the speed . The proof (written above) that condition <ref> (of <ref>) implies condition <ref> that there exists a pushed front invading e at a speed arbitrarily close to in the interval (,). In other words, there exists an increasing sequence (c_n)_n∈ of speeds in the interval (,), going to as n goes to +∞, such that, for every nonnegative integer n, there exists a pushed front invading e at the speed c_n. Let ϕ_n denote the profile of this pushed front, normalized (with respect to space translations) by the condition ϕ_n(0) = (c_0). Up to replacing the sequence (ϕ_n)_n∈ by a subsequence, it may be assumed that the vectors ϕ_n(0) converge, as n goes to +∞, towards a vector u_∞ of ∂ B_^d(0_^d,(c_0)). As shown by <ref>, the map e(c_0) defining the stable manifold of (e,0_^d) for the differential system <ref> governing the profiles of fronts travelling at the speed is defined on the closed ball B_^d(e,(c_0)). Let ϕ_∞ denote the solution of the corresponding second order differential <ref> (still for the speed ) for the initial condition:  (ϕ_∞(0),ϕ_∞'(0)) = (u_∞,e(c_0)(u_∞)) . According to this definition, ϕ_∞ is the profile of a pushed travelling wave invading e at the speed (<ref>). In addition, since according to conclusion <ref> of <ref> the quantities sup_ξ∈ϕ_n(ξ) are bounded from above by a quantity depending only on V, the solution ϕ_∞ is globally defined, and sup_ξ∈ϕ_∞(ξ) is bounded from above by the same quantity. In other words, ϕ_∞ is the profile of a pushed travelling front (and not only a pushed travelling wave). 
For a generic potential V, the set of profiles of pushed travelling fronts and the set of speeds of pushed travelling fronts are discrete, <cit.>, so that in this case, the speeds c_n introduced in the proof above must be equal to for n large enough, and the last compactness argument is unnecessary. Let ϕ denote the profile of a pushed front invading e at the speed . It follows from equality <ref> of <ref> that its energy _[ϕ] vanishes. On the other hand, it follows from the definition of (<ref>) and from the fact that _0 is closed (conclusion <ref> of <ref>) that belongs to _0; according to the definition <ref> of _0, ϕ is therefore a global minimizer of the energy _[·] in H^1_(,^d), which is the intended conclusion. §.§ Proof of Corollary <ref> According to conclusions <ref> of <ref> and to the definition <ref> of the quantity , (0,) ⊂_-∞⊂ (0,) , and these inclusions show that, if is equal to (that is, not larger than) , then conclusion <ref> of <ref> holds. Let us assume that the converse holds, or in other words let us assume that is larger than . In this case, conclusion <ref> of <ref> states that there exists a pushed front invading e at the speed . Let us denote by ϕ the profile of this pushed front. According to equality <ref> of <ref> (see <ref>), for every c in (0,), the energy _c[ϕ] is negative; this shows that the whole interval (0,) is included in _-∞, and in view of the inclusions <ref>, conclusion <ref> of <ref> again holds. Still in this case where is larger than , it follows from equality <ref> of <ref> that the energy _[ϕ] vanishes. This shows that cannot be equal to , or else conclusion <ref> of <ref> would ensure that the same energy is positive, a contradiction. In view of the second inequality of <ref>, conclusion <ref> of <ref> is proved. According to conclusion <ref> of <ref>, for c equals there exists at least one global minimizer of _c[·] in H^1_c(,^d) which is not identically equal to 0_^d. According to the definition of _-∞, such a global minimizer does not exist for c in (-∞,). To complete the proof, let us proceed by contradiction and assume that a global minimizer of _c[·] in H^1_c(,^d), not identically equal to 0_^d, exists for some c in (,+∞). It follows from <ref> that this minimizer must be the profile of a travelling wave invading e at the speed c (and, since this profile is in H^1_c(,^d), a pushed travelling wave). In addition, it follows from the coercivity assumption <ref> that this profile must be bounded, it is therefore the profile of a pushed travelling front; and again according to equality <ref> of <ref>, it follows that the whole interval (0,c) must be in _-∞, a contradiction with the definition of . <Ref> is proved. § SOME PROPERTIES OF THE PROFILES OF PUSHED TRAVELLING WAVES INVADING A CRITICAL POINT As in the previous sections, let us consider a potential V in ^2(^d,) and a critical point e of V, and let us assume that assumptions <ref> and <ref> (stated in <ref>) hold (in this sec:properties_profiles_pushed_trav_waves it will not be assumed that e equals 0_^d). As in the notation <ref> let us denote by μ_1 the least eigenvalue of D^2V(e), and as in <ref> let us denote by the maximal linear invasion speed of e. Let us recall the notation _c[·] introduced in <ref>, the notation H^1_c(,^d) introduced in <ref>, and the notation and introduced in <ref>. 
As in <ref>, let us consider a positive quantity c_0 and a negative quantity μ_0 related by c_0 = 2√(-μ_0)μ_0 = -c_0^2/4 , and let us assume that < c_0 < , or equivalently < μ_0 < μ_1 , see fig:correspondence_mu_c. According to this assumption the quantity (c_0) can be defined exactly as in <ref>, so that, for every w in B_^d(e,(c_0)), σ(D^2V(w))⊂ [μ_0,+∞) , see <ref>. Finally, let us consider a speed c greater than c_0, and let us consider the differential systems <ref> governing the profiles ξ↦ϕ(ξ) of waves travelling at the speed c for the parabolic system <ref>: ϕ” = -cϕ' + ∇ V(ϕ) , or equivalently [ ϕ'; ψ' ] = [ ψ; - c ψ + ∇ V(ϕ) ] . §.§ Asymptotics at the two ends of space Let ξ↦ϕ(ξ) denote a solution of the differential system <ref>, defined on a maximal interval (ξ_-,+∞) (for same quantity ξ_- in {-∞}∪) and satisfying the following properties: ϕ(ξ)e and ϕ≢e . The following statements hold. * If ϕ(·) is bounded on (ξ_-,+∞), then ξ_- equals -∞, the quantity sup_ξ∈ϕ(ξ) is bounded from above by a quantity depending only on V, and there exists a negative quantity V_-∞ such that the following limits hold as ξ goes to -∞: ϕ'(ξ)→ 0 and (ϕ(ξ),(V)∩ V^-1({V_-∞})) → 0 . * If ϕ is the profile of a pushed travelling wave (<ref>), then there exists a unique quantity ξ̂ in (ξ_-,+∞) such that ϕ(ξ̂)-e = (c_0) , and such that, for every ξ in (ξ̂,+∞), ϕ(ξ)-e < (c_0) and (ϕ(ξ)-e)·ϕ'(ξ)<0 . The proof of statement <ref> is identical to the proof of the last statement of <cit.> (see also the proof of <cit.>). Let us prove statement <ref>. For ξ in (ξ_-,+∞), let us consider the quantity q(ξ) defined as q(ξ) = 1/2(ϕ(ξ)-e)^2 . Thus, for every ξ in (ξ_-,+∞), q'(ξ) = (ϕ(ξ)-e)·ϕ'(ξ) and q”(ξ) +c q'(ξ) = ϕ'(ξ)^2 + (ϕ(ξ)-e)·∇ V(ϕξ) . In accordance with the notation introduced in <ref>, let us consider the set (c_0) = {ξ∈(ξ_-,+∞): ϕ(ξ)-e> (c_0)} . If this set is empty, then ϕ is bounded, and as a consequence ξ_- is equal to -∞. If it is nonempty, let us consider the quantity ξ̂ = sup((c_0)) , and let us consider the interval (c_0) defined as (c_0) = { if (c_0) is empty, [ξ̂,+∞) if (c_0) is nonempty.. It follows from the inclusion <ref> that, for every ξ in (c_0), q”(ξ) +c q'(ξ) ≥ϕ'(ξ)^2 + μ_0 (ϕ(ξ)-e)^2 , or, equivalently, d/dξ(e^cξq'(ξ)) ≥ e^cξ( ϕ'(ξ)^2 - μ_0(ϕ(ξ)-e)^2) . Since ϕ is assumed to be the profile of a pushed travelling wave (<ref>), it follows that, for every ξ_0 in (c_0), - e^cξ_0 q'(ξ_0) ≥∫_ξ_0^+∞ e^cξ( ϕ'(ξ)^2 - μ_0(ϕ(ξ)-e)^2) dξ , and it follows to Poincaré inequality <ref> applied to the function ϕ(·)-e that - e^cξ_0 q'(ξ_0) ≥ 2( c^2/4 - μ_0) ∫_ξ_0^+∞ e^cξ q(ξ) dξ , so that q'(ξ_0) ≤ - 2( c^2/4 - μ_0) ∫_0^+∞ e^cζ q(ξ_0+ζ) dζ , and since c is assumed to be greater than the quantity c_0 introduced in <ref>, it follows that q'(ξ_0) is negative, so that q(·) is strictly decreasing on (c_0). If in addition the set (c_0) is empty, then q(ξ) must go to some finite positive limit q_-∞ as ξ goes to -∞, and it would follow from the previous inequality that lim sup_ξ→-∞ q'(ξ)≤ - 2( c^2/4 - μ_0) q_-∞/c < 0 , a contradiction. Thus the set (c_0) is nonempty, equality <ref> follows from the definition of ξ̂, and inequalities <ref> from the fact that q(·) is strictly decreasing on [ξ̂,+∞). Statement <ref> is proved. §.§ Uniform convergence at the right end of space Let us keep the notation and assumptions introduced at the beginning of <ref>, and let us consider a positive quantity δ, smaller than or equal to (c_0), such that the conclusions of <ref> (local steep stable manifold) hold. 
The following lemma calls upon the notation ϕ_c,u(·) introduced in <ref>. The convergence ϕ_c,u(ξ) → e as ξ→+∞ is uniform with respect to u in ∂ B_^d(e,δ). For every u in ∂ B_^d(e,δ), the limit <ref> follows from the definition <ref> of ϕ_c,u (the only thing to prove is that this convergence is uniform). As a consequence, for every positive quantity ε, there exists a positive time ξ(u,ε) such that ϕ_c,u(ξ(u,ε)) is smaller than ε/2. Thus, by continuity of the solutions of system <ref> with respect to initial conditions, there exists an open neighbourhood ν(u,ε) of u in ∂ B_^d(e,δ) such that, for every u' in ν(u,ε), ϕ_c,u'(ξ(u,ε))-e < ε . Since this set ∂ B_^d(e,δ) is compact, there exist a finite set {u_1,…,u_n} of points of this set such that ∂ B_^d(e,δ) ⊂⋃_i=1^n ν(u_i,ε) . According to statement <ref> of <ref> above, for every u in ∂ B_^d(e,δ), the function ξ↦ϕ_c,u(ξ)-e is decreasing on [0,+∞). It follows that, for every time ξ greater than max(ξ(u_1,ε),…,ξ(u_n,ε)), and for every u in ∂ B_^d(e,δ), ϕ_c,u(ξ)-e < ε , which is the intended conclusion. §.§ Extension of the local steep stable manifold until the radius deltaHess(c0) Following the notation of <ref>, let us consider the set e(c_0)c defined as e(c_0)c = {(ϕ_0,ψ_0)∈^d×^d:  the solution ξ↦ϕ(ξ) of the differential system <ref> with initial condition (ϕ(0),ϕ'(0)) = (ϕ_0,ψ_0) satisfies: ϕ(ξ)-e≤(c_0) for all ξ in [0,+∞) and ϕ(ξ)-e = o_ξ→+∞(e^-1/2cξ)} . The following proposition is the analogue <cit.>. The proof is similar, however, for sake of completeness and since the context and the notation significantly differ, a comprehensive proof is provided below. Concerning the proof of the main result provided in <ref>, this <ref> shows that the quantity (c) introduced in <ref> could actually be chosen equal to the quantity (c_0) introduced in <ref> (allowing a slightly simpler presentation without any significant benefit), see the remark following the notation <ref>. The set e(c_0)c is the graph of a ^1-map: B_^d(e,(c_0))→^d. In other words, for c greater than c_0, the conclusions of <ref> (defining the local steep stable manifold of e) hold for a parameter δ equal to (c_0). The proof of this <ref> will follow from the next two lemmas. Let us consider the projectors π_1: ^d×^d→^d , (u,v)↦ u and π_2: ^d×^d→^d , (u,v)↦ v , and the map : e(c_0)c→B_^d(e,(c_0)) defined as the restriction of π_1 to the departure set e(c_0)c and the arrival set B_^d(e,(c_0)). The map is surjective. For every u in ∂ B_^d(e,δ), it follows from statement <ref> of <ref> that there exists a negative quantity ξ̂_u such that the function ξ↦ϕ_c,u(ξ)-e defines a a one-to-one correspondence between the interval [ξ̂_u,+∞) and the interval (0,(c_0)], see <ref>. Let (0,(c_0)]→ [ξ̂_u,+∞) , r↦ξ_u(r) denote the inverse correspondence. Then, for every r in (0,(c_0)], (ϕ_c,u(ξ_u(r)),ϕ_c,u'(ξ_u(r))) ∈e(c_0)c , so that ϕ_c,u(ξ_u(r)) ∈(e(c_0)c) . Let us consider the one-parameter family (h_r)_r∈[δ,(c_0)] of maps from 𝕊^d-1 to 𝕊^d-1 defined as h_r(v) = 1/r(ϕ_c,δ v(ξ_δ v(r))-e) . For every v in 𝕊^d-1, the quantity ξ_δ v(δ) is equal to 0, so that ϕ_c,δ v(ξ_δ v(r)) is equal to δ v, and as a consequence h_δ(v) is equal to v. Thus h_δ is the identity of 𝕊^d-1, so that, for every r in [δ,(c_0)], h_r is isotopic to the identity of 𝕊^d-1, thus surjective (for topological reasons there is no retraction of 𝕊^d-1 to a point, and the property “h_r non surjective” would lead to the existence of such a retraction). 
This shows that, for every r in [δ,(c_0)], the set ∂ B_^d(e,r) belongs to the image of , which is is therefore surjective. Let us denote by ec the (global) steep stable manifold of the equilibrium (e,0_^d) for the differential system <ref>. This set is a d-dimensional ^1-submanifold of ^2d, containing e(c_0)c. For every (ϕ_0,ψ_0) in e(c_0)c, the intersection between the tangent space T_(ϕ_0,ψ_0)ec and {0_^d}×^d is transverse in ^2d. Take (ϕ_0,ψ_0) in e(c_0)c. If (ϕ_0,ψ_0) equals (e,0_^d), then the conclusion follows from the expression <ref> of the eigenvectors of the linearized differential systems <ref>. Let us assume that (ϕ_0,ψ_0) differs from (e,0_^d), let us take a vector (ϕ̃_0,ψ̃_0) of ^d×^d, and let us consider the solution ξ↦ϕ(ξ) of the differential systems <ref> for the initial condition (ϕ(0),ϕ'(0)) equals (ϕ_0,ψ_0), and the solution ξ↦ϕ̃(ξ) of the differential system ϕ̃” = -c ϕ̃ + D^2 V(ϕ) ·ϕ̃ , for the initial condition (ϕ̃(0),ϕ̃'(0))=(ϕ̃_0,ψ̃_0) . For every ξ in [0,+∞), let us write q̃(ξ) = 1/2ϕ̃(ξ)^2 . Thus, for every ξ in [0,+∞), q̃'(ξ) = ϕ̃(ξ) ·ϕ̃'(ξ) and q̃”(ξ) + c q̃'(ξ) = ϕ̃'(ξ)^2 + D^2V(ϕ(ξ))·ϕ̃(ξ)·ϕ̃(ξ) , and it follows from the inclusion <ref> that q̃”(ξ) + c q̃'(ξ) ≥ϕ̃'(ξ)^2 + μ_0 ϕ̃(ξ)^2  , or equivalently d/dξ(e^cξq̃'(ξ)) ≥ e^cξ(ϕ̃'(ξ)^2 - μ_0ϕ̃(ξ)^2) . The vector (ϕ̃_0,ψ̃_0) belongs to the tangent space T_(ϕ_0,ψ_0)ec if and only if ϕ̃(ξ) = o(e^-1/2cξ) as ξ→+∞ . If this equality <ref> holds, then it follows that, q̃'(0) ≥∫_0^+∞ e^cξ( ϕ̃'(ξ)^2 - μ_0ϕ̃^2) dξ , and it follows from Poincaré inequality <ref> applied to the function ϕ̃(·)-e that q̃'(0) ≥ 2( c^2/4 - μ_0) ∫_0^+∞ e^cξq̃(ξ) dξ , and since c is assumed to be greater than the quantity c_0 introduced in <ref>, it follows that q̃'(0) is negative, so that ψ̃(0) is nonzero, which is the intended conclusion. According to <ref>, the map defines a covering of B_^d(e,(c_0)) by e(c_0)c, and since e(c_0)c is connected and B_^d(e,(c_0)) is simply connected, this covering must be a one-to-one correspondence. Let us denote by ^-1 the inverse correspondence. Then, with the notation π_2 introduced in <ref>, the local steep stable manifold e(c_0)c is the graph of the ^1-map π_2∘e(c_0)c: B_^d(e,(c_0)) →^d , which is the intended conclusion. § AN ADDITIONAL UPPER BOUND ON THE SPEEDS OF PUSHED FRONTS Let us keep the notation and assumptions of the beginning of <ref> (until the conditions <ref>, including these conditions). As in <ref>, let (c_0) denote the maximal radius of stability of e for pushed invasion at the speed c_0, and let us consider the quantity (c_0) defined as (c_0) = 2√()/(c_0) . Both quantities (c_0) and (c_0) can be viewed as functions of the parameter c_0, defined on the interval (,). On this interval, these functions are positive, bounded, and monotone (the function (·) is strictly increasing and the function (·) is strictly decreasing), but not necessarily continuous, see <ref>. The following proposition provides an additional constraint on the speed of a pushed front invading e. Let c denote the speed of a pushed front invading e. Then, c_0 ≤ c c ≤(c_0) . Let c denote the speed of a pushed front invading e, let v denote the profile of this front, and let us assume that c is greater than or equal to c_0. According to equality <ref> of <ref>, the quantity _c[v] vanishes. As a consequence, it follows from inequality <ref> of statement <ref> of <ref> that inequality <ref> cannot hold for every ξ in , so that the set {ξ∈:v(ξ)-e>(c_0)} is nonempty. 
On the other hand, since v is in H^1_c(,^d), this set is bounded from above. Let us denote by ξ the supremum of this set. Then, v(ξ)-e is equal to (c_0) and inequality <ref> (with e instead of 0_^d) holds for every ξ in [ξ,+∞), so that, according to inequality <ref> of statement <ref> of <ref>, _c[v] ≥ e^cξ(-1/c+1/2λ_c,-(μ_0)(c_0)^2) . Since _c[v] vanishes and since λ_c,-(μ_0) is greater than or equal to c/2, it follows that c≤2√()/(c_0) = (c_0) , which is the intended conclusion. It follows from this <ref> that the condition c_0 ≤(c_0) , or equivalently c_0 ≤2√()/(c_0)≤μ_0(c_0)^2 (see <ref>) is mandatory in order pushed fronts invading e at some speed c greater than or equal to c_0 to exist. In particular, if the condition < lim_c_0→^+(c_0) (see <ref>) is not satisfied, then there exists no pushed front invading e at a speed c greater than . Let us therefore assume that this condition <ref> is fulfilled, and let us consider the quantity defined as the supremum of the (nonempty) set { c_0∈(,): c_0 ≤(c_0) } . The following corollary is an immediate consequence of <ref>. The speed of a pushed front invading e cannot be greater than the quantity . Let us consider the continuous extensions of the functions (·) and (·) to the interval (,] defined by () = lim_c_0→^-(c_0) , and () = lim_c_0→^-(c_0) = 2√()/() , see <ref>. Observe that, if the following condition holds: () < , see <ref>, or equivalently: 2√()/() < > ()^2 , see <ref>, then it follows that < . In this case, the upper bound on the speeds of pushed fronts provided by <ref> is better than the one provided by conclusion <ref> of <ref>. § PULLED AND PUSHED TRAVELLING FRONTS IN FISHER'S MODEL In the scalar case (d equals 1), the speed of a pushed front invading e is necessarily greater than (see the expression <ref> of the eigenvalues of the linearized system <ref>), and it follows from conclusion <ref> of <ref> that, if is not less than μ_1, then there is no pushed travelling front invading e. This result is well known and goes back (at least) to <cit.>; in the same reference, Hadeler and Rothe consider the following reformulation of Fisher's (scalar) model <cit.>: u_t = f(u) + u_xx , f(u) = u(1-u)(1+u/ν) = u + (1/ν-1) u^2 - 1/ν u^3 , where ν is a positive parameter. The reaction term f(u) derives from (that is, is equal to minus the derivative of) the potential V defined as V(u) = -1/2u^2 + 1/3(1-1/ν)u^3 + 1/4ν u^4 , see <ref>. This potential satisfies the coercivity assumption <ref> and has three critical points: a local maximum point at u=0 and two local minimum points at u=-ν and u=1. Thus, the role of the critical point e considered insofar is played by 0, and the quantity μ_1 is equal to V”(0), that is to -1. For every positive quantity ν, there exist exactly two (up to translation) travelling fronts with monotone profiles invading 0 and which are either pulled or pushed: one to the right of 0 (with 1 as the invading equilibrium) and one to the left of 0 (with -ν as the invading equilibrium). When ν is equal to 1 the potential V is even and in this case the quantity called upon as is also equal to -1, so that no pushed front exists (see conclusion <ref> of <ref> and comment above), and both fronts are pulled. As shown in <cit.>, for ν between 0 and 1, the front to the left of 0 is still pulled, and the front to the right of 0 is: pulled if 1/2≤ν≤1 (and more precisely, pulled “variational” if ν equals 1/2, and pulled “non-variational” if 1/2<ν≤1, <cit.>), and pushed if 0<ν < 1/2; see <ref>. 
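Since this example is used to illustrate the general results above, it may help to record the elementary computation behind the claim that the reaction term f derives from the potential V; the following display is a verification added here for the reader's convenience, not part of the original text.
\[
V'(u) = -u + \Bigl(1-\tfrac{1}{\nu}\Bigr)u^{2} + \tfrac{1}{\nu}u^{3},
\qquad
-f(u) = -u\Bigl(1 + \tfrac{u}{\nu} - u - \tfrac{u^{2}}{\nu}\Bigr)
      = -u + \Bigl(1-\tfrac{1}{\nu}\Bigr)u^{2} + \tfrac{1}{\nu}u^{3} = V'(u),
\]
\[
V''(u) = -1 + 2\Bigl(1-\tfrac{1}{\nu}\Bigr)u + \tfrac{3}{\nu}u^{2},
\qquad
V''(0) = -1 = \mu_{1},
\qquad
V'(u) = 0 \iff u \in \{-\nu,\,0,\,1\}.
\]
In particular, the three critical points -ν, 0 and 1 of V and the value μ_1 = V''(0) = -1 quoted above are recovered directly, for every positive ν.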
For a similar discussion on the subcritical quintic Ginzburg–Landau equation u_t = -μ_1 u + u^3 - u^5 + u_xx, see <cit.>. §.§.§ Acknowledgements The authors are indebted to Thierry Gallay and Romain Joly for their interest and support through numerous fruitful discussions.
http://arxiv.org/abs/2306.03002v1
20230605161119
Unveiling the Two-Faced Truth: Disentangling Morphed Identities for Face Morphing Detection
[ "Eduarda Caldeira", "Pedro C. Neto", "Tiago Gonçalves", "Naser Damer", "Ana F. Sequeira", "Jaime S. Cardoso" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Morphing attacks keep threatening biometric systems, especially face recognition systems. Over time they have become simpler to perform and more realistic, and, as such, the usage of deep learning systems to detect these attacks has grown. At the same time, there is a constant concern regarding the lack of interpretability of deep learning models. Balancing performance and interpretability has been a difficult task for scientists. However, by leveraging domain information and providing some constraints, we have been able to develop IDistill, an interpretable method with state-of-the-art performance that provides information on both the identity separation on morph samples and their contribution to the final prediction. The domain information is learnt by an autoencoder and distilled to a classifier system in order to teach it to separate identity information. When compared to other methods in the literature, it outperforms them in three out of five databases and is competitive in the remaining. auto-encoder, biometrics, explainability, face recognition, knowledge distillation, morphing attack detection, synthetic data. Unveiling the Two-Faced Truth: Disentangling Morphed Identities for Face Morphing Detection Eduarda Caldeira*, Pedro C. Neto*, Tiago Gonçalves*, Naser Damer, Ana F. Sequeira, Jaime S. Cardoso * These authors contributed equally. Eduarda Caldeira, Pedro C. Neto, Tiago Gonçalves, Ana F. Sequeira and Jaime Cardoso are with INESC TEC and University of Porto. Naser Damer is with the Fraunhofer Institute for Computer Graphics Research IGD and the TU Darmstadt. July 31, 2023 § INTRODUCTION Face recognition (FR) systems have had large-scale adoption in the most diverse scenarios <cit.>. Deep learning (DL) techniques have taken this and other biometric recognition systems towards above-human performance. While also benefiting biometric systems adoption, DL methods led to two problems. First, the approaches that improve the recognition power of these systems are the same ones used to design novel and dangerous attacks <cit.>. Some attacks can take the form of adversarial noise addition or be developed with FR systems in mind. The latter comprises both morphing <cit.> and presentation attacks <cit.>. Besides the attacks, deep learning methods are notorious for their black-box behaviour, which compromises the understanding of both the inner workings of the model and the reasoning behind a decision. Furthermore, FR and face attack detection systems are consistently designed using problem-agnostic tools, which do not leverage domain knowledge. For a wider adoption and to be able to deploy these systems in critical scenarios, it is necessary to guarantee that their reasoning process is, at least to some extent, explained. One can explain a decision using a post-hoc approach <cit.>, or by directly interfering with the training behaviour of the model, as stated by Neto et al. <cit.>.
Face morphing attacks, which merge two images from different identities into a single image capable of misdirecting the recognition system, have progressed significantly. In other words, this attack aims to increase the number of false positives of the FR system, granting access to two distinct users. When left undetected, the fusion of two images might allow two different people to pass border control with the same passport, for example. Due to this threat, researchers focused their attention on the development of robust morphing attack detection (MAD) systems <cit.>. Usually, they are designed to detect if the input image is an attack or a bonafide sample and do not include any information regarding the fused identities in their training. OrthoMAD <cit.> aims to learn this information in an unsupervised manner by separating identity information into two orthogonal latent vectors. However, it lacks guarantees regarding the relation between the disentangled information and identity information. The recent blossoming of synthetic data generation methods, such as generative adversarial networks (GAN) and diffusion models, led to the creation of synthetic datasets with a diverse number of identities represented <cit.>. Although these identities are usually represented only once, it suffices to increase identity diversity. The work presented in this document builds on top of OrthoMAD premises that it is possible to disentangle information regarding different identities. The first addition is an auto-encoder model trained on the bonafide samples to minimize the reconstruction error. The latent vector produced by the encoder is considered to be the prior of the identity information that should be present in the disentangled vectors. We further relax the orthogonality constraint to ensure that the angle between the two identity vectors, in the case of an attack, approximates the angle of the priors of their identities. To achieve this, we leverage the latent vectors of the auto-encoder for both images (before being morphed) and a knowledge distillation strategy. Finally, to further approximate the latent space of both identity vectors we replace the concatenation and classification process with a shared linear layer to be used on both vectors separately. The two predicted scores are fused afterwards. The main contributions of this work are the following: 1) an unexplored knowledge distillation approach based on the angle of two vectors that represent identity priors; 2) the improvement on the usage of the diverse identity set to regularize the latent spaces and the identity disentanglement; 3) a novel method designed specifically for this domain and with increased transparency regarding its inner workings; 4) an empirical validation and comparison with similar (state-of-the-art) approaches. This document is divided into five main sections. Besides this introductory section, the following sections include a description of the methodology, an introduction to the databases used for training and evaluation, the experimental setup designed for the experiments, the discussion of the results, and finally the conclusion. The code related to this paper is publicly available in a GitHub repository[<https://github.com/NetoPedro/IDistill>]. § METHODOLOGY Morphing attacks occur when two distinct identities are fused together, resulting in an image that can trick a face recognition system by containing enough information about both identities. 
To analyse whether information from two distinct identities is present in an image, we designed a regularisation term based in knowledge distillation (KD). As such, we call IDistill to our proposed method. The overall scheme and architecture of our proposed model is represented on Figure <ref>. We start by training an autoencoder to reconstruct bonafide images. This autoencoder is responsible for creating a minimalist representation of a face I. Alternatively to the usage of the autoencoder, we could have leveraged a pretrained face recognition system. The decision to follow with the autoencoder yields three reasons: 1) Fang et al. <cit.> has shown a difference in the reconstruction performance from abnormal and normal face images; 2) Besides being large (512-d), the latent vector face recognition systems might not contain all the information necessary for the reconstruction. 3) Encoder-Decoder have been explored for face de-morphing <cit.>. The proposed autoencoder is based on the U-Net architecture <cit.> and the size of the latent representation of the image was chosen as 128. As in other reconstruction tasks the autoencoder receives the image I, creates a latent representation u of that image using the encoder, and reconstruct it as Ĩ using the decoder network. This approach uses a mean squared error (MSE) loss function (see Eq. <ref>). L_auto = ∑_i,j (I_ij - Ĩ_ij)^2 The architecture of the morphing classifier is based on a ResNet-18 <cit.> where the last fully-connected layer is replaced with two fully-connected layers that output two vectors v of size 128 each. Afterwards, a fully-connected layer is used to infer if the vectors contain information of an identity or not, with each vector producing a score (id). The same layer is used for both vectors individually. id = 1/1+e^-W^Tv: Considering that the produced score holds information regarding the presence of encoded identity information on a vector v, the final prediction for an image I is designed as follows. Given I, the backbone architecture produces v_1 and v_2, which will result in the identity probabilities id_1 and id_2, respectively. The probability of I containing information of two distinct identities is given by id_1*id_2. Consequently, the bonafide presentation score, ỹ is given by: ỹ = 1-id_1*id_2 For the classification task, we have introduced the Binary Cross-Entropy, L_BCE (Equation <ref>) at the level of the final fused prediction ỹ. L_BCE = -(y log(ỹ)+(1-y) log(1-ỹ)) To ensure that the latent vectors v_1 and v_2 extract identity information and are aligned with the information learnt by the autoencoder, we introduce a knowledge distillation term. For attacks, this term aims to extract vectors from the morphed image that have an angle between them equal to the angle produced by the latent vectors u_1 and u_2 extracted with the encoder from the two images that originated the morphed image (Eq. <ref>). We are then promoting a proximity between the autoencoder latent space and the morphing classifier latent space while handling attacks. Furthermore, we only consider the angle formed by these vectors, since their identity intensity might be diminished in the morphing process. For bonafide, we expect one vector to hold identity information, while the other does not. As such, we designed a term that first selects the vector v with the highest cosine similarity (S_cos) to u. 
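Before completing the description of this distillation term, the prediction pipeline described so far can be collected in a short sketch. This is only a hedged illustration: the module and variable names and the label convention are assumptions made for readability rather than the authors' released code, and the 512-dimensional backbone feature size is simply the standard ResNet-18 value.

import torch
import torch.nn as nn

class IDistillHead(nn.Module):
    """Scoring head sketch: two 128-d identity vectors, one shared linear scorer."""
    def __init__(self, feat_dim: int = 512, id_dim: int = 128):
        super().__init__()
        # Two fully-connected layers replacing the last ResNet-18 layer,
        # each producing one 128-d latent identity vector.
        self.to_v1 = nn.Linear(feat_dim, id_dim)
        self.to_v2 = nn.Linear(feat_dim, id_dim)
        # A single linear layer, shared by both vectors, produces the id scores
        # (no bias, following the sigmoid-of-W^T v form of the equation above).
        self.id_score = nn.Linear(id_dim, 1, bias=False)

    def forward(self, features: torch.Tensor):
        v1 = self.to_v1(features)
        v2 = self.to_v2(features)
        id1 = torch.sigmoid(self.id_score(v1)).squeeze(-1)
        id2 = torch.sigmoid(self.id_score(v2)).squeeze(-1)
        y_bonafide = 1.0 - id1 * id2  # bonafide score: low only if both vectors look like identities
        return v1, v2, id1, id2, y_bonafide

def bce_loss(y_bonafide: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # L_BCE on the fused bonafide prediction; y = 1 for bonafide, 0 for attacks
    # (an assumed label convention, consistent with the equations in the text).
    eps = 1e-7
    p = y_bonafide.clamp(eps, 1.0 - eps)
    return -(y * torch.log(p) + (1.0 - y) * torch.log(1.0 - p)).mean()

The distillation term itself, built on the cosine-similarity selection of the most identity-like vector just described, is specified in the equations that follow.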
With this choice, the proposed term is able to maximize the similarity between u and the selected vector v, while approximating the id of this vector to 1, and the other id to 0 (Eq. <ref>). Ver_term = S_cos(v_1, u) > S_cos(v_2, u) L_KD_1 = (1-id_1)^2 + (id_2)^2 - S_cos(v_1, u) if Ver_term (1-id_2)^2 + (id_1)^2 - S_cos(v_2, u) otherwise L_KD_2 = [S_cos(u_1, u_2)-S_cos(v_1, v_2)]^2 L_KD = yL_KD_1+(1-y)L_KD_2 Both losses are incorporated in a single equation as follows: Loss = L_BCE + L_KD § DATABASES This work builds on top of a proposal by Neto et al. <cit.>, hence, we use the same datasets to train and test our methodology: * FRLL: The Face Research London Lab dataset <cit.> was used to produce the FRLL-Morphs dataset <cit.>, which is frequently used to test morphing attack detection methods. Five different morphing techniques are used in the dataset, including Style-GAN2 <cit.>, WebMorph <cit.>, AMSL <cit.>, FaceMorpher <cit.>, and OpenCV <cit.>. Each of the five methods uses 204 genuine samples and more than one thousand morphed faces made from high-resolution frontal faces. We used this database only for evaluation purposes because it lacks distinct train, validation or test sets. * SMDD: The Synthetic Morphing Attack Detection Development (SMDD) <cit.>, is a novel dataset that uses synthetic images to create a dataset of morph and bonafide samples. It initially generated 500k images of faces using a random Gaussian noise vector sampled from a normal distribution using the official open-source version of StyleGan2-ADA <cit.>. Leveraging the quality estimation method known as CR-FIQA <cit.>, 50k of these photos were chosen for analysis because of their high quality, and 25k of them were determined to be the bonafide samples. The attack photos were paired with five other attack images at random, and 5k of them were chosen as key morphing images. Next, using the OpenCV <cit.> method, they were morphed, yielding 15k attack samples. The original 25k images that were used to generate the morphs are also publicly available. This dataset was divided in test and validation sets, on a proportion of 85-15%. § EXPERIMENTAL SETUP The autoencoder was trained for 300 epochs, with a learning rate of 1×10^-4, a batch size of 32, and Adam <cit.> was used as the optimisation algorithm to minimize the MSE loss. It trained exclusively on bonafide samples. The classifier was trained with the joint loss (Eq. <ref>) utilizing a learning rate of 1×10^-4, a batch size of 16 and was optimised with Adam. Furthermore, to align with the autoencoder, both v_1 and v_2 are 128-d. The training utilized the synthetic dataset SMDD, which allowed for this regularization term to utilise the original samples that originated the morphing samples. To evaluate the performance of the morphing detection, we evaluated our algorithm using different metrics, commonly used in the literature: the Attack Presentation Classification Error Rate (APCER) (i.e., morphing attacks classified as bonafide); and the Bonafide Presentation Classification Error Rate (BPCER) (i.e., the bonafide samples that are classified as morphing attacks). We evaluated these metrics at two different fixed APCER values (1.0% and 20.0%). The equal error rate (EER), which is the BPCER and APCER at the decision thresholds where they are the same, was also evaluated. § RESULTS AND DISCUSSION The literature on face morphing attack detection is large, however, is also disperse. 
In other words, the datasets used for benchmarking and training are not always the same, and as such, direct comparisons are not trivial. The combination of FRLL and SMDD has been found in at least two different documents in the literature. The first <cit.> introduces the SMDD dataset and evaluates three different methods from the literature: Inception <cit.>, PW-MAD <cit.> and MixFacenet <cit.>. Their results vary and there is not one that beats the others consistently across the different FRLL morphing methods. Afterwards, OrthoMAD was also evaluated using the exact same protocol <cit.> and achieved state-of-the-art results on three out of the five morphing approaches. Since we follow the protocol introduced by Damer et al. <cit.>, the comparison between our method and the ones in the literature focuses on the above-mentioned approaches. The results of our method, IDistill, are displayed in Table <ref>. As seen, IDistill has been able to surpass MixFacenet and Inception in all the test databases, and PW-MAD in four out of five databases. OrthoMAD has better results on two databases. A careful analysis of the results highlights that IDistill is considerably more consistent: the improvements on the databases where it surpasses the literature are much wider than the loss in performance on the two other databases. Looking at the most extreme examples, in FRLL-OpenCV the EER of our architecture is only 1.73 percentage points larger than the value obtained by OrthoMAD, while IDistill decreases the EER by 11.22 percentage points when tested on the FRLL-WebMorph dataset, which constitutes a much more relevant difference in performance. Looking beyond the EER, it is also possible to see a wide improvement in the BPCER@APCER at both 1% and 20%. Moreover, on FRLL-OpenCV the higher EER of IDistill is mitigated by a lower BPCER@APCER = 20%. When compared to OrthoMAD, our method presents an architecture with the same computational cost at inference, while being significantly more interpretable, since OrthoMAD does not guarantee that the information yielded by both vectors is related to identity. We are capable not only of identifying attacks, but also of justifying the decision using the information of which vectors contain identity information and which do not. Due to the approximation between the autoencoder latent space and the IDistill latent space, it might also be possible to reconstruct parts of the identity utilising the decoder. While not used in this study, the information regarding the intensity of the vectors extracted by the morphing classifier, v_1 and v_2, might also allow inferring the morphing percentage associated with each fused identity, which might be useful in future works. § CONCLUSION In this document we have presented a novel method for face morphing attack detection that is interpretable, compact and performs at the state-of-the-art level. The proposed IDistill method was trained utilising a two-step scheme, based on the training of an autoencoder to reconstruct bonafide images and a distillation step integrated into the standard training of a morphing classifier, utilising the encoder as teacher and the first part of the classifier as student. While we relaxed the orthogonality constraint from previous methods, we devised a more consistent and reliable solution to ensure that the identity information is, in fact, separated into two individual vectors.
Moreover, we dismiss any concatenation of these vectors, ensuring an interpretable analysis of the scores produced by each and their contribution to the final prediction. As future work on the interpretability capabilities of this study, it would be interesting to explore the reconstruction capabilities utilising the identity vectors and the decoder model from the autoencoder architecture. Another possible direction is to verify whether the intensities of the vectors extracted by the morphing classifier allow quantifying the morphing percentage of the identities that were fused to generate each attack sample. Overall, IDistill surpasses the performance of the previous methods published in the literature, while ensuring the advantages previously mentioned. In some scenarios the performance is drastically better. There is much work to be done on the topic of face morphing attack detection; nonetheless, IDistill is a step forward towards the integration of interpretable approaches that are competitive with fully black-box systems. § ACKNOWLEDGMENTS This work is co-financed by Component 5 - Capitalization and Business Innovation, integrated in the Resilience Dimension of the Recovery and Resilience Plan within the scope of the Recovery and Resilience Mechanism (MRR) of the European Union (EU), framed in the Next Generation EU, for the period 2021 - 2026, within project NewSpacePortugal, with reference 11. It was also financed by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia within the PhD grants “2020.06434.BD” and “2021.06872.BD”. The research work has also been funded by the German Federal Ministry of Education and Research and the Hessen State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
http://arxiv.org/abs/2306.04857v1
20230608011019
The Hybrid Extended Bicycle: A Simple Model for High Dynamic Vehicle Trajectory Planning
[ "Agapius Bou Ghosn", "Philip Polack", "Arnaud de La Fortelle" ]
cs.RO
[ "cs.RO" ]
The Hybrid Extended Bicycle: A Simple Model for High Dynamic Vehicle Trajectory Planning Agapius Bou Ghosn^1, Philip Polack^1, and Arnaud de La Fortelle^1,2 ^1 Center for Robotics, Mines Paris, PSL University, 75006 Paris, France [agapius.boughosn, philip.polack, arnaud.delafortelle]@minesparis.psl.eu ^2 Heex Technologies, Paris, France July 31, 2023 While highly automated driving relies most of the time on a smooth-driving assumption, it is very likely that a vehicle will have to perform harsh, highly dynamic maneuvers to face unexpected events. Modeling the behavior of the vehicle in these events is crucial for proper planning and control; the model used should be both accurate and computationally efficient. In this article, we propose an LSTM-based hybrid extended bicycle model able to accurately describe the state of the vehicle in both normal and aggressive situations. The introduced model is used in an MPPI framework for planning trajectories in high-dynamic scenarios where other simple models fail. § INTRODUCTION The ability to operate a vehicle in emergency scenarios is essential for its safety. Aggressive maneuvers are not frequently encountered, but they can pose threats to the safety of a vehicle that is not prepared to deal with them. An autonomous vehicle is expected to be able to plan feasible trajectories and to apply controls to follow the planned trajectories in all scenarios. Trajectory planning for autonomous vehicles makes use of three main categories of planners <cit.>: graph-search-based planners such as A* (e.g. <cit.>), sampling-based planners such as Rapidly-exploring Random Trees (RRT) (e.g. <cit.>), and optimal-control-based planners such as Model Predictive Control (MPC), which have been widely used in autonomous vehicle applications in recent years (e.g. <cit.>, <cit.>). All of the mentioned planners depend on a vehicle model that is required to accurately represent the behavior of the vehicle. The accuracy of the vehicle model used directly affects the vehicle's actions, and thus its safety. The difficulty of predicting the vehicle behavior in high dynamics lies in the resulting strong nonlinearities (especially at the tire level), which defy the assumptions used for normal driving. In these scenarios, assumptions such as linear tire behavior or no-slip conditions, which are usually employed in simple models, are no longer valid. This urges designers to upgrade to more complex vehicle and tire models. While this guarantees a larger validity domain, it creates two problems: on the one hand, using complex models introduces a computational burden for the planner, which is supposed to make fast decisions; on the other hand, the additional parameters of complex models may put their robustness in jeopardy. The literature includes many works that use a kinematic bicycle model to solve the motion planning problem of a vehicle (e.g. <cit.>, <cit.>, <cit.>). While the kinematic bicycle model is a simple model with computationally efficient properties, its validity domain is limited to lateral accelerations a_y<0.5μ g as proven in <cit.>.
Thus, using the kinematic bicycle model to plan trajectories may result in infeasible ones, especially when in high dynamics, i.e. high lateral accelerations. Other works make use of more complex models as the dynamic bicycle model used in <cit.> with a linear tire model or in <cit.> with a Pacejka <cit.> tire model. In the first work, the operational range of the planner is limited to the linear region of the tire model, i.e. low dynamic maneuvers; while in the second work, the use of the Pacejka model requires the knowledge of many parameters that are not easily accessible. In addition, implementing tire dynamics imposes the usage of lower integration time steps due to the fast dynamics of the wheels limiting the planner to lower planning horizons. This article targets the problem of representing the behavior of the vehicle in high dynamics using a simple model to plan feasible trajectories. We present a hybrid vehicle model that describes the motion of the vehicle in low and high dynamic maneuvers. The presented method makes use of the extended bicycle model <cit.>, an augmentation of the kinematic bicycle model. We integrate recurrent neural networks in the extended model to correct the modeling errors and account for slipping angles at the wheels. In summary, this paper presents the following contributions: * We augment the extended bicycle model to a hybrid model that uses recurrent neural networks to represent the motion of the vehicle even at the limits of handling. * We present a planning architecture based on the Model Predictive Path Integral (MPPI) <cit.> that uses the developed hybrid model to plan trajectories in different scenarios. * We compare the proposed planner to a kinematic bicycle based planner knowing that the kinematic model is heavily used in the literature for planning purposes. In the rest of this paper, the reference model is a four-wheel vehicle model with a Pacejka tire model simulated by <cit.> and used as a reference model in several works (e.g. <cit.>, <cit.>, <cit.>). The paper will start by presenting the reference model in Section <ref>, both the kinematic bicycle model and the extended bicycle model are then presented in Sections <ref> and <ref> respectively. The proposed approach, including the data generation procedure, the network architecture and training, and the planner are detailed in Sections <ref> and <ref>. Testing of the proposed approach is then presented in Section <ref> with a comparison to a kinematic bicycle based planner. The paper is concluded in Section <ref>. § MODELS In this section we present the different models to be used for our development and analysis. As stated before, the reference model is a four-wheel vehicle model defined in <cit.>. The kinematic bicycle model and the extended bicycle model are both simplified vehicle models. The extended model will be the basis of the proposed approach while the kinematic bicycle model will be used for comparison purposes only. The reference model will be briefly presented in Section <ref>, followed by the kinematic bicycle model in Section <ref> and the extended bicycle model in Section <ref>. §.§ Reference Model The reference model used to describe the dynamics of the vehicle is a 9 degrees of freedom (DoF) four-wheel dynamic model with a Pacejka tire model <cit.>. It can accurately describe the motion of the vehicle and it was chosen as a reference in several works as mentioned in the previous section. 
The states, parameters and control inputs describing the model are presented in Table <ref>. As illustrated in Fig. <ref>, the model is made of two front steerable wheels and two rear non-steerable wheels. The motion of the model is represented by the velocity vector issued from the center of gravity (cog) of the vehicle. Furthermore, it is assumed that the roll and pitch occur around the center of gravity and that the aerodynamic forces are represented by a single force at the vehicle's front. The model considers the coupling of longitudinal and lateral slips and the load transfer between tires. The state of the model is defined as Z = [X V_x Y V_y ψ ψ̇ θ θ̇ ϕ ϕ̇ ω_1 ω_2 ω_3 ω_4]^T The equations governing the state evolution of the model can be found at the reference. The presented model serves as the reference vehicle for this paper. It will be used to generate data points to train the proposed approach at a first stage, and will be controlled based on the planned trajectories at a second stage. Planned trajectories will be computed based on the kinematic and extended bicycle models presented next. §.§ Kinematic Bicycle Model The kinematics study of a system is concerned with the motion of the system without reference to the forces or masses entailed in it. Several assumptions are added to the 9 DoF model to reach the kinematic bicycle model: * The four-wheel model is simplified into a bicycle model: front wheels are considered as a single steerable wheel, rear wheels as a single non-steerable wheel; * The pitch and roll dynamics are neglected; * The slip angles at both wheels are neglected. The model, with a center of gravity reference, is illustrated in Fig. <ref> and its states, parameters and inputs are presented in Table <ref>. The state of the model is defined as Z = [ X Y ψ ]^T and its evolution is defined as follows: Ẋ = Vcos(ψ+β) Ẏ = Vsin(ψ+β) ψ̇ = Vtanδcosβ/l_f+l_r β = arctan(l_r tanδ/l_f+l_r) As mentioned earlier the kinematic bicycle model is widely used for planning purposes in the literature; for this reason we will compare the performance of the proposed approach to it. A more accurate but simple model is presented next; it is an extension to the kinematic bicycle model. §.§ Extended Bicycle Model The extended bicycle model (EBM) shown in Fig. <ref> does not take into account the dynamics of the system as well. It is concerned with the kinematics of the vehicle while including the slip angles at the front and rear wheels. This model was introduced in <cit.> as an augmentation of the kinematic bicycle model presented previously to account for slips in the vehicle's motion. As it is the case for the previous model, the state of the extended model taking as reference the center of gravity consists of its X,Y coordinates and its heading ψ. The parameters and inputs of the model are the same as of the kinematic bicycle model presented in Table <ref> in addition to the slip angles at the front and rear wheels α_f and α_r. The state evolution of the system is defined as follows: Ẋ = Vcos(β+ψ) Ẏ = Vsin(β+ψ) ψ̇ = Vcosβ(tan(δ-α_f)+tanα_r)/l_f+l_r β = arctan(-l_f tanα_r + l_r tan(δ-α_f)/l_f+l_r) Although this model introduces accurate properties through the inclusion of the slip angles at the wheels, the difficulty remains in identifying the slip angles without referring to the vehicle's dynamics. The proposed approach solves this problem by using recurrent neural networks as we will discuss next. 
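As a concrete reference for the kinematic and extended bicycle equations above, a minimal discrete-time update of the extended bicycle model might look as follows. This is a sketch only: the Euler integration step, the wheelbase values and the function name are illustrative assumptions rather than values taken from the paper, and setting both slip angles to zero recovers the kinematic bicycle model.

import numpy as np

def extended_bicycle_step(state, V, delta, alpha_f, alpha_r,
                          l_f=1.17, l_r=1.77, dt=0.01):
    """One Euler step of the extended bicycle model.

    state = (X, Y, psi); V is the speed, delta the front steering angle,
    alpha_f/alpha_r the front/rear slip angles. With alpha_f = alpha_r = 0
    this reduces to the kinematic bicycle model. Wheelbase values are
    illustrative defaults.
    """
    X, Y, psi = state
    beta = np.arctan((-l_f * np.tan(alpha_r) + l_r * np.tan(delta - alpha_f))
                     / (l_f + l_r))
    X_dot = V * np.cos(beta + psi)
    Y_dot = V * np.sin(beta + psi)
    psi_dot = V * np.cos(beta) * (np.tan(delta - alpha_f) + np.tan(alpha_r)) / (l_f + l_r)
    return (X + dt * X_dot, Y + dt * Y_dot, psi + dt * psi_dot)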
§ THE HYBRID EXTENDED BICYCLE MODEL (HEBM) The proposed approach aims to provide a simple model able to accurately represent the behavior of the vehicle in high dynamics. The developed model will be used later on to plan vehicle trajectories. As mentioned previously, the extended bicycle model will be at the core of the proposed approach while integrating recurrent neural networks to predict the wheels' slip angles. The use of recurrent neural networks requires the creation of a training data set, this will be presented in Section <ref>; then, the network architecture and the training details will be presented in Section <ref>. §.§ Data Generation Introducing a slip angle predictor based on recurrent neural networks necessitates the creation of a training data set. As we mentioned earlier, the model presented in <cit.> and detailed in Section <ref> is used as our reference model; thus, we seek to develop a hybrid extended bicycle with a close behavior to the mentioned reference model. This implies the use of the reference model to generate the training samples. To generate the training samples specific torques and steering angles should be applied as control inputs to the simulator. The data set is made of 5000 2-second trajectories sampled at 100 Hz (a total of 1 million samples). The used procedure to create the data set is the following: * A random initial vehicle state is chosen. * Random controls are drawn from a uniform distribution as follows: * A torque distribution between 0 and 800 if V<10, -1000 and 800 if 10<V<30 and -1000 and 0 if V>30. * A steering angle distribution between -0.5 and 0.5. * The sampled controls are applied to the 9 DoF model and are held constant for a uniformly drawn period between 0.01 and 1. The state of the reference model is updated accordingly. * The procedure repeats from Step (2) until a 2-second trajectory is formed. The created data set is used to train the network defined next. §.§ Slip Angle Predictor The previously presented extended bicycle model lacks the knowledge of the slip angles at the front and rear wheels to be able to operate. In other words, the state evolution of the model described by Equations (<ref>) requires the values of the slips at the wheels to compute the next state. We propose to use recurrent neural networks (RNNs) to access these quantities. This choice is motivated by the ability of recurrent neural networks to perform well with dynamical systems <cit.>. The proposed predictor involves Long Short-Term Memory (LSTM) networks, a variant of RNNs. The proposed architecture is shown in Fig. <ref>. As shown in the figure, to be able to predict the slips at t=k the inputs to the network are split between the vehicle state part that involves the longitudinal and lateral velocities and yaw rate for the last 10 time steps (t=k-1..k-10), and the controls part that involves the controls (V, δ) applied at the last 9 time steps and the ones to be applied to reach the state at the current time step (t=k..k-9). The outputs of the network are the resulting slip angles at the wheels at t=k when the considered controls are applied. The output of the network will allow the computation of the state evolution of the model from t=k-1 to t=k using Equations (<ref>). In brief, at each time step the controls to be applied and the current state of the model are used in addition to the slip angles predicted by the network to compute the next state of the model. 
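Before detailing the network architecture, a compact sketch of the random-control rollout described in the Data Generation subsection above is given below. It is a sketch under stated assumptions: reference_model_step stands in for the 9 DoF simulator (not reproduced here), and the state is assumed to be a dictionary exposing at least V_x and V_y.

import numpy as np

def sample_controls(V):
    """Draw a (torque, steering) pair following the ranges listed above."""
    if V < 10:
        torque = np.random.uniform(0, 800)
    elif V < 30:
        torque = np.random.uniform(-1000, 800)
    else:
        torque = np.random.uniform(-1000, 0)
    steering = np.random.uniform(-0.5, 0.5)
    return torque, steering

def generate_trajectory(reference_model_step, initial_state, dt=0.01, T=2.0):
    """Roll out one 2-second trajectory at 100 Hz with piecewise-constant controls."""
    state, samples, t = initial_state, [], 0.0
    while t < T:
        V = np.hypot(state["V_x"], state["V_y"])
        torque, steering = sample_controls(V)
        hold = np.random.uniform(0.01, 1.0)   # hold duration of the sampled controls
        t_end = min(t + hold, T)
        while t < t_end:
            state = reference_model_step(state, torque, steering, dt)
            samples.append((dict(state), torque, steering))
            t += dt
    return samples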
The network is made of two LSTM layers consisting of 32 and 64 neurons respectively and three fully connected layers of 128, 256 and 128 neurons respectively. ReLU activation functions are used. The loss function employed consists of using the predicted slips to compute the state evolution of the model and comparing the resulting longitudinal and lateral velocities and yaw rate to the reference data. Thus, after each forward pass through the network, having the outputs α_f, α_r, the side-slip angle is calculated using Equation (<ref>) and the state evolution of the extended bicycle model is calculated using Equations (<ref>)-(<ref>), then the velocities are transformed to the vehicle frame using the following equations: V_x = Ẏsinψ + Ẋcosψ V_y = Ẏcosψ - Ẋsinψ These velocities along with the yaw rate are compared to the reference through an L1 loss that will be back propagated through the network. The loss function used will emphasize on lateral dynamics as we have noticed that learning the longitudinal dynamics was easier for the network. Hence, the loss function used is defined as follows: L = 0.2L_1^V_x+0.4L_1^V_y+0.41/γL_1^ψ̇ with γ=0.05 being a scaling factor to account for the scaling difference between V_x, V_y and ψ̇. The network is implemented using PyTorch and is trained using the Adam optimizer with a batch size equal to 64 and an initial learning rate equal to 1e-04. The trained slip predictor will be used, inside an extended bicycle model as described previously, resulting in the hybrid extended bicycle model. The resulting model will be employed in a planner next. § PLANNER The aim of this work is to create a model able to accurately represent the vehicle's behavior, to be used for planning feasible trajectories even in high dynamics. The hybrid extended model developed in the previous section will be used for this purpose. Given that the developed model involves complexities due to the used RNNs, implementing it in a classical optimal control application would be complicated and computationally demanding. For this reason, a Model Predictive Path Integral (MPPI) based planner is discussed. A review of the MPPI technique will be presented in Section <ref>; the MPPI technique will be applied to plan feasible trajectories using the developed model in Section <ref>; low level controllers that will apply controls to the simulator to follow the planned trajectories are presented in Section <ref>. §.§ Review of MPPI The Model Predictive Path Integral (MPPI) <cit.> is a sampling-based, derivative-free, model predictive control algorithm. It consists of using a Graphics Processing Unit (GPU) to sample a large number of trajectories based on a specific model at each time step which evaluation will lead to the computation of the optimal control. Algorithm <ref> describes the operation of the MPPI method. The algorithm starts by defining the number of samples, which is the number of trajectories τ to be generated, each having N time steps. The different trajectories are generated based on a model described by the function f to which inputs u+δ u are applied. Z_t_0 denotes the initial state of the model. δ u is sampled from a uniform distribution and added to an initial control sequence updated at each iteration. 
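To make the slip-angle predictor described above more concrete before returning to the MPPI algorithm, a PyTorch sketch matching the stated layer sizes (two LSTM layers of 32 and 64 units, fully connected layers of 128/256/128 with ReLU) is shown below. The exact way the state and control streams are fused is not fully specified in the text, so the per-time-step concatenation used here is an assumption of this sketch.

import torch
import torch.nn as nn

class SlipAnglePredictor(nn.Module):
    """LSTM-based predictor of the front/rear slip angles (alpha_f, alpha_r)."""

    def __init__(self, state_dim=3, control_dim=2):
        super().__init__()
        # Per time step: (V_x, V_y, yaw rate) and (V, delta), concatenated (assumption).
        self.lstm1 = nn.LSTM(state_dim + control_dim, 32, batch_first=True)
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 2),   # outputs: alpha_f, alpha_r
        )

    def forward(self, states, controls):
        # states: (batch, 10, 3); controls: (batch, 10, 2)
        x = torch.cat([states, controls], dim=-1)
        x, _ = self.lstm1(x)
        x, _ = self.lstm2(x)
        return self.head(x[:, -1])   # use the last hidden state of the sequence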
The term S represents the cost function that involves a running cost term q̂ computed at each time step and a terminal cost term ϕ computed at the end of the generation process, q̂ being: q̂ = q(Z) + (1-ν^-1)/2 δ u^T R δ u + u^T R δ u + 1/2 u^T R u where ν is the exploration noise, which determines how aggressively MPPI explores the state space, and R is a positive-definite control weight matrix. The control sequence is then updated according to the cost of each trajectory while taking into consideration the minimum cost and the inverse temperature λ, which impacts the degree of selectiveness of the algorithm. The updated control sequence is smoothed using a Savitzky-Golay convolutional filter. The first control of the sequence is returned and the process reiterates. The presented algorithm will be used for planning trajectories using the previously presented hybrid model. The implementation details are discussed next. §.§ MPPI-based Hybrid EBM Planner The proposed planner is based on an MPPI approach with a hybrid extended bicycle model. The planner makes use of Algorithm <ref> detailed in the previous section. In the proposed planner, the function f mentioned above represents the state evolution presented in Equations (<ref>). The architecture of the planner is shown in Fig. <ref>. It takes as inputs the reference path and speed, computes optimal trajectories, and outputs to the low-level controller the velocity and steering angle to follow at minimal cost. The running cost associated with the MPPI planner is defined as: q(Z) = (Z-Z^ref)^T Q_Z (Z-Z^ref) + (V-V^ref)^T Q_V (V-V^ref) where the state Z of the extended bicycle model is Z = [ X Y ψ ], which is compared to the reference path's X and Y coordinates and heading. The terminal cost used is ϕ(Z_T) = q(Z_T). The parameters defining the MPPI method used are shown in Table <ref>. The random noise values δ u are issued from a normal distribution between -0.08 and 0.05 for the velocity V control and between -0.02 and 0.02 for the steering angle δ control. The cost matrices are: Q_Z = 4 · Diag(1,1,10), Q_V = 3, R = 1e-02 · Diag(1,1). At each iteration, the proposed planner uses the current reference vehicle state as a starting point to sample K=1024 trajectories for N=100 time steps using the hybrid extended model that takes as control inputs V and δ; the MPPI algorithm is then used to compute the optimal control sequence. The first five computed controls are sent to the low-level controllers to actuate the reference vehicle accordingly. The planner runs at 20 Hz while the low-level controllers introduced next run at 100 Hz. §.§ Low level controllers In the architecture proposed above, the MPPI planner generates reference velocities and steering angles for the low-level controllers to follow. The low-level controllers apply torques and steering to the reference four-wheel model introduced previously. The low-level control is split into longitudinal control (torques) and lateral control (steering angle); each is treated separately. The longitudinal controller aims to make the reference vehicle's velocity reach the reference velocity received from the planner; thus, the error it tries to minimize is defined as e_V = V_ref - V_vehicle, with V_vehicle being the current simulator velocity computed from the state introduced in Section <ref> using V = √(V_x^2 + V_y^2). A PID controller, with gains K_P,V, K_D,V and K_I,V, is used. The lateral controller follows the approach introduced in <cit.>.
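Before detailing the lateral controller, the cost-weighted control update at the core of the MPPI planner above can be sketched as follows. This is a sketch only: the inverse temperature value and the Savitzky-Golay window parameters are illustrative assumptions, and the rollout/cost evaluation with the hybrid model is not reproduced here.

import numpy as np
from scipy.signal import savgol_filter

def mppi_control_update(u, noise, costs, lam=1.0):
    """Cost-weighted MPPI update of the nominal control sequence.

    u:     nominal controls, shape (N, m)
    noise: sampled perturbations delta_u, shape (K, N, m)
    costs: total cost S of each of the K rollouts, shape (K,)
    lam:   inverse temperature (illustrative value).
    """
    beta = costs.min()                          # subtract the minimum cost for stability
    weights = np.exp(-(costs - beta) / lam)
    weights /= weights.sum()
    u_new = u + np.einsum("k,knm->nm", weights, noise)
    # Smooth the updated sequence (window/order are illustrative choices).
    return savgol_filter(u_new, window_length=9, polyorder=3, axis=0)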
The lateral control is split to two parts: an open loop part and a closed loop part. The open loop part applies the reference steering angle received by the planner; the closed loop part consists of a PID controller that aims to reduce the heading error projected 5 time steps ahead by applying the reference velocity and steering angle to an HEBM. The gains are defined as K_P,δ, K_D,δ, and K_I,δ. The used gains for both longitudinal and lateral PID are shown in Table <ref>. The proposed approach is evaluated next. § RESULTS The proposed planning architecture is evaluated in this Section. The aim is to control the four-wheel vehicle presented in <ref> based on the approach presented above. As mentioned previously, the planner will get as inputs the reference path and a reference speed which will vary between test cases; will compute optimal trajectories and return reference velocities and steering angles for the low level controllers. The low level controllers will apply the torque and steering controls accordingly to the four-wheel vehicle model. We compare our approach to a kinematic bicycle model (KBM) based approach for the reasons stated previously; for the sake of fairness, we implement the same architecture by replacing the HEBM in Fig. <ref> by the KBM. In the following we start by validating our method based on the oval trajectory validation tests used in <cit.> and <cit.>. We then move to a lane change maneuver validation to assess the performance of our method in real life scenarios. §.§ Oval trajectory testing The test consists of running the vehicle on an oval trajectory with different speeds, clockwise (CW) and counterclockwise (CCW). The curvature of the reference path is shown in Fig. <ref>. To be able to compare the two methods at different speeds, we present in Table <ref> the different tests while showing the mean absolute lateral error (MAE), the max absolute lateral error and the mean velocity associated with each test. The lateral error being defined as: error = (Y_vehicle-Y_ref)cosψ_ref - (X_vehicle-X_ref)sinψ_ref X_ref, Y_ref, ψ_ref being the closest reference points to the vehicle. We remark that for both models higher velocities are associated with higher errors. Though, the KBM based planner is not comfortable with high velocities as it is not able to stick with the reference speed and have minimal errors simultaneously. On the other hand, the proposed HEBM approach is able to perform maneuvers with higher velocities, sticking with the desired velocity of each case, while maintaining low lateral errors. It can be seen e.g. that for a V_desired = 18 (CCW), the HEBM is able to reach high velocities, performing maneuvers with a_y^max=0.76g while keeping a maximum lateral error of 0.88 while the KBM approach can't keep with the desired velocity and shows higher errors. The HEBM planner has lower errors and higher velocities in all of the tested scenarios. A close up on the behavior of the vehicle under the two planners at their highest lateral error points is shown in Fig. <ref>. The plots show that the proposed planner is able to make the vehicle drive closer to the reference path. §.§ Lane change testing To further validate our method, we test its behavior when effecting a lane change maneuver. The lane change maneuver is based on the ISO 3888-1 standard and its curvature is shown in Fig. <ref>. Similarly to the previous test, we assess the performance of both approaches on different speeds. The results are seen in Table <ref>. 
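For reference, the signed lateral error defined above can be computed as sketched below; selecting the closest reference point by Euclidean distance and the function name are assumptions of this sketch.

import numpy as np

def lateral_error(x_v, y_v, ref_x, ref_y, ref_psi):
    """Signed lateral deviation from the closest reference point."""
    d2 = (ref_x - x_v) ** 2 + (ref_y - y_v) ** 2
    i = int(np.argmin(d2))   # index of the closest reference sample
    return ((y_v - ref_y[i]) * np.cos(ref_psi[i])
            - (x_v - ref_x[i]) * np.sin(ref_psi[i]))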
The table shows that the KBM based planner is able to drive the vehicle into higher velocities as compared with the previous test, this is due to the relaxed heading constraints of the lane change maneuver as opposed to the oval maneuver. The behavior of the KBM based planner lacks accuracy. The HEBM based planner is able to stick to the reference path with lower errors even at high velocities, while the KBM based planner loses accuracy with higher velocities. An illustration of the highest error case (V_desired=25) is seen in Fig. <ref>. The figure shows the behavior of both planners. The lateral acceleration of the vehicle under the HEBM planner reaches a_y^max=0.75g. The KBM planner is not able to effect the maneuver accurately, thus safely. In brief, the proposed approach is able to accurately follow the reference path provided to the planner while keeping with the reference velocity, while the kinematic bicycle model fails to follow the provided path especially at higher velocities i.e. higher lateral accelerations. § CONCLUSION In this paper, we proposed a simple model that combines an extended bicycle with LSTMs to model the state of the vehicle in challenging maneuvers. The aim was to use the proposed model for planning purposes. We started by defining our reference vehicle and then presented two simplified vehicle models: the kinematic bicycle model and the extended bicycle model. The KBM was introduced for comparison purposes given its popularity in the literature while the EBM was introduced as the basis of the proposed approach. We augmented the EBM using LSTMs to be able to predict the slip angles at the wheels. The augmented model was implemented in an MPPI-based planning approach. The proposed planner computes optimal trajectories and outputs reference velocities and steering to low level controllers that control the four-wheel vehicle's torque and steering. Two validation tests were performed, the oval maneuver and the lane change maneuver. The proposed approach was compared to a kinematic bicycle based approach. The performed tests showed that our approach is able to drive the vehicle in a more accurate way while keeping high velocities while the other approach failed to deliver accurate behavior. The proposed approach is able to cope with both low and high dynamic maneuvers making a vehicle using it able to navigate safely despite the harshness of the scenario. Future work would apply this method to real vehicles and explore its limitations. ieeetr
http://arxiv.org/abs/2306.13216v1
20230620171851
Diverse Community Data for Benchmarking Data Privacy Algorithms
[ "Aniruddha Sen", "Christine Task", "Dhruv Kapur", "Gary Howarth", "Karan Bhagat" ]
cs.CR
[ "cs.CR", "cs.LG" ]
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models Boxin Wang^1 Lead authors. Correspondence to: Boxin Wang mailto:[email protected] , Bo Li mailto:[email protected] , Weixin Chen^1*, Hengzhi Pei^1*, Chulin Xie^1*, Mintong Kang^1*, Chenhui Zhang^1*, Chejian Xu^1, Zidi Xiong^1, Ritik Dutta^1, Rylan Schaeffer^2, Sang T. Truong^2, Simran Arora^2, Mantas Mazeika^1, Dan Hendrycks^3,4, Zinan Lin^5, Yu Cheng^5, Sanmi Koyejo^2, Dawn Song^3, Bo Li^1* ^1University of Illinois at Urbana-Champaign ^2Stanford University ^3University of California, Berkeley ^4Center for AI Safety ^5Microsoft Corporation July 31, 2023 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== dataset-provisions § DATASET PROVISIONS dataset-url §.§ Dataset URLs * https://data.nist.gov/od/id/mds2-2895The NIST public data repository access point. * https://github.com/usnistgov/SDNist/tree/main/nist%20diverse%20communities%20data%20excerptsDirect data access link. * https://github.com/usnistgov/SDNist/blob/main/nist%20diverse%20communities%20data%20excerpts/data_dictionary.jsonDirect data access to the data dictionary. * The dataset DOI is https://doi.org/10.18434/mds2-289510.18434/mds2-2895. dataset-format-notes §.§.§ Dataset Format Notes The raw data are in CSV format with JSON data dictionaries defining valid values. The NIST data repository has a structured metadata retrieval system that interfaces with https://www.data.govdata.gov and conforms to https://www.go-fair.org/fair-principles/FAIR principles and the https://strategy.data.gov/best practice for Federal Data Strategy. See additional information https://data.nist.gov/sdp/#/abouthere. author-statement §.§ Author Statement The authors bear all responsibility in case of violation of rights. We have confirmed licensing and provide detailed information in the article and in the dataset datasheet. hosting-and-licensing §.§ Hosting and Licensing The data associated with this publication were created, hosted, and maintained, by the National Institute of Standards and Technology in their https://data.nist.gov/od/id/mds2-2895permanent data repository, in perpetuity. The data are in the public domain. https://www.nist.gov/open/copyright-fair-use-and-licensing-statements-srd-data-software-and-technical-series-publicationsNIST statement on software and data: "NIST-developed software is provided by NIST as a public service. You may use, copy, and distribute copies of the software in any medium, provided that you keep intact this entire notice. You may improve, modify, and create derivative works of the software or any portion of the software, and you may copy and distribute such modifications or works. Modified works should carry a notice stating that you changed the software and should note the date and nature of any such change. Please explicitly acknowledge the National Institute of Standards and Technology as the source of the software. NIST-developed software is expressly provided "AS IS." 
NIST MAKES NO WARRANTY OF ANY KIND, EXPRESS, IMPLIED, IN FACT, OR ARISING BY OPERATION OF LAW, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, AND DATA ACCURACY. NIST NEITHER REPRESENTS NOR WARRANTS THAT THE OPERATION OF THE SOFTWARE WILL BE UNINTERRUPTED OR ERROR-FREE, OR THAT ANY DEFECTS WILL BE CORRECTED. NIST DOES NOT WARRANT OR MAKE ANY REPRESENTATIONS REGARDING THE USE OF THE SOFTWARE OR THE RESULTS THEREOF, INCLUDING BUT NOT LIMITED TO THE CORRECTNESS, ACCURACY, RELIABILITY, OR USEFULNESS OF THE SOFTWARE. You are solely responsible for determining the appropriateness of using and distributing the software and you assume all risks associated with its use, including but not limited to the risks and costs of program errors, compliance with applicable laws, damage to or loss of data, programs or equipment, and the unavailability or interruption of operation. This software is not intended to be used in any situation where a failure could cause risk of injury or damage to property. The software developed by NIST employees is not subject to copyright protection within the United States." datasheet-for-dataset-nist-diverse-communities-data-excerpts § DATASHEET FOR DATASET “NIST DIVERSE COMMUNITIES DATA EXCERPTS” Questions from the https://arxiv.org/abs/1803.09010Datasheets for Datasets paper, v7. Jump to section: * motivationMotivation * compositionComposition * collection-processCollection process * preprocessingcleaninglabelingPreprocessing/cleaning/labeling * usesUses * distributionDistribution * maintenanceMaintenance motivation §.§ Motivation for-what-purpose-was-the-dataset-created §.§.§ For what purpose was the dataset created? The NIST Diverse Communities Data Excerpts (the Excerpts) are demographic data created as benchmark data for deidentification technologies. The Excerpts are designed to contain sufficient complexity to be challenging to de-identify and with a compact feature set to make them tractable for analysis. We also demonstrate the data contain subpopulations with varying levels of feature independence, which leads to small cell counts, a particularly challenging deidentification problem. The Excerpts serve as benchmark data for two open source projects at the National Institute of Standards and Technology (NIST): the https://doi.org/10.18434/mds2-2943SDNist Deidentified Data Report tool and the https://pages.nist.gov/privacy_collaborative_research_cycle/2023 Collaborative Research Cycle (CRC). who-created-the-dataset-e.g.-which-team-research-group-and-on-behalf-of-which-entity-e.g.-company-institution-organization §.§.§ Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Excerpts were created by the https://www.nist.gov/itl/applied-cybersecurity/privacy-engineeringPrivacy Engineering Program of the https://www.nist.gov/itlInformation Technology Laboratory at the https://www.nist.govNational Institute of Standards and Technology (NIST). The underlying data was published by the U.S. Census Bureau as part of the 2019 American Community Survey (ACS) https://www.census.gov/programs-surveys/acs/microdata/access.2019.html#list-tab-735824205Public Use Microdata Sample (PUMS). who-funded-the-creation-of-the-dataset §.§.§ Who funded the creation of the dataset? The data were collected by the U.S. Census Bureau, and the Excerpts were curated by NIST. Both are U.S. Government agencies within the Department of Commerce. 
Aspects of the Excerpts creation were supported under NIST contract 1333ND18DNB630011. any-other-comments §.§.§ Any other comments? No. composition §.§ Composition what-do-the-instances-that-comprise-the-dataset-represent-e.g.-documents-photos-people-countries §.§.§ What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? The instances in the data represent individual people. The Excerpts consist of a small curated geography and feature set derived from the significantly larger 2019 American Community Survey (ACS) Public Use Microdata Sample (PUMS), a publicly available product of the U.S. Census Bureau. The original ACS schema contains over four hundred features, which poses difficulties for accurately diagnosing shortcomings in deidentification algorithms. The Excerpts use a small but representative selection of 24 features, covering major census categories: Demographic, Household and Family, Geographic, Financial, Work and Education, Disability, and Survey Weights. Several Excerpts features are derivatives of the original ACS features, designed to provide easier access to certain information (such as income decile or population density). There is only one type of instance. All records in the data represent separate, individual people. how-many-instances-are-there-in-total-of-each-type-if-appropriate §.§.§ How many instances are there in total (of each type, if appropriate)? There are three geographic partitions in the data. See the “postcards” and data dictionaries in each respective directory for more detailed information. Instances in partitions: * : 27254 records * : 7634 records * : 9276 records does-the-dataset-contain-all-possible-instances-or-is-it-a-sample-not-necessarily-random-of-instances-from-a-larger-set §.§.§ Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? The data set is a curated sample of the ACS by geography, with a reduced feature set designed to provide a tractable foundation for benchmarking deidentification algorithms (24 features rather than the original ACS's 400 features). Geographically it is comprised of 31 Public Use Microdata Areas * : 27254 records drawn from 20 Public Use Microdata Areas (PUMAs) from across the United States. This excerpt was selected to include communities with very diverse subpopulation distributions. * : 9276 records drawn from six PUMAs of communities surrounding Dallas-Fort Worth, Texas area. This excerpt was selected to focus on areas with moderate diversity. * : 7634 records drawn from five PUMAs of communities from the North Shore to the west of the greater Boston, Massachusetts area. This excerpt was selected to focus on areas with less diversity. what-data-does-each-instance-consist-of §.§.§ What data does each instance consist of? The instances are individual, tabular data records in CSV format with Demographic, Household and Family, Geographic, Financial, Work and Education, Disability, and Survey Weights features. In addition, there is metadata and documentation: https://github.com/usnistgov/SDNist/blob/main/nist is-there-a-label-or-target-associated-with-each-instance §.§.§ Is there a label or target associated with each instance? No. These data are not designed specifically for classifier tasks. is-any-information-missing-from-individual-instances §.§.§ Is any information missing from individual instances? There is no missing information in these excerpts, all records are complete. 
are-relationships-between-individual-instances-made-explicit-e.g.-users-movie-ratings-social-network-links §.§.§ Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)? Relationships between records have not been included in this version of the data. Although the Excerpts data does contain multiple individuals from the same household, it does not include the ACS PUMS Household ID or relationship features needed to join them into a network. We expect to include those features in a future update to the Excerpts. are-there-recommended-data-splits-e.g.-training-developmentvalidation-testing §.§.§ Are there recommended data splits (e.g., training, development/validation, testing)? There are three geographic partitions to facilitate benchmarking algorithms on populations with differing levels of heterogeneity/diversity (MA, TX and National). There are no splits designed specifically for training and testing purposes. All of the data presented at this time are from the 2019 ACS collection. In the future we plan to add additional years. are-there-any-errors-sources-of-noise-or-redundancies-in-the-dataset §.§.§ Are there any errors, sources of noise, or redundancies in the dataset? Although ACS data consumers generally assume the data remains representative of the real population, the ACS PUMS data has had basic statistical disclosure control deidentification applied (including swapping and subsampling), which may impact its distribution. For more information, see documentation from the https://www.census.gov/programs-surveys/acs/library/handbooks/general.htmlU.S. Census Bureau. is-the-dataset-self-contained-or-does-it-link-to-or-otherwise-rely-on-external-resources-e.g.-websites-tweets-other-datasets §.§.§ Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? All of the data is self-contained within the repository. The data are drawn from public domain sources and thus have no restrictions on usage. does-the-dataset-contain-data-that-might-be-considered-confidential §.§.§ Does the dataset contain data that might be considered confidential? The Excerpts are a subset https://www.census.gov/programs-surveys/acs/microdata/access.2019.html#list-tab-735824205public data published by the U.S. Census Bureau. The U.S. Census Bureau is bound by law, under Title 13 of the U.S. Code, to protect the identities of individuals represented by the data. https://www.census.gov/about/policies/privacy/data_stewardship.htmlSee here for details on the Census' data stewardship. The Census takes elaborate steps to reduce risk of re-identification of individuals surveyed and provide information regarding their suppression scheme https://www.census.gov/programs-surveys/acs/technical-documentation/data-suppression.htmlhere. does-the-dataset-contain-data-that-if-viewed-directly-might-be-offensive-insulting-threatening-or-might-otherwise-cause-anxiety §.§.§ Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? No. does-the-dataset-relate-to-people §.§.§ Does the dataset relate to people? Yes. does-the-dataset-identify-any-subpopulations-e.g.-by-age-gender §.§.§ Does the dataset identify any subpopulations (e.g., by age, gender)? The data includes demographic features such as Age, Sex, Race and Hispanic Origin which may be used to disaggregate by subpopulation. 
It additionally includes non-demographic features such as Educational Attainment, Income Decile and Industry Category which also produce subpopulation distributions with disparate patterns of feature correlations. The racial and ethnicity subpopulation breakdown by geography is as follows (note that hispanic origin and race are separate features): * : 4% Hispanic and 89% White, 7% Asian, 2% Black, 2% Other, 0% AIANNH * : 19% Hispanic and 85% White, 7% Black, 4% Other, 3% Asian, 1% AIANNH * : 10% Hispanic and 56% White, 22% Black, 10% Other, 9% Asian, 3% AIANNH is-it-possible-to-identify-individuals-i.e.-one-or-more-natural-persons-either-directly-or-indirectly-i.e.-in-combination-with-other-data-from-the-dataset §.§.§ Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? The Excerpts are survey results from real individuals as collected by the U.S. Census Bureau. See the response above to "Does the dataset contain data that might be considered confidential?" for more information about Census' data protections. The subset of the Census' data that we provide here introduces no additional information, and therefore does not increase the risk of identifying individuals. does-the-dataset-contain-data-that-might-be-considered-sensitive-in-any-way-e.g.-data-that-reveals-racial-or-ethnic-origins-sexual-orientations-religious-beliefs-political-opinions-or-union-memberships-or-locations-financial-or-health-data-biometric-or-genetic-data-forms-of-government-identification-such-as-social-security-numbers-criminal-history §.§.§ Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? Yes. These data are detailed demographic records. See the response above to "Does the dataset contain data that might be considered confidential?" for more information about Census Bureau's data protections. any-other-comments-1 §.§.§ Any other comments? No. collection-process §.§ Collection process how-was-the-data-associated-with-each-instance-acquired §.§.§ How was the data associated with each instance acquired? This data is a curated geographic subsample of the 2019 American Community Survey Public Use Microdata files. The U.S. Census Bureau details its survey data collection approach https://www.census.gov/programs-surveys/acs/library/handbooks/general.htmlhere. what-mechanisms-or-procedures-were-used-to-collect-the-data-e.g.-hardware-apparatus-or-sensor-manual-human-curation-software-program-software-api §.§.§ What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? See previous response. if-the-dataset-is-a-sample-from-a-larger-set-what-was-the-sampling-strategy-e.g.-deterministic-probabilistic-with-specific-sampling-probabilities §.§.§ If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? The data set is a (deterministic) curated sample by geography.
It is comprised of 31 Public Use Microdata Areas * : 27254 records drawn from 20 Public Use Microdata Areas (PUMAs) from across the United States. This excerpt was selected to include communities with very diverse subpopulation distributions. * : 9276 records drawn from six PUMAs of communities surrounding Dallas-Fort Worth, Texas area. This excerpt was selected to focus on areas with moderate diversity. * : 7634 records drawn from five PUMAs of communities from the North Shore to the west of the greater Boston, Massachusetts area. This excerpt was selected to focus on areas with less diversity. who-was-involved-in-the-data-collection-process-e.g.-students-crowdworkers-contractors-and-how-were-they-compensated-e.g.-how-much-were-crowdworkers-paid §.§.§ Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? See the response above to "How was the data associated with each instance acquired?" over-what-timeframe-was-the-data-collected §.§.§ Over what timeframe was the data collected? This data was collected during 2019. were-any-ethical-review-processes-conducted-e.g.-by-an-institutional-review-board §.§.§ Were any ethical review processes conducted (e.g., by an institutional review board)? The Excerpts are a curated subsample of existing public data published by the U.S. Government. No IRB review was necessary by institution policy. does-the-dataset-relate-to-people-1 §.§.§ Does the dataset relate to people? Yes. did-you-collect-the-data-from-the-individuals-in-question-directly-or-obtain-it-via-third-parties-or-other-sources-e.g.-websites §.§.§ Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)? Other Sources. This data is a curated geographic subsample of the 2019 American Community Survey Public Use Microdata files, which are https://www.census.gov/programs-surveys/acs/microdata/access.2019.html#list-tab-735824205available here. were-the-individuals-in-question-notified-about-the-data-collection §.§.§ Were the individuals in question notified about the data collection? Yes. See the response above to "How was the data associated with each instance acquired?" did-the-individuals-in-question-consent-to-the-collection-and-use-of-their-data §.§.§ Did the individuals in question consent to the collection and use of their data? Yes. See the response above to "How was the data associated with each instance acquired?" if-consent-was-obtained-were-the-consenting-individuals-provided-with-a-mechanism-to-revoke-their-consent-in-the-future-or-for-certain-uses §.§.§ If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? See the response above to "How was the data associated with each instance acquired?" has-an-analysis-of-the-potential-impact-of-the-dataset-and-its-use-on-data-subjects-e.g.-a-data-protection-impact-analysis-been-conducted §.§.§ Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?
This data is a curated geographic subsample of the 2019 American Community Survey (ACS) Public Use Microdata files. Many investigations have examined ACS data with some information https://www.census.gov/programs-surveys/acs/library/handbooks/general.htmlpublished by the Census Bureau itself. The data presented here, the Excerpts, are a subset of the data and present no additional risks to the subjects surveyed by the Census. any-other-comments-2 §.§.§ Any other comments? No. preprocessingcleaninglabeling §.§ Preprocessing/cleaning/labeling was-any-preprocessingcleaninglabeling-of-the-data-done-e.g.-discretization-or-bucketing-tokenization-part-of-speech-tagging-sift-feature-extraction-removal-of-instances-processing-of-missing-values §.§.§ Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? The original ACS data is clean and no class labeling was done. However, several Excerpts features are new derivatives of ACS features designed to provide easier access to certain information. Population DENSITY divides PUMA population by surface area and allows models to distinguish rural and urban geographies. INDP_CAT aggregates detailed industry codes into a small set of broad categories. PINCP_DECILE aggregates incomes into percentile bins relative to the record's state. And, EDU simplifies the original ACS schooling feature to focus on milestone grades and degrees. was-the-raw-data-saved-in-addition-to-the-preprocessedcleanedlabeled-data-e.g.-to-support-unanticipated-future-uses §.§.§ Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? See the U.S. Census Bureau's documentation for information about https://www.census.gov/programs-surveys/acs/library/handbooks/general.htmlpublished ACS data. is-the-software-used-to-preprocesscleanlabel-the-instances-available §.§.§ Is the software used to preprocess/clean/label the instances available? The preprocessing was minimal (addition of a small set of derivative features), and can be reproduced as described above. The code is not currently available. any-other-comments-3 §.§.§ Any other comments? No. uses §.§ Uses has-the-dataset-been-used-for-any-tasks-already §.§.§ Has the dataset been used for any tasks already? The Excerpts serve as benchmark data for two open source projects at the National Institute of Standards and Technology (NIST): the https://doi.org/10.18434/mds2-2943SDNist Deidentified Data Report tool and the https://pages.nist.gov/privacy_collaborative_research_cycle/2023 Collaborative Research Cycle (CRC). is-there-a-repository-that-links-to-any-or-all-papers-or-systems-that-use-the-dataset §.§.§ Is there a repository that links to any or all papers or systems that use the dataset? No. Users are not mandated to contribute their work to any central repository. We publish user-contributed data https://github.com/usnistgov/privacy_collaborative_research_cyclehere. We recommend that data users cite our work using the https://doi.org/10.18434/mds2-2895dataset DOI. what-other-tasks-could-the-dataset-be-used-for §.§.§ What (other) tasks could the dataset be used for? The Excerpts were designed for benchmarking privacy-preserving data deidentification techniques such as synthetic data or statistical disclosure limitation. 
However, they can be used to study the behavior of any tabular data machine learning or analysis technique when applied to diverse populations. Synthetic data generators are just an especially verbose application of machine learning (producing full records rather than class labels), so tools designed to improve understanding of synthetic data have potential for a much broader application. is-there-anything-about-the-composition-of-the-dataset-or-the-way-it-was-collected-and-preprocessedcleanedlabeled-that-might-impact-future-uses §.§.§ Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? The U.S. Census Bureau recommends using sampling weights to account for survey undersampling and generate equitable full population statistics. The PWGPT feature included in the Excerpts is the person (record) level sampling weight. For full population statistics, each record should be multiplied by its sampling weight. are-there-tasks-for-which-the-dataset-should-not-be-used §.§.§ Are there tasks for which the dataset should not be used? The Excerpts are suitable for any application relevant to government survey data over the selected feature set. any-other-comments-4 §.§.§ Any other comments? No. distribution §.§ Distribution will-the-dataset-be-distributed-to-third-parties-outside-of-the-entity-e.g.-company-institution-organization-on-behalf-of-which-the-dataset-was-created §.§.§ Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? has-the-dataset-been-used-for-any-tasks-already-1 §.§.§ Has the dataset been used for any tasks already? The Excerpts serve as benchmark data for two open source projects at the National Institute of Standards and Technology (NIST): the https://doi.org/10.18434/mds2-2943SDNist Deidentified Data Report tool and the https://pages.nist.gov/privacy_collaborative_research_cycle/2023 Collaborative Research Cycle (CRC). how-will-the-dataset-will-be-distributed-e.g.-tarball-on-website-api-github §.§.§ How will the dataset will be distributed (e.g., tarball on website, API, GitHub)? https://doi.org/10.18434/mds2-289510.18434/mds2-289 when-will-the-dataset-be-distributed §.§.§ When will the dataset be distributed? The dataset is currently available to the public. will-the-dataset-be-distributed-under-a-copyright-or-other-intellectual-property-ip-license-andor-under-applicable-terms-of-use-tou §.§.§ Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? The data are in the public domain. https://www.nist.gov/open/copyright-fair-use-and-licensing-statements-srd-data-software-and-technical-series-publicationsSee the following statement from NIST. have-any-third-parties-imposed-ip-based-or-other-restrictions-on-the-data-associated-with-the-instances §.§.§ Have any third parties imposed IP-based or other restrictions on the data associated with the instances? No. All data are drawn from public domain sources. do-any-export-controls-or-other-regulatory-restrictions-apply-to-the-dataset-or-to-individual-instances §.§.§ Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? No. All data are drawn from public domain sources and have no known export or regulatory restrictions. any-other-comments-5 §.§.§ Any other comments? No. 
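Since the sampling weights matter for population-level statistics, a small pandas sketch of how the Excerpts might be loaded and weighted is given below. The file name is illustrative, and the column names (AGEP for age, PWGTP for the person-level weight, RAC1P for race) follow the ACS PUMS schema as an assumption; they should be checked against the data dictionary shipped with the Excerpts.

import pandas as pd

# Illustrative file name; see the repository links above for the actual paths.
df = pd.read_csv("national2019.csv")

# Unweighted vs. survey-weighted mean age.
unweighted_mean_age = df["AGEP"].mean()
weighted_mean_age = (df["AGEP"] * df["PWGTP"]).sum() / df["PWGTP"].sum()

# Weighted counts by race category, useful for subpopulation summaries.
weighted_counts = df.groupby("RAC1P")["PWGTP"].sum()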
maintenance §.§ Maintenance who-is-supportinghostingmaintaining-the-dataset §.§.§ Who is supporting/hosting/maintaining the dataset? This dataset is hosted by NIST and maintained by the Privacy Engineering Program. how-can-the-ownercuratormanager-of-the-dataset-be-contacted-e.g.-email-address §.§.§ How can the owner/curator/manager of the dataset be contacted (e.g., email address)? Dataset managers can be reached by https://github.com/usnistgov/SDNist/issuesraising an issue, emailing the [email protected] Engineering Program, or by contacting the project principal investigator, mailto:[email protected] Howarth. is-there-an-erratum §.§.§ Is there an erratum? There have been small updates to the metadata data_dictionary.json files (for example, to improve clarity in descriptive strings for features). The data are maintained in a public Git repository and thus all changes to the data are recorded in a public ledger. will-the-dataset-be-updated-e.g.-to-correct-labeling-errors-add-new-instances-delete-instances §.§.§ Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? Since the data are excerpts from the 2019 release of the American Community Survey, we do not expect any updates to labels or instances. We do plan on one major updated version release in the future with the following improvements: * : Allows joins between individuals in the same household * : Supports reidentification research. * : Including excerpts from 2018 for algorithm development/training and as a baseline for reidentification studies * : Our current low-diversity excerpts, MA and TX, have far fewer records than our high-diversity excerpt, National; this can be a confounding factor for comparative analyses. if-the-dataset-relates-to-people-are-there-applicable-limits-on-the-retention-of-the-data-associated-with-the-instances-e.g.-were-individuals-in-question-told-that-their-data-would-be-retained-for-a-fixed-period-of-time-and-then-deleted §.§.§ If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? These data are in the public domain and as such there are no retention limits. will-older-versions-of-the-dataset-continue-to-be-supportedhostedmaintained §.§.§ Will older versions of the dataset continue to be supported/hosted/maintained? The data are maintained in a public Git repository and thus all changes to the data are recorded in a public ledger. There are specific releases in the repository that capture major data milestones. if-others-want-to-extendaugmentbuild-oncontribute-to-the-dataset-is-there-a-mechanism-for-them-to-do-so §.§.§ If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? We invite the public to use and build on these resources. First, these resources are provided by NIST as a public service, and the public is free to integrate these resources into their own work. Second, we invite the public to raise issues in the dataset repository, allowing for a transparent interaction. Individuals and groups wishing to make substantial contributions are encouraged to contact the project principal investigator, mailto:[email protected] Howarth. any-other-comments-6 §.§.§ Any other comments? No.
§ MATH APPENDIX §.§ Proofs of Lemmas 2.1 and 2.2 (and additional material) We introduced the concept of dispersal ratio in the main paper with the purpose of giving the reader a clear and intuitive explanation of the term. In doing so, we omitted some formal results that might be interesting to examine in order to understand the mechanics behind dispersal ratio and independence. The perceptive reader may have noticed that we stated two lemmas in section 2 without proving them. Recall the definition of dispersal ratio. Let the dispersal ratio for a population P with the addition of feature X be defined as Disperse(S, X, P) = |bin_(S+X)(P)| / |bin_S(P)| We begin by providing proofs of Lemma 2.1 (corresponding to Lemma C.1) and Lemma 2.2 (corresponding to Lemma C.2) as stated in section 2. We follow them up with a result that may be of interest. These proofs follow the same framework and terminology as used in the main paper. An uncertainty coefficient of 1 is equivalent to a dispersal ratio of 1. U(X|F) = 1 ⟺ Disperse(S, X, P) = 1 U(X|F) = 1 ⇒ (H(X)-H(X|F))/H(X) = 1 ⇒ H(X|F) = 0 Consider the following result. H(X|F)=0 if and only if X is a function of F, i.e., ∀ f: p(f)>0, there is only one possible value of x with p(x,f)>0 <cit.>. Let this function from F to X be denoted by X = g(F). Applying the result here, let there be m elements in the domain of F, which implies there can be no more than m elements in the co-domain of X, to constitute a valid function. Let the elements in the range of F be denoted by f_1,f_2... f_m, and those of X be denoted by x_1,x_2... x_m'. Since X is a function of F, there exists only one element x_i ∈ X corresponding to f_j ∈ F. Thus, all bins in the schema (S+X) can be denoted by (f_i,x_i) = (f_i,g(f_i)). Since there are m bins, corresponding to the size of the domain, in F, there will be exactly m bins in F' = (F,X). Therefore, |bin_(S+X)(P)| = |bin_S(P)| ⇒ Disperse(S,X,P) = 1 Similarly, the converse of the lemma can be proved by taking the converse of the above result and noting that, if the dispersal ratio is 1, the map g' = g^-1 from the bins of (F,X) to the bins of F is uniquely defined, so that X is a function of F. An uncertainty coefficient of 0 leads to the maximum dispersal ratio. U(X|F) = 0 ⇒ Disperse(S, X, P) = |Range(X)| U(X|F) = 0 ⇒ (H(X)-H(X|F))/H(X) = 0 ⇒ H(X|F) = H(X) This implies X and F are independent observations <cit.>. Observe that the range of Y = (X,F) can take at most n_max = |Range(X)||Range(F)| values, since that is the number of elements in X × F. Note that |Range(F)| = |bin_S(P)|. Here, |Range(Y)| = n_max due to the independence of X and F, since ∀ x,f: Pr[Y = (x,f)] = Pr[X=x] * Pr[F=f] ≠ 0 As there are n_max non-zero values for the probability distribution of Y, the size of the range of Y is maximum. Note that Y exactly expresses the distribution of values in the schema (S+X). Therefore, |bin_(S+X)(P)| = |bin_S(P)||Range(X)| ⇒ Disperse(S,X,P) = |Range(X)| which is the maximum dispersal ratio since |bin_S(P)|*|Range(X)| was maximized. We now show an interesting consequence of the relation between dispersal ratio and the initial population. The following lemmas prove that a small population size can lead to small cell counts. Consider a population P with a sub-population P_1, distributed in a table-based partitioned schema. Consider an individual i ∈ P_1, who gets placed in a bin under schema S. We denote the size of that bin as size(bin_S(i)). Let a feature f be added to the schema. The dispersal ratio is always greater than or equal to 1.
Disperse(S, f, P_1) ≥ 1 Consider an arbitrary bin_S(i) in the schema S with the m features in the feature set f_1,f_2,f_3...f_m. Adding a new feature f to the schema S with feature values (say) in the set V= {v_1,v_2} will subdivide all records in the bin f_1,f_2,f_3...f_m into f_1,f_2,f_3...f_m,v_1 and f_1,f_2,f_3...f_m,v_2, by the definition of partitioning. bin_S(i) in the schema S will therefore be replaced by one bin or more in the schema (S+f). Thus, the dispersal ratio for the sub-population of bin_S(i): i ∈ P_1 is always at least 1. Since this ratio is at least one for each disjoint sub-population corresponding to each bin ∈ S, the dispersal ratio for the overall population P_1 over the schema S, on adding a new feature f, is also at least 1. The average bin size of P_1 under schema S is defined as [∑_S(i): i ∈ P_1size(bin_S(i))]/|bin_S(P_1)| If a new feature f is added to the schema, denoted by S+f, then the average bin size will stay the same or decrease. [∑_S(i): i ∈ P_1size(bin_S(i))]/|bin_S(P_1)|≥[∑_(S+f)(i): i ∈ P_1size(bin_(S+f)(i))]/|bin_(S+f)(P_1)| For each of the disjoint partitions of some S(i): i ∈ P_1, records of the form i ∈ P_1 do not get merged with any records that were not in the initial bin S(i), by definition of partitioning. Thus, summing over all such bins, [∑_S(i): i ∈ P_1size(bin_S(i))] ≥[∑_(S+f)(i): i ∈ P_1size(bin_(S+f)(i))] Note that the inequality is '≥' because there may be bins in the schema S+f that contain no records of the form i ∈ P_1, even though their records were previously grouped with records i ∈ P_1 in the schema S. From Lemma C.3, if the dispersal ratio for population P_1 is r_1, then r_1≥1, which implies |bin_S(P_1)| ≤ |bin_(S+f)(P_1)| Combining equations (1) and (2) proves our result, by observing that they control the numerator and denominator, respectively, of our desired inequality. Assume two sub-populations P_0 and P_1 are distributed in the same arbitrary number of bins |bin_S(P_1)| = |bin_S(P_0)| = m. If, on adding a feature f, P_0 and P_1 have the same dispersal ratio (r_0 = r_1 = r'), then |bin_(S+f)(P_1)| = |bin_(S+f)(P_0)| = mr'. The ratio of their average bin sizes for the schema S+f is ([∑_(S+f)(i): i ∈ P_1size(bin_(S+f)(i))]/mr') / ([∑_(S+f)(i): i ∈ P_0size(bin_(S+f)(i))]/mr') The average bin size is directly correlated to the size of the sub-population for the same initial number of bins and the same dispersal ratio. Therefore, if one subgroup (say P_0) is smaller than the other (P_1), then the average bin size for P_0 is less than that of P_1. As the average bin size drops for members of a sub-population, the utility will also drop monotonically for partition-based algorithms. § FEATURE DEFINITIONS AND RECOMMENDED SUBSETS Figure <ref> lists the 24 Excerpts features. The majority are from the 2019 American Community Survey Public Use Microdata; four of them (DENSITY, INDP_CAT, EDU, PINCP_DECILE) were derived from ACS features or public data as described in <ref>. Along with feature type, we've included cardinality (number of possible values). Because some deidentification algorithms require small feature spaces, the NIST CRC program recommends three smaller feature subsets: Demographic-focused, Industry-focused and Family-focused. Each subset showcases different feature mechanics, while sharing common features to delineate subpopulations (SEX, MSP, RAC1P, OWN_RENT, PINCP_DECILE).
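As a concrete illustration of the dispersal ratio defined in the Math Appendix above, the quantity can be computed from a table with a few lines of pandas. The sketch below uses made-up toy data with feature names borrowed from the Excerpts conventions; it is only an illustration of the definition, not part of the SDNist tooling.

import pandas as pd

def dispersal_ratio(df, schema, new_feature):
    # Disperse(S, X, P) = |bin_(S+X)(P)| / |bin_S(P)|: the factor by which
    # the number of occupied bins grows when new_feature is added to schema.
    bins_s = df.groupby(schema).size()
    bins_sx = df.groupby(schema + [new_feature]).size()
    return len(bins_sx) / len(bins_s)

# toy population (values invented for illustration only)
toy = pd.DataFrame({
    "SEX":          [1, 1, 2, 2, 2, 1],
    "OWN_RENT":     [1, 2, 1, 1, 2, 2],
    "PINCP_DECILE": [3, 3, 7, 7, 9, 3],
})
print(dispersal_ratio(toy, ["SEX", "OWN_RENT"], "PINCP_DECILE"))

In this toy table the ratio is 1.0 because PINCP_DECILE happens to be fully determined by (SEX, OWN_RENT), matching the intuition of Lemma C.1; a feature that is independent of the existing schema would instead multiply the number of occupied bins by roughly its number of observed values, as in Lemma C.2.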
§ DETAILED EVALUATION REPORTS AND METADATA ON SELECTED DEIDENTIFIED DATA SAMPLES As we noted in the main paper, the https://github.com/usnistgov/privacy_collaborative_research_cycle/NIST CRC Data and Metrics Bundle is an archive of 300 deidentified data samples and evaluation metric results. To demonstrate the efficacy of the Excerpts for identifying and diagnosing behaviors of deidentification algorithms on diverse populations, we selected seven algorithms from the archive to showcase in the paper. Below we provide the complete metadata and highlighted PCA plot for each sample, as well as links to their detailed evaluation reports (available online in the sample report section of the https://github.com/usnistgov/SDNist/tree/main/sdnist/report/sample-reportsSDNist repository). Each detailed evaluation report contains the metrics listed below, along with complete results, detailed metric definitions accessible to non-technical stakeholders, a human-readable data dictionary, and additional references. SDNist Detailed Report Metrics List: * K-marginal Edit Distance * K-marginal Subsample Equivalent * K-marginal PUMA-specific Score * Univariate Distribution Comparison * Kendall Tau Correlation Differences * Pearson Pairwise Correlation Differences * Linear Regression (EDU vs PINCP_DECILE), with Full 16 RACE + SEX Subpopulation Breakdowns * Propensity Distribution * Pairwise Principal Component Analysis (Top 5) * Pairwise PCA (Top 2, with MSP = 'N' highlighting) * Inconsistencies (Age-based, Work-based, Housing-based) * Worst Performing PUMA Breakdown (Univariates and Correlations) * Privacy Evaluation: Unique Exact Match Metric * Privacy Evaluation: Apparent Match Metric §.§ Deidentified Data Summary Table For convenience, we include the deidentified data summary table from the main paper. §.§ Differentially Private Histogram (epsilon-10) A differentially private histogram is a naive solution that simply counts the number of occurrences of each possible record value, and adds noise to the counts. We use the Tumult Analytics library to efficiently produce a DPHistogram with a very large set of bins. Epsilon 10 is a very weak privacy guarantee, and this simple algorithm provides very poor privacy in these conditions. The points in the 'deidentified' PCA are nearly the exact same points as in the target PCA. The full metric report can be found https://htmlpreview.github.io/?https://github.com/usnistgov/SDNist/blob/main/sdnist/report/sample-reports/report_dphist_e_10_cf8_na2019_05-19-2023T18.01.12/report.htmlhere. §.§ SmartNoise PACSynth (epsilon-10, Industry-focused) We've included two samples from the PACSynth library to showcase its behavior on different feature subsets. The technique provides both differential privacy and a form of k-anonymity (removing rare outlier records). This provides very good privacy (Table <ref>), but it can also erase dispersed subpopulations. The industry feature subset below was used for the regression metric in the main paper, which showed erasure of graduate degree holders among both white men and black women. More information on the technique can be found https://pages.nist.gov/privacy_collaborative_research_cycle/pages/techniques.html#smartnoise-pacsynthhere. The full metric report can be found https://htmlpreview.github.io/?https://github.com/usnistgov/SDNist/blob/main/sdnist/report/sample-reports/report_pac_synth_e_10_industry_focused_na2019_05-19-2023T18.01.12/report.htmlhere.
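The target-versus-deidentified PCA comparison referred to throughout these subsections can be reproduced in outline with scikit-learn. The sketch below is an approximation of the idea rather than the SDNist implementation (which applies its own preprocessing, component selection and subpopulation highlighting); the file names are placeholders, and the shared features are assumed to be numerically coded.

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# placeholder file names; both tables are assumed to share numeric-coded features
target = pd.read_csv("target.csv")
deid = pd.read_csv("deidentified.csv")
features = [c for c in target.columns if c in deid.columns]

# fit the components on the target data only, then project both data sets
pca = PCA(n_components=2)
target_2d = pca.fit_transform(target[features])
deid_2d = pca.transform(deid[features])

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharex=True, sharey=True)
axes[0].scatter(target_2d[:, 0], target_2d[:, 1], s=2)
axes[0].set_title("target")
axes[1].scatter(deid_2d[:, 0], deid_2d[:, 1], s=2)
axes[1].set_title("deidentified")
plt.show()

Because the components are fit on the target data alone, a deidentified sample that collapses onto fewer feature combinations (as in the PACSynth examples) shows up directly as reduced spread along the same axes.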
§.§ SmartNoise PACSynth (epsilon-10, Family-focused) On the family-focused feature subset we can see the impact of the k-anonymity protection more dramatically. Because the deidentified data with removed outliers has reduced diversity, it occupies a much smaller area in the plot as compared to the target data. The deidentified records are concentrated into fewer, more popular feature combinations and thus their points show less variance along the PCA axes. More information on the technique can be found https://pages.nist.gov/privacy_collaborative_research_cycle/pages/techniques.html#smartnoise-pacsynthhere. The full metric report can be found https://htmlpreview.github.io/?https://github.com/usnistgov/SDNist/blob/main/sdnist/report/sample-reports/report_pac_synth_e_10_family_focused_na2019_05-19-2023T18.01.12/report.htmlhere. §.§ SmartNoise MST (epsilon-10) The MST synthesizer uses a probabilistic graphical model (PGM), with a maximum spanning tree (MST) structure capturing the most significant pair-wise feature correlations in the ground truth data as noisy marginal counts. This solution was the winner of the 2019 NIST Differential Privacy Synthetic Data Challenge. Note that it provides good utility with much better privacy than the simple DP Histogram, but its selected marginals fail to capture some constraints on child records (in red). More information on the technique can be found https://pages.nist.gov/privacy_collaborative_research_cycle/pages/techniques.html#smartnoise-msthere. The full metric report can be found https://htmlpreview.github.io/?https://github.com/usnistgov/SDNist/blob/main/sdnist/report/sample-reports/report_mst_e10_demographic_focused_na2019_05-19-2023T18.01.12/report.htmlhere. §.§ R synthpop CART model The fully conditional Classification and Regression Tree (CART) model does not satisfy formal differential privacy, but provides better privacy than some techniques which do (Table <ref>). It uses a sequence of decision trees trained on the target data to predict each feature value based on the previously synthesized features; familiarity with decision trees is helpful for tuning this model. Note that the two PCA distributions have very similar shapes, comprised of different points. You can find more information on the technique https://pages.nist.gov/privacy_collaborative_research_cycle/pages/techniques.html#rsynthpop-carthere. The full metric report can be found https://htmlpreview.github.io/?https://github.com/usnistgov/SDNist/blob/main/sdnist/report/sample-reports/report_cart_cf21_na2019_05-19-2023T18.01.12/report.htmlhere. §.§ MOSTLY AI Synthetic Data Platform MOSTLYAI is a proprietary synthetic data generation platform which uses a partly pretrained neural network model to generate data. The model can be configured to respect deterministic constraints between features (for a comparison, see MOSTLYAI submissions 1 in the CRC Data and Metrics Bundle linked above). It does not provide differential privacy, but does very well on both privacy and utility metrics (Table <ref>). More information on the technique can be found https://pages.nist.gov/privacy_collaborative_research_cycle/pages/techniques.html#mostlyai-sdhere. The full metric report can be found https://htmlpreview.github.io/?https://github.com/usnistgov/SDNist/blob/main/sdnist/report/sample-reports/report_mostlyai_sd_platform_MichaelPlatzer_2_05-19-2023T18.01.12/report.htmlhere. §.§ Synthetic Data Vault CTGAN CTGAN is a type of Generative Adversarial Network designed to operate well on tabular data.
Unlike the MostlyAI neural network (which is pretrained with public data), the CTGAN network is only trained on the target data. It is able to preserve some structure of the target data distribution, but it introduces artifacts. In other metrics, we see it also has difficulty preserving diverse subpopulations. More information on the technique can be found https://sdv.dev/SDV/user_guides/single_table/ctgan.htmlhere. The full metric report can be found https://htmlpreview.github.io/?https://github.com/usnistgov/SDNist/blob/main/sdnist/report/sample-reports/report_sdv_ctgan_epochs500_SlokomManel_1_05-19-2023T18.01.12/report.htmlhere. §.§ synthcity ADSGAN ADSGAN is a Generative Adversarial Network focused on providing strong privacy for synthetic data. While it doesn't formally satisfy differential privacy, it uses a parameter alpha to inject noise during the training process. Unfortunately, we see it is unable to preserve any meaningful structure from the target data distribution in this submission. More information on the technique can be found https://pages.nist.gov/privacy_collaborative_research_cycle/pages/techniques.html#synthcity-adsganhere. The full metric report can be found https://htmlpreview.github.io/?https://github.com/usnistgov/SDNist/blob/main/sdnist/report/sample-reports/report_adsgan_ZhaozhiQian_1_05-19-2023T18.01.12/report.htmlhere.
http://arxiv.org/abs/2306.05038v1
20230608084328
The magnetic, spectroscopic, and photometric variability of the Wolf-Rayet star WR55
[ "S. P. Järvinen", "S. Hubrig", "R. Jayaraman", "A. Cikota", "M. Schöller" ]
astro-ph.SR
[ "astro-ph.SR" ]
Studies of magnetic fields in the most evolved massive stars, the Wolf-Rayet stars, are of special importance because they are progenitors of certain types of supernovae. The first detection of a magnetic field of the order of a few hundred Gauss in the WN7 star WR 55, based on a few FORS 2 low-resolution spectropolarimetric observations, was reported in 2020. In this work we present new FORS 2 observations allowing us to detect magnetic and spectroscopic variability with a period of 11.90 h. No significant frequencies were detected in TESS and ASAS-SN photometric observations. Importantly, magnetic field detections are achieved currently only in two Wolf-Rayet stars, WR 6 and WR 55, both showing the presence of corotating interacting regions. techniques: polarimetric — techniques: spectroscopic — techniques: photometric — stars: individual: WR 55 — stars: magnetic field — stars: Wolf-Rayet § INTRODUCTION The role of magnetic fields in the evolution of massive stars remains poorly understood. Massive O stars have been found to have large-scale organized, predominantly dipolar magnetic fields <cit.>, so that we expect to detect such fields in their descendants, the Wolf-Rayet (WR) stars. <cit.> calculated a sequence of rotating, magnetized pre-collapse models for WR stars with sub-solar metallicity and suggested that the presence of magnetic fields could explain extreme events such as superluminous supernovae and gamma-ray bursts powered by proto-magnetars or collapsars. They concluded that further modelling of WR stars with different metallicities and rotational velocities will allow for comparisons with observed rates of superluminous supernovae and gamma-ray bursts. <cit.> predicted a fractional circular polarization of a few times 10^-4 for WR magnetic fields of about 100 G.
Verifying this prediction observationally, however, is difficult due to the WR stars' significantly broadened emission-line spectra arising from a strong stellar wind. Recent work has found evidence for WR magnetic fields: marginal detections in three stars by <cit.>, who used high-resolution spectropolarimetry, and a 3.3σ detection of a mean longitudinal magnetic field in the cyclically variable, X-ray emitting WN5-type star WR 6 by <cit.>, who used low-resolution FORS 2 spectropolarimetry <cit.>. Cyclical variability can arise from a corotating interacting region (CIR), whose signatures include different photospheric absorption features simultaneously propagating with different accelerations. CIRs in massive stars have been found using spectroscopic time series <cit.>. These may relate to magnetic bright spots, which often signal the presence of a global magnetic field <cit.>. <cit.> reported a definite detection in another WR star with similar cyclical variability <cit.>, the hydrogen-deficient WN7-type star WR 55 <cit.>. The measured mean longitudinal magnetic field ⟨B_z⟩ showed a change in polarity, with the highest field values ⟨B_z⟩=-378±85 G (4.4σ) and ⟨B_z⟩=205±58 G (3.5σ). This work presents additional FORS 2 spectropolarimetric observations of WR 55, alongside photometry, and aims to constrain the magnetic field strength and identify any periodicities. § MAGNETIC AND SPECTRAL VARIABILITY Twelve new FORS 2 spectropolarimetric observations, in addition to the four observations presented in <cit.>, are listed in Table <ref>. The new spectra were acquired between 2022 February 16 and April 11, with the same instrument setup as in <cit.>. The wavelength was calibrated with a He-Ne-Ar arc lamp, and the extraction of the ordinary and extraordinary beams was done using standard iraf procedures <cit.>. The measurements of the mean longitudinal magnetic field ⟨B_z⟩ were carried out using procedures presented in prior work <cit.>. Our frequency analysis based on the ⟨B_z⟩ measurements in Table <ref> was performed using a Levenberg-Marquardt non-linear least-squares fit <cit.>. To detect the most probable period, we calculated the frequency spectrum, and for each trial frequency, we performed a statistical F-test of the null hypothesis – the absence of periodicity <cit.>. The resulting F-statistic can be thought of as the total sum, including covariances, of the ratio of harmonic amplitudes to their standard deviations, i.e., a S/N. As shown in the top panel of Fig. <ref>, the highest peak in the frequency spectrum corresponds to a period P=11.9006±0.0001 h. The distribution of the measured ⟨B_z⟩ values over this period assuming the ephemeris T_0=59664.2260 is presented in the bottom panel of Fig. <ref>. Assuming a dipolar field structure, this ephemeris corresponds to the maximum positive field extremum in the corresponding sinusoidal field phase curve. Additionally, three spectral lines, the Heii and Hβ blend, the Heii 5412 Å line, and the Civ doublet, were investigated for the presence of rotationally-modulated variability of line intensities and radial velocities. The results of the frequency analysis of equivalent widths (EWs) for all three lines are presented in the top panel of Fig. <ref>. The highest peaks in the frequency spectra not coinciding with the window function correspond to slightly longer periods: P=11.95±0.0020 h for the Heii/Hβ blend, and P=11.94±0.0002 h for the Heii and Civ lines.
Notably, our frequency analysis indicates a lower significance for the periods obtained using EWs than for the period using the magnetic field measurements directly: the reduced χ^2-values for the fits are 1.10 for the ⟨B_z⟩ measurements, 5.83 for the Heii/Hβ blend, 15.00 for the Heii 5412 Å line, and 13.30 for the Civ doublet. The distributions of the EW values over the detected periods, assuming the same ephemeris as for the magnetic field measurements, are presented in the bottom panel of Fig. <ref>. We observe a clear phase shift of 0.22 between the maxima of the measured EW and the magnetic maximum. As WR 55 is a CIR-type variable target, a curvature due to a spiral CIR can result in the development of a potentially exploitable phase lag <cit.>. Variability of EWs can be caused either by an inhomogeneous distribution of temperature or chemical elements on the surface of a rotating star, or a companion. Concerning the last possibility, we find no information in the literature on the binarity of WR 55. We do not detect any significant frequency peaks in our measurements of the radial velocities. This may be because the emission-line spectra of WR stars are mainly formed in expanding atmospheric layers, where the wavelength and shapes of emission lines have multiple contributions: from wind velocity, stellar rotation, and (potential) orbital motion. Because CIRs may be explained via a paradigm involving bright spots driving CIR structures, we assume that the observed EW variability can be explained by temperature spots. The individual lines used in the equivalent width measurements are most likely formed slightly higher up in the expanding atmosphere (we estimate by 0.3% of the stellar radius), so that the rotation period becomes longer due to the conservation of angular momentum. § PHOTOMETRIC VARIABILITY §.§ TESS photometry and contamination WR 55 was observed by TESS in Sectors 11 and 38, from 2019 April 22 to 2019 May 21, and from 2021 April 28 to 2021 May 26. We constructed custom light curves for Sector 11 with TESSCut <cit.>, and for Sector 38 from the Target Pixel File (TPF) output by the Science Processing Operations Center (SPOC) pipeline <cit.>, using lightkurve <cit.>. TESS light curves often suffer significant contamination from nearby stars due to the large plate scale (21”/pixel). For WR 55, there is a nearby bright (G_RP = 8.63) β Cephei variable star (HD 117704) whose flux could fall into the chosen aperture. Figure <ref> shows the magnitudes (in the G_RP band, which spans a similar range as the TESS passband) and locations of bright (G_RP < 13) sources near WR 55, from Gaia Data Release 3 <cit.>. To mitigate possible contamination, we used a custom one-pixel aperture including only WR 55 (light green shading in the left panel of Fig. <ref>; purple shading corresponds to the SPOC aperture). The 30-minute cadence light curve from Sector 11 was extracted via TESSCut using a similar one-pixel aperture; backgrounds were estimated as in section 2 of <cit.>. These light curves did not need detrending. The light curves and periodograms of Sectors 11 and 38 are similar. §.§ The Low-Frequency Excess We used the Discrete Fourier Transform <cit.> to calculate the periodograms of the light curves. <cit.>, <cit.>, and <cit.> assign a physical interpretation to the low-frequency excess (red noise) in the frequency spectra of massive stars.
For WR stars in particular, three explanations have been posited: (a) interactions between clumped stellar wind and pulsations <cit.>, (b) internal gravity waves <cit.>, and (c) a sub-surface convection zone caused by an iron-peak opacity bump <cit.>, which is most pronounced in WR stars. <cit.> also suggest a line-deshadowing instability (LDI) to explain the variability. Finally, CIRs can also induce flux variability, but this was studied only in the radio <cit.>. As in prior literature, we fit the red noise with a semi-Lorentzian function added to a white noise term: A(ν) = A_0/(1 + (ν/ν_char)^γ) + C_W, where A_0 represents the amplitude scale of the low-frequency variability, ν_char is its characteristic frequency, γ is an exponent, and C_W is the white noise component, which also captures instrumental variability. For our fit, we compare a custom implementation of the Markov Chain Monte Carlo (MCMC) method and the Levenberg-Marquardt non-linear least-squares method <cit.>. To calculate formal uncertainties in power spectra amplitudes, we first assumed a Poisson uncertainty (√(N); N is the count of e^-/second) for each point in both light curves and then bootstrapped 1000 realizations of these points, with Gaussian noise. We then calculated the Lomb-Scargle periodogram using lightkurve <cit.> and then used the standard deviation of the amplitudes in each frequency bin (across all 1000 power spectra) as the formal per-point uncertainty to be used when fitting, in order to derive parameter uncertainties. We contrast this with <cit.>, wherein parameter uncertainties are the square root of the diagonal of the covariance matrix, multiplied by √(χ^2_best fit), divided by 0.5 × N_data - 4 (degrees of freedom). The MCMC ran for 5× 10^6 steps; priors (see Table <ref>) were derived iteratively, with initial estimates based on the ranges of values in <cit.> and <cit.>. Best-fit parameter values from both techniques are in Table <ref>; they agree to within their (similar) mutual uncertainties. Posterior distributions for Sector 38 parameters are in Fig. <ref>. This is the first application and comparison of both techniques for the same star, showing that they are equally valid; <cit.> solely used MCMC, while <cit.> used only Levenberg-Marquardt. The best-fit semi-Lorentzians are shown in Fig. <ref>; an F-test shows that a red noise model is strongly favored over a white-noise-only model (p < 10^-16). §.§ Photometric Peaks No significant (S/N > 5) or marginal (3 < S/N < 5) peaks were found in Sector 11. In Sector 38, we find two marginal peaks at 1.8544 ± 0.0003 d^-1 and 2.6485 ± 0.0003 d^-1 (blue dashed lines in Fig. <ref>). To search for longer-period variability, we used the light curve from the All-Sky Automated Survey for Supernovae Sky Patrol web server <cit.>. Data were separated by camera, and the periodograms were calculated with astropy <cit.>. We found marginal peaks at ∼6.4 d (in camera be) and ∼20–25 d^-1 (in both cameras); we do not ascribe physical meaning to these. None of these periods matches P = 11.9 h from Sect. <ref> (the brown dashed line in the bottom right panel of Fig. <ref>), or the observed pulsation frequencies of the nearby β Cep star. Marginal peaks may correspond to pulsation modes, which shift, appear, and disappear over many months, and may underlie the discrepancy in red noise parameters between sectors. Recovery of WR 55's photometric rotation period may also be hindered by circumstellar material.
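To illustrate the red-noise model above, the semi-Lorentzian plus white-noise function can be fit to an amplitude spectrum with a standard least-squares routine. The following scipy sketch uses synthetic data and is only an outline of the approach; it is not the MCMC/lmfit setup used to derive the published parameters.

import numpy as np
from scipy.optimize import curve_fit

def semi_lorentzian(nu, a0, nu_char, gamma, c_w):
    # A(nu) = A_0 / (1 + (nu / nu_char)^gamma) + C_W
    return a0 / (1.0 + (nu / nu_char) ** gamma) + c_w

# synthetic amplitude spectrum standing in for a TESS periodogram
rng = np.random.default_rng(0)
nu = np.linspace(0.05, 25.0, 2000)                      # frequency in d^-1
amp = semi_lorentzian(nu, 1.0, 2.0, 2.5, 0.05) + rng.normal(0.0, 0.02, nu.size)

popt, pcov = curve_fit(semi_lorentzian, nu, amp, p0=[1.0, 1.0, 2.0, 0.1])
perr = np.sqrt(np.diag(pcov))                           # formal 1-sigma uncertainties
print(popt, perr)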
§ DISCUSSION AND CONCLUSIONS <cit.> found a magnetic field in WR 55, and our additional observations confirm this and constrain the amplitude to ∼ 200 G. The sinusoidal field phase curve indicates a likely dipolar field structure. After the detection of a longitudinal magnetic field in WR 6, WR 55 is now the second magnetic star showing the presence of a CIR, suggesting that WRs with CIRs are good candidates for future magnetic surveys. Notably, pulsations cannot cause the short periodicity in WR 55, as we see a change in field polarity. While low-amplitude magnetic variability has been found in pulsating magnetic stars, these changes in ⟨B_z⟩ are ∼10-15 G <cit.>. Similar short periodicities have been reported for other stars (e.g., 9.8 h in WR 123, <cit.>; 15.5 h in WR 46, <cit.>). With our period and a radius estimate of 5.23 R_⊙ from <cit.>, we find v_eq=534 km s^-1, suggesting that WR 55 was perhaps spun up by binary interaction <cit.>. Magnetic spectropolarimetry of WR stars, especially nitrogen-sequence WN stars, is crucial, as they are likely progenitors of type Ib or IIb supernovae <cit.>. Finally, we compare our photometric results for WR 55 with the large samples of <cit.> and <cit.>. Table 2 of <cit.> shows their best-fit values of the four semi-Lorentzian parameters; we focus on WR 78 and WR 87, which are like WR 55: WN-type stars with both 30- and 2-minute cadence data. These two stars have A_0 and C_W decreasing between sectors, but ν_char and γ increase, which agrees with our findings for WR 55. This effect (especially for A_0 and C_W) may partially arise from the differing cadences and noise properties of the data; this is explored in fig. 5 of <cit.>. However, γ (and ν_char) correspond to intrinsic properties of the star, so their change across sectors is interesting and may indicate the existence of slow, internal stellar processes. For instance, both our values of γ agree with the predictions of <cit.> and may indicate core-generated internal gravity waves. More data will help untangle the relative effects of sources of flux variability. § ACKNOWLEDGEMENTS We thank our referee, Dr. Pascal Petit, for insightful comments. This work is based on observations made with ESO telescopes at the La Silla Paranal Observatory under programme IDs 0104.D-0246(A) and 109.230H.001. This paper includes data collected by the TESS mission. Funding for the TESS mission is provided by the NASA Science Mission Directorate. Resources supporting this work were also provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center to produce the SPOC data products <cit.>. The 2-min cadence data of WR 55 were obtained via Guest Investigator program G03095 (PI: T. Dorn-Wallenstein). § DATA AVAILABILITY All FORS 2 data are available from the ESO Science Archive Facility at <http://archive.eso.org/cms.html>. TESS data can be downloaded from the Barbara A. Mikulski Archive for Space Telescopes (<mast.stsci.edu>). 99 [Aguilera-Dena et al.2020]Aguilera Aguilera-Dena D. R., Langer N., Antoniadis J., Müller B., 2020, , 901, 114 [Appenzeller et al.1998]Appenzeller1998 Appenzeller I., et al., 1998, The ESO Messenger, 94, 1 [Astropy Collaboration2013]2013A A...558A..33A Astropy Collaboration, et al., 2013, , 558, A33 [Bowman et al.2019a]bowman-blue-sg Bowman D. M., et al., 2019a, Nature Astronomy, 3, 760 [Bowman et al.2019b]bowman-2019 Bowman D. M., et al., 2019b, , 621, A135 [Brasseur et al.2019]2019ascl.soft05007B Brasseur C. E., Phillip C., Fleming S.
W., Mullally S. E., White R.  L., 2019, Astrophysics Source Code Library, record ascl:1905.007 [Cantiello et al.2009]canetiello-fecz Cantiello M., et al., 2009, , 499, 279 [Chené & St-Louis2011]Chene-St Chené A.-N., St-Louis N., 2011, , 736, 140 [Chené et al.2011]Chene Chené A.-N., et al., 2011, , 530, A151 [de la Chevrotiére et al.2014]delacherv de la Chevrotiére A., St-Louis N., Moffat A. F. J., MiMeS Collaboration, 2014, , 781, 73 [Cikota et al.2017]Cikota Cikota A., Patat F., Cikota S., Faran T., 2017, , 464, 4146 [Deeming1975]deeming-dft Deeming T. J., 1975, , 36, 137 [Edelmann et al.2019]2019ApJ...876....4E Edelmann P. V. F., Ratnasingam R. P., Pedersen M. G., Bowman D. M., Prat V., Rogers T. M., 2019, , 876, 4 [Gaia Collaboration2016]gaia-mission Gaia Collaboration et al., 2016, , 595, A1 [Gaia Collaboration2021]gaia-dr3 Gaia Collaboration et al., 2021, , 649, A1 [Gayley & Ignace2010]GayleyIgnace Gayley K. G., Ignace R., 2010, , 708, 615 [Grunhut et al.2017]Grunhut Grunhut J. H., et al., 2017, , 465, 2432 [Hamann, Gräfener & Liermann2006]Hamann2006 Hamann W.-R., Gräfener G., Liermann A., 2006, , 457, 1015 [Hamann et al.2019]Hamann2019 Hamann W.-R., et al., 2019, , 625, A57 [Hénault-Brunet et al.2011]Henault-Brunet Hénault-Brunet V., St-Louis N., Marchenko S. V., Pollock A. M. T., Carpano S., Talavera A., 2011, , 735, 13 [Hubrig et al.2004a]Hubrig2004a Hubrig S., Kurtz D. W., Bagnulo S., Szeifert T., Schöller M., Mathys G., Dziembowski W. A., 2004a, , 415, 661 [Hubrig et al.2004b]Hubrig2004b Hubrig S., Szeifert T., Schöller M., Mathys G., Kurtz D. W., 2004b, , 415, 685 [Hubrig et al.2013]Hubrig2013 Hubrig S., et al., 2013, , 551, A33 [Hubrig et al.2016]Hubrig2016 Hubrig S., et al., 2016, , 458, 3381 [Hubrig et al.2020]Hubrig2020 Hubrig S., Schöller M., Cikota A., Järvinen S. P., 2020, , 499, L116 [Hubrig et al.2023]Hubrig2023 Hubrig S., Järvinen S. P., Ilyin I.; Schöller M., Jayaraman R., 2023, , 521, 6228 [Ignace, St-Louis & Prinja2020]Ignace Ignace R., St-Louis N., Prinja R. K., 2020, , 497, 1127 [Jenkins et al.2016]jenkins-spoc Jenkins J. M., et al., 2016, in Chiozzi G., Guzman J. C., eds, Proc. SPIE Conf. Ser. Vol. 9913, SPIE, Bellingham, p. 99133E [Kochanek et al.2017]asassn-lc Kochanek C. S., et al., 2017, , 129, 104502 [Lenoir-Craig et al.2022]lenoir-craig-2022 Lenoir-Craig G., et al., 2022, , 925, 79 [Lightkurve Collaboration2018]2018ascl.soft12013L Lightkurve Collaboration, et al., 2018, Astrophysics Source Code Library, record ascl:1812.013 [Lomb1976]1976Ap SS..39..447L Lomb N. R., 1976, , 39, 447 [de Mink et al.2012]deMink de Mink S. E., Brott I., Cantiello M., Izzard R. G., Langer N., Sana H., 2012, in Drissen L., Robert C., St-Louis N., Moffat A. F. J., eds, ASP Conf. Ser., Vol. 465, San Francisco, p. 65 [Mullan1984]Mullan Mullan D. J., 1984, , 283, 303 [Nazé, Rauw & Gosset2021]naze-2021 Nazé Y., Rauw G., Gosset E., 2021, , 502, 5038 [Newville et al.2014]newville-2014-lmfit Newville M., Stensitzki T., Allen D. B., Ingargiola A., 2014, doi:10.5281/zenodo.11813 [Press et al.1992]press Press W. H., Teukolsky S. A., Vetterling W. T., Flannery B. P., 1992, Numerical Recipes, 2nd edn. Cambridge: Cambridge University Press [Ramiaramanantsoa et al.2014]Ramiaramanantsoa Ramiaramanantsoa T., et al., 2014, , 441, 910 [Scargle1982]1982ApJ...263..835S Scargle J. D., 1982, , 263, 835 [Schöller et al.2017]BOB Schöller M., et al., 2017, , 599, A66 [Shultz et al.2017]Shultz Shultz M., Wade G. 
A., Rivinius Th., Neiner C., Henrichs H., Marcolino W., 2017, , 471, 2286 [Seber1977]seber Seber G. A. F., 1977, Linear Regression Analysis. Wiley, New York [Stevance2019]Stevance Stevance H. F. 2019, PhD Thesis, University of Sheffield, arXiv:1906.07184 [Toalá et al.2022]toala-wr7 Toalá J. A., et al., 2022, , 514, 2269
http://arxiv.org/abs/2306.11117v1
20230619183758
On the Rényi index of random graphs
[ "Mingao Yuan" ]
math.ST
[ "math.ST", "stat.TH" ]
Mingao Yuan, Department of Statistics, North Dakota State University. Networks (graphs) permeate scientific fields such as biology, social science, economics, etc. Empirical studies have shown that real-world networks are often heterogeneous, that is, the degrees of nodes do not concentrate on a number. Recently, the Rényi index was tentatively used to measure network heterogeneity. However, the validity of the Rényi index in network settings is not theoretically justified. In this paper, we study this problem. We derive the limit of the Rényi index of a heterogeneous Erdös-Rényi random graph and a power-law random graph, as well as the convergence rates. Our results show that the Erdös-Rényi random graph has asymptotic Rényi index zero and the power-law random graph (highly heterogeneous) has asymptotic Rényi index one. In addition, the limit of the Rényi index increases as the graph gets more heterogeneous. These results theoretically justify that the Rényi index is a reasonable statistical measure of network heterogeneity. We also evaluate the finite-sample performance of the Rényi index by simulation. MSC2020 subject classifications: 60K35; 05C80. Keywords: Rényi index, random graph, heterogeneity, network data. § INTRODUCTION A network (graph) consists of a set of individuals (nodes) and a set of interactions (edges) between individuals. It has been widely used to model and analyze many complex systems. For example, in social science and economics, networks play a central role in the transmission of information, the trade of many goods and services, and determining how diseases spread, etc. <cit.>; in biology, a network is a method of representing the physical contacts between proteins <cit.>. In the past decade, network data analysis has been a primary research topic in statistics and machine learning <cit.>. In many fields of science and engineering, one of the elemental problems is to measure the statistical heterogeneity of datasets. For instance, in statistical physics the entropy was devised to measure the randomness of systems (<cit.>). In economics, various inequality indices were designed to gauge the evenness of the distribution of wealth in human populations (<cit.>). Motivated by entropy and inequality indices, <cit.> recently introduced the Rényi index to measure statistical heterogeneity of probability distributions defined on the positive half-line. The Rényi index takes values in the range [0,1]. A larger value represents a higher level of heterogeneity. Its properties were systematically studied in <cit.> and the Rényi indexes of several well-known distributions (such as the Pareto distribution, Gamma distribution, Beta distribution, etc.) are calculated in <cit.>. Empirical studies have shown that many real-world networks are heterogeneous, that is, the degrees of individuals do not concentrate on a number <cit.>. It is important to be able to compare networks according to the heterogeneity that they exhibit, and thus to have a stable summary statistic that provides insight into the structure of a network. Recently the Rényi index was tentatively used to measure heterogeneity of financial networks and interesting findings were obtained <cit.>. However, the validity of the Rényi index in network settings is not theoretically justified, and some of the fundamental questions are not studied in <cit.>.
For instance, whether the Rényi index of a homogeneous network is actually close to zero, whether the Rényi index of heterogeneous network is indeed large, and how the Rényi index depends on network model parameters. In this paper, we shall answer the above mentioned questions and provide a theoretical justification for the Rényi index as a network heterogeneity measure. To this end, we derive the limit of the Rényi index of a heterogeneous Erdös-Rényi random graph and a power-law random graph, as well as the convergence rates. Based on our results, the Erdös-Rényi random graph (homogeneous) has asymptotic Rényi index zero, while the well-known power-law random graph (highly heterogeneous) has asymptotic Rényi index one. Moreover, the limit of the Rényi index explicitly depends on model parameters, from which it is clear that the Rényi index increases as the model gets more heterogeneous. These results theoretically justify the Rényi index is a reasonable statistical measure of network heterogeneity. In addition, we run simulations to evaluate finite-sample performance of the Rényi index. The structure of the article is as follows. In Section <ref> we collect the main results. Specifically, in Section <ref>, we present the limit of the Rényi index of a heterogeneous Erdös-Rényi random graph; in Section <ref>, we present the limit of the Rényi index of a power-law random graph. Simulation studies are given in Section <ref> . All proofs are deferred to Section <ref>. Notation: Let c_1,c_2 be two positive constants. For two positive sequences a_n, b_n, denote a_n≍ b_n if c_1≤a_n/b_n≤ c_2; denote a_n=O(b_n) if a_n/b_n≤ c_2; a_n=o(b_n) if lim_n→∞a_n/b_n=0. Let X_n be a sequence of random variables. We use X_n⇒ F to denote X_n converges in distribution to a probability distribution F. X_n=O_P(a_n) means X_n/a_n is bounded in probability. X_n=o_P(a_n) means X_n/a_n converges to zero in probability. 𝒩(0,1) stands for the standard normal distribution. Let I[E] be the indicator function of an event E. We adopt the convention 0 log 0 = 0. Let n be a positive integer. § THE RÉNYI INDEX OF NETWORK A graph or network 𝒢 consists of a pair (𝒱,ℰ), where 𝒱=[n]:={1,2,…,n} denotes the set of vertices and ℰ denotes the set of edges. For i<j, denote A_ij=1 if {i,j}∈ℰ is an edge and A_ij=0 otherwise. Suppose A_ii=0, that is, self loops are not allowed. Then the symmetric matrix A=(A_ij)∈{0,1}^⊗ n^2 is called the adjacency matrix of graph 𝒢. A graph is said to be random if the elements A_ij (1≤ i<j≤ n) are random. Given a positive constant α, the Rényi index of a graph (<cit.>) is defined as ℛ_α= 1-[1/n∑_i=1^n(d_i/d)^α]^1/1-α, if α≠ 1; 1-exp(-1/n∑_i=1^nd_i/dlogd_i/d), if α=1, where d_i is the degree of node i, that is, d_i=∑_j≠ iA_ij and d is the average of degree, that is, d=∑_i=1^nd_i/n. The Rényi index includes several popular indexes as a special case. When α=1, the Rényi index ℛ_1 is a function of the Theil's index. When α=2, the Rényi index ℛ_2 is function of the Simpson's index. For 0<α≤ 1 the Rényi index ℛ_α is the Atkinson's index. The parameter α allows researchers to tune the Rényi index to be more sensitive to different populations. In practice, commonly used values are α=1,2,3 (<cit.>). For any fixed α>0, the Rényi index ℛ_α is between 0 and 1. The Rényi index takes values in [0,1]. It is tentatively used to measure degree heterogeneity of graphs (<cit.>). We shall derive an asymptotic expression of the Rényi index of two random graphs. 
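For readers who prefer code, the definition above can be evaluated directly from an adjacency matrix. The following numpy sketch is included purely for illustration (the Erdös-Rényi example at the end is an arbitrary choice); it is not code from the study itself.

import numpy as np

def renyi_index(A, alpha):
    # Rényi index of a simple graph with adjacency matrix A
    d = A.sum(axis=1).astype(float)                 # degrees d_i
    x = d / d.mean()                                # d_i divided by the average degree
    if alpha == 1:
        terms = np.zeros_like(x)
        mask = x > 0
        terms[mask] = x[mask] * np.log(x[mask])     # convention 0 log 0 = 0
        return 1.0 - np.exp(-terms.mean())
    return 1.0 - np.mean(x ** alpha) ** (1.0 / (1.0 - alpha))

# Erdös-Rényi graph: homogeneous, so the index should be close to zero
rng = np.random.default_rng(0)
n, p = 2000, 0.05
upper = np.triu(rng.random((n, n)) < p, 1).astype(int)
A = upper + upper.T
print(renyi_index(A, 2))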
Note that ℛ_α is a non-linear function of the degrees d_i (1≤ i≤ n), and the degrees are not independent and may not be identically distributed. These facts make studying asymptotic properties of the Rényi index a non-trivial task. §.§ The Rényi index of a heterogeneous Erdös-Rényi random graph In this section, we study the asymptotic Rényi index of a heterogeneous Erdös-Rényi random graph. Let f(x,y) be a symmetric function from [0,1]^2 to [0,1]. Define the heterogeneous Erdös-Rényi random graph 𝒢(n,p_n, f) as ℙ(A_ij=1)=p_n f(i/n,j/n), where p_n∈[0,1] may depend on n and A_ij (1≤ i<j≤ n) are independent. If f≡ c for some constant c, then 𝒢(n,p_n, f) is simply the Erdös-Rényi random graph with edge appearance probability cp_n. For non-constant f, the random graph 𝒢(n,p_n, f) is a heterogeneous version of the Erdös-Rényi graph. The spectral properties of this random graph have been extensively studied in <cit.>. We point out that our proof strategy works for other graph models such as the β-model in <cit.> and the degree-corrected model in <cit.> with mild modifications. §.§.§ Asymptotic Rényi index when α≠ 1 In this subsection, we study the asymptotic Rényi index of 𝒢(n,p_n, f) with α≠ 1. For convenience, denote f_ij=f(i/n,j/n), f_i=1/n∑_j≠ i^nf(i/n,j/n), λ_k,l=∑_i≠ jf_i^kf_ij^l/n^2. Note that f_ij, f_i and λ_k,l depend on n. We will focus on f(x,y)≥ϵ for a constant ϵ∈(0,1) as assumed in <cit.>. Later we will provide examples of such functions. Let α≠ 1 be a fixed positive constant, np_n→∞ and f(x,y)≥ϵ for some constant ϵ∈(0,1). Then the Rényi index ℛ_α of 𝒢(n,p_n,f) has the following expression ℛ_α=1-[λ_α,0/(λ_0,1+O_P(1/n√(p_n)))^α+O_P(1/np_n)]^1/1-α, and the error rates 1/np_n and 1/n√(p_n) cannot be improved. Asymptotically, ℛ_α has the following concise expression: ℛ_α=1-(λ_α,0/λ_0,1^α)^1/1-α+o_P(1). Theorem <ref> provides an asymptotic expression of ℛ_α as an explicit function of α and the model parameter f, along with the error rates. It is interesting that ℛ_α mainly depends on f and α through the ratio λ_α,0/λ_0,1^α. The quantities λ_α,0 and λ_0,1^α may or may not converge to some limits as n goes to infinity. Later we will present two examples where λ_α,0 and λ_0,1^α converge. We point out that even though empirical degree distributions are widely studied in literature, it is not immediately clear how to obtain the asymptotic expression of the Rényi index as in Theorem <ref> from the empirical degree distributions. Specifically, suppose Y_n is a random variable with distribution F_emp(x)=1/n∑_i=1^nI[d_i≤ x]. The term 1/n∑_i=1^nd_i^α in the Rényi index is equal to 𝔼(Y_n^α|d_1,…,d_n). Suppose F_emp(x) converges almost surely or in probability to some distribution function F(x) and let Y follow the distribution F(x). The convergence of F_emp(x) to F(x) does not necessarily imply the convergence of 𝔼(Y_n^α|d_1,…,d_n) to 𝔼(Y^α) for arbitrary α>0. Generally speaking, uniform integrability conditions are required to guarantee the convergence of 𝔼(Y_n^α|d_1,…,d_n) to 𝔼(Y^α). Note that 𝔼(Y_n^α|d_1,…,d_n) is random. It is not immediately clear what kind of uniform integrability conditions are needed. Moreover, even if we can conclude that 𝔼(Y_n^α|d_1,…,d_n) converges to 𝔼(Y^α) by assuming some uniform integrability conditions, it does not provide the error rates (that cannot be improved) as in Theorem <ref>. Next we provide two examples of random graphs satisfying the conditions of Theorem <ref> and calculate the ratio explicitly.
The first example is the Erdös-Rényi random graph, that is, f(x,y)≡ 1. Since each node of the Erdös-Rényi random graph has the same average degree, the Erdös-Rényi graph is homogeneous. It is clear that λ_α,0/λ_0,1^α=1+o(1), hence ℛ_α=o_P(1). This shows the Rényi index of a homogeneous network is actually close to zero. Now we provide a family of non-constant f(x,y) that is bounded away from zero. This model can attain any heterogeneity level, that is, the limit of λ_α,0/λ_0,1^α can take any value in (0,1). Let f(x,y)=e^-κ xe^-κ y with a non-negative constant κ. Then e^-2κ≤ f(x,y)≤ 1 for 0≤ x,y≤ 1. Intuitively, smaller κ would produce less heterogeneous models. In the extreme case κ=0, the random graph is simply the Erdös-Rényi random graph. Given a function f, denote the expected degree of node i as μ_i:=p_n∑_j≠ if_ij. Then for f(x,y)=e^-κ xe^-κ y, μ_i is equal to np_ne^-κi/n(1-e^-κ+o(1)). Note that μ_1/μ_n=e^κ(1-1/n). Large κ will enlarge the difference between the degrees of node 1 and node n. Hence, the random graph with larger κ should be more heterogeneous. Simple calculations yield λ_α,0=(1/κ-1/κ e^κ)^α(1/κα-1/κα e^κα)+o(1), λ_0,1=(1/κ-1/κ e^κ)^2+o(1). Plugging them into (<ref>) yields ℛ_α=1-((e^κα-1)κ^α-1/α(e^κ-1)^α)^1/1-α+o_P(1), α>0, α≠ 1. Note that lim_κ→∞((e^κα-1)κ^α-1/α(e^κ-1)^α)^1/1-α=0 and lim_κ→0^+((e^κα-1)κ^α-1/α(e^κ-1)^α)^1/1-α=1 for any α>0 and α≠ 1. Asymptotically, ℛ_α with large κ would be close to 1 and ℛ_α with small κ would be close to 0. This justifies that the Rényi index of a heterogeneous graph is actually non-zero. In addition, the limit of ℛ_α can assume any value in (0,1) by changing κ. In this sense, this random graph can achieve any heterogeneity level with suitably selected κ. §.§.§ Asymptotic Rényi index when α=1 In this subsection, we study the asymptotic Rényi index of 𝒢(n,p_n, f) with α=1. For convenience, denote f_ij=f(i/n,j/n), μ_i:=p_n∑_j≠ if_ij, l_i=log(μ_i/np_nλ_0,1), s_k=∑_i<j(2+l_i+l_j)^kf_ij(1-p_nf_ij), Let 𝒢(n,p_n,f) be the random graph with np_n→∞ and f(x,y)≥ϵ for some constant ϵ∈(0,1). If s_2≍ n^2, then the Rényi index has the asymptotic expression ℛ_1=1-e^-r_n+O_P(1/n√(p_n)), r_n=1/n∑_i=1^nμ_i/np_nλ_0,1log(μ_i/np_nλ_0,1), where the error rate 1/n√(p_n) cannot be improved. Based on Theorem <ref>, ℛ_1 mainly depends on r_n. For the Erdös-Rényi random graph, that is, f(x,y)≡1, it is obvious that λ_0,1=1, μ_i=(n-1)p_n and hence ℛ_1=o_P(1). For f(x,y)=e^-κ xe^-κ y with a positive constant κ, μ_i=np_ne^-κ i/nλ_1,0(1+o(1))≍ np_n, then s_2≍ n^2. The assumptions of Theorem <ref> are satisfied. Straightforward calculation yields r_n=g(κ)+o(1), where g(κ)=-1+κ/e^κ-1-log(e^κ-1/κ e^κ). Note that lim_κ→0^+g(κ)=0, lim_κ→∞g(κ)=∞. Hence larger κ produces a more heterogeneous random graph. This is consistent with the case α≠1. The assumption that f(x,y)≥ϵ for a constant ϵ∈(0,1) in Theorem <ref> and Theorem <ref> can be relaxed and replaced by less restrictive assumptions. However, the alternative assumptions are difficult to state and interpret and would lead to more complex proofs. For simplicity, we do not pursue this relaxation. In addition, Theorem <ref> and Theorem <ref> hold for sparse networks, since they allow p_n=o(1) as long as np_n→∞.
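As a numerical sanity check on the closed-form limit above for f(x,y)=e^-κ xe^-κ y, one can simulate 𝒢(n,p_n,f) and compare the empirical Rényi index with the limit. The numpy sketch below does this for α=2; the values of n, p_n and κ are arbitrary choices for illustration.

import numpy as np

def renyi_2(degrees):
    x = degrees / degrees.mean()
    return 1.0 - 1.0 / np.mean(x ** 2)          # Rényi index for alpha = 2

rng = np.random.default_rng(1)
n, p_n, kappa, alpha = 2000, 0.1, 5.0, 2

# edge probabilities p_n * f(i/n, j/n) with f(x, y) = exp(-kappa x) exp(-kappa y)
grid = np.exp(-kappa * np.arange(1, n + 1) / n)
prob = p_n * np.outer(grid, grid)
upper = np.triu(rng.random((n, n)) < prob, 1).astype(int)
A = upper + upper.T

empirical = renyi_2(A.sum(axis=1).astype(float))
limit = 1.0 - ((np.exp(kappa * alpha) - 1) * kappa ** (alpha - 1)
               / (alpha * (np.exp(kappa) - 1) ** alpha)) ** (1.0 / (1.0 - alpha))
print(empirical, limit)

For α=2 the displayed limit reduces to 1 - 2(e^κ-1)^2/(κ(e^2κ-1)), so the two printed numbers should agree up to a finite-sample error that shrinks as np_n grows.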
This motivates us to study whether the Rényi index of power-law random graph is actually close to one. Given a positive constant τ, let W be a random variable following a distribution with power-law tail as ℙ(W>x)=x^-τ, x≥ 1. This distribution has heavy tail and the k-th moment of W exists if and only if k<τ. The distribution (<ref>) is widely used to define power-law random graphs (<cit.>). Given independent and identically distributed random variables ω_1,…, ω_n from distribution (<ref>), define a power-law random graph 𝒢(n,τ) as ℙ(A_ij=1|W)=pω̃_iω̃_j/n, where W=(ω_1,…, ω_n), ω̃_i=min{ω_i,√(n)}, p∈(0,1) is a constant and A_ij (1≤ i<j≤ n) are independent conditional on W. The random graph 𝒢(n,τ) was first defined in <cit.> and the order of large cliques was studied there. The cutoff √(n) in ω̃_i guarantees the edge appearance probability is less than 1. This cutoff is common in algorithm analysis and random graph theory (<cit.>). We focus on the interesting regime τ∈(1,2) as in literature (<cit.>). Note that the edges A_ij (1≤ i<j≤ n) are not independent and higher moments of ω̃_i are not bounded. It is more challenging to derive the limit of the Rényi index ℛ_α of 𝒢(n,τ) for arbitrary α>0. In this paper, we only study ℛ_2. Let 𝒢(n,τ) be the power-law random graph with τ∈(1,2). Then ℛ_2=1-O_P(1/n^1-τ/2), where the rate 1/n^1-τ/2 cannot be improved. According to Theorem <ref>, the Rényi index ℛ_2 of 𝒢(n,τ) converges to one in probability at rate n^τ/2-1. This indicates 𝒢(n,τ) is extremely heterogeneous, consistent with empirical observations (<cit.>). Note that nodes of 𝒢(n,τ) have the same expected degree p(𝔼[ω_1])^2. In this sense, it seems 𝒢(n,τ) is homogeneous as the Erdös-Rényi random graph. However, the correlation between A_ij and A_ik (1≤ i≤ n,j≠ k) and the power-law tail property of W jointly make the degrees extremely different so that ℛ_2=1+o_P(1). Theorem <ref> provides an alternative justification that power-law random graph can be used as a generative model of extremely heterogeneous networks. To conclude this section, we comment that Theorem <ref>, Theorem <ref> and Theorem <ref> jointly provide a theoretical justification that the Rényi index is a reasonable measure of heterogeneity of networks. For homogeneous network, the Rényi index is asymptotically zero. For extremely heterogeneous network, the Rényi index is asymptotically one. For moderately heterogeneous network, the Rényi index resides between zero and one. § SIMULATION In this section, we conduct simulation study to evaluate finite-sample performance of the Rényi index. In this simulation, 20 graphs were generated from each random graph model described in Section <ref>, and the Rényi index of each graph was calculated with α∈{0.5,1,2,2.5,3,10}. Then the mean and standard deviation (sd) of the Rényi indexes were computed, as well as the limit specified in Theorem <ref> or Theorem <ref>. Firstly, we consider the heterogeneous Erdös-Rényi random graph 𝒢(n,p_n, f) with f(x,y)=e^-κ xe^-κ y for a positive constant κ. The limit of the Rényi index has a closed form given in (<ref>) for α≠1 and (<ref>) for α=1. With a little abuse of notation, we denote the limit as ℛ_α. The model parameters we used to generate graphs, ℛ_α, and the mean and standard deviation of the Rényi indexes are listed in Table <ref>, Table <ref>, Table <ref>. As n increases, the mean gets closer to the limit ℛ_α, and p_n highly affects the convergence speed. These findings coincide with the results in Theorem <ref> and Theorem <ref>. 
For homogeneous model (κ=0.1), the mean and limit ℛ_α almost vanish, while for heterogeneous model (κ=25) both are pretty large (greater than 0.8). This confirms that the Rényi index can effectively measure heterogeneity of networks. In addition, the Rényi indexes increase as α increases. Now we consider the power-law random graph in Section <ref>. The means and standard deviations (in parentheses) are summarized in Table <ref>. We point out that although the rate 1/n^1-τ/2 in Theorem <ref> only depends on τ, the term O_P(1/n^1-τ/2) does involve constant p in a complex way (see proof of Theorem <ref>). As a result, the values of p,τ may significantly affect how close is the mean to the limit ℛ_2=1 in finite-sample case. Table <ref> shows all the means of the Rényi indexes are larger than 0.6, indicating the power-law random graph is indeed heterogeneous. When n=10,000, most of the means are close to or larger than 0.90. § PROOF OF MAIN RESULTS In this section, we provide detailed proof of the main results. Note that ℛ_α is a non-linear function of degrees d_i as given in (<ref>). The degrees d_i are not independent and may not be identically distributed. It is not feasible to directly apply the classical Law of large number or Central limit theorem to get the limit of ℛ_α. To overcome this issue, our strategy is to adopt the Taylor expansion to express the non-linear function of d_i as a sum of polynomials of d_i plus a remainder term. Then we carefully bound the remainder term and identify the limit and exact order of the polynomial terms. For convenience, let γ_k,l=∑_i≠ jf_i^kf_j^kf_ij^l/n^2. Proof of Theorem <ref>: The main challenge is that the degrees d_i (1≤ i≤ n) are not independent and may not be identically distributed. The classical tools such as the law of large number and the central limit theorem can not be directly applied to derive the limit of ℛ_α. Our proof strategy is: (a) use the Taylor expansion to expand ∑_i=1^n(d_i/n)^α at ∑_i=1^n(μ_i/n)^α as a sum of polynomials in d_i and a reminder term; (b) find the exact order of the polynomial terms; (c) show the reminder term is bounded by the polynomial terms. The key step is (c). We will use a truncation technique to control the reminder term. To fix the idea, we consider the case α∈(0,3]\{1} first. Let μ_i=𝔼(d_i). By Taylor expansion, we have (d_i/n)^α-(μ_i/n)^α = α(μ_i/n)^α-1(d_i-μ_i/n)+α(α-1)/2!(μ_i/n)^α-2(d_i-μ_i/n)^2 +α(α-1)(α-2)/3!X_n,i^α-3(d_i-μ_i/n)^3. where X_n,i is a random variable between d_i/n and μ_i/n. Summing both sides of (<ref>) over i∈[n] yields ∑_i=1^n(d_i/n)^α-∑_i=1^n(μ_i/n)^α = α∑_i=1^n(μ_i/n)^α-1(d_i-μ_i/n)+α(α-1)/2!∑_i=1^n(μ_i/n)^α-2(d_i-μ_i/n)^2 +α(α-1)(α-2)/3!∑_i=1^nX_n,i^α-3(d_i-μ_i/n)^3. Next, we shall find the order of each term in the right-hand side of (<ref>). We begin with the first term. For given i∈[n], simple algebra yields μ_i=∑_j≠ i𝔼(A_ij) = ∑_j≠ ip_nf(i/n,j/n)=np_nf_i, ∑_i=1^n(μ_i/n)^α = p_n^α∑_i=1^nf_i^α. Then ∑_i=1^n(μ_i/n)^α-1(d_i-μ_i/n) = p_n^α-1∑_i=1^nf_i^α-1∑_j≠ i(A_ij-f_ijp_n)/n = p_n^α-1∑_i<j(f_i^α-1+f_j^α-1)(A_ij-f_ijp_n)/n. 
Since A_ij, (1≤ i<j≤ n) are independent and 𝔼[A_ij-f_ijp_n]=0, then by (<ref>) one has 𝔼[∑_i=1^n(μ_i/n)^α-1(d_i-μ_i/n)]^2 = p_n^2α-2𝔼[∑_i<j(f_i^α-1+f_j^α-1)(A_ij-f_ijp_n)/n]^2 = p_n^2α-2∑_i<j𝔼(f_i^α-1+f_j^α-1)^2(A_ij-f_ijp_n)^2/n^2 = p_n^2α-2(∑_i≠ j(f_i^α-1+f_j^α-1)^2f_ijp_n/2n^2-∑_i≠ j(f_i^α-1+f_j^α-1)^2f_ij^2p_n^2/2n^2) = p_n^2α-1∑_i≠ jf_i^2α-2f_ij+∑_i≠ jf_i^α-1f_j^α-1f_ij/n^2 -p_n^2α-1p_n∑_i≠ jf_i^2α-2f_ij^2+p_n∑_i≠ jf_i^α-1f_j^α-1f_ij^2/n^2 = p_n^2α-1[(λ_2α-2,1+γ_α-1,1)-p_n(λ_2α-2,2+γ_α-1,2)]. Since f(x; y) ≥ϵ > 0, then λ_2α-2,1≍ 1, γ_α-1,1≍ 1, λ_2α-2,2≍ 1, γ_α-1,2≍ 1. Hence the first term in the right-hand side of (<ref>) is bounded by order p_n^α-1√(p_n). By Lemma <ref>, this is the exact order. Secondly, we find the order of the second term in the right-hand side of (<ref>). Note that ∑_i=1^n(μ_i/n)^α-2(d_i-μ_i/n)^2 = 1/n^2∑_i=1^np_n^α-2f_i^α-2∑_j,k≠ i(A_ij-p_nf_ij)(A_ik-p_nf_ik) = p_n^α-21/n^2∑_i≠ j(A_ij-p_nf_ij)^2f_i^α-2+p_n^α-21/n^2∑_i≠ j≠ k(A_ij-p_nf_ij)(A_ik-p_nf_ik)f_i^α-2 = S_1+S_2. We claim S_2=o_P(S_1). To this end, we compute the second moment of S_2 and the first moment of S_1. Straightforward calculations yield 𝔼[S_1] = p_n^α-21/n^2∑_i≠ j𝔼(A_ij-p_nf_ij)^2f_i^α-2 = p_n^α-2(1/n^2∑_i≠ jp_nf_ijf_i^α-2-1/n^2∑_i≠ jp_n^2f_ij^2f_i^α-2) = p_n^α-1(λ_α-2,1-p_nλ_α-2,2). Note that λ_α-2,1≍ 1, λ_α-2,2≍ 1, due to the assumption f(x; y) ≥ϵ > 0. Then S_1 is bounded by order p_n^α-1. By Lemma <ref>, this is the exact order. Since 0≤ f_i≤ 1 and 0≤ f_ij≤ 1, then 𝔼[S_2^2] ≤ p_n^2(α-2)/n^4∑_i≠ j≠ k r≠ s≠ t𝔼(A_ij-p_nf_ij)(A_ik-p_nf_ik)(A_rs-p_nf_rs)(A_rt-p_nf_rt) = p_n^2(α-2)/n^4O(∑_i≠ j≠ k𝔼(A_ij-p_nf_ij)^2(A_ik-p_nf_ik)^2) = p_n^2(α-2)/n^4O(∑_i≠ j≠ kp_n^2f_ijf_ik) = O(p_n^2α-2/n). Then S_2=O_P(p_n^α-1/√(n)). Hence the exact order of the second term in the right-hand side of (<ref>) is p_n^α-1 and S_1 is the leading term. Next, we show the third term of (<ref>) is bounded by p_n^α-1O(1/√(np_n)). If α=2, then the third term in (<ref>) vanishes. The desired result holds trivially. We only need to focus on the cases α≠ 1, 2. Note that X_n,i≥0. Then 𝔼[|∑_i=1^nX_n,i^α-3(d_i-μ_i/n)^3|]≤∑_i=1^n𝔼[X_n,i^α-3|d_i-μ_i/n|^3]. We shall show the right-hand side of (<ref>) is bounded by p_n^α-1O(1/√(np_n)). Consider first the case α=3. In this case, the expansion in Equation (<ref>) holds with X_n;i = 1, so that the analysis of Equation (<ref>) is simpler. Since X_n,i=1 for α=3. By the Cauchy-Schwarz inequality, we have ∑_i=1^n𝔼[|d_i-μ_i/n|^3] ≤ ∑_i=1^n√(𝔼[(d_i-μ_i/n)^6]) = ∑_i=1^n√(∑_j_1,j_2,…,j_6≠ i𝔼(A_ij_1-p_nf_ij_1)(A_ij_2-p_nf_ij_2)…(A_ij_6-p_nf_ij_6)/n^6) = ∑_i=1^n√(15∑_j_1≠ j_2≠ j_3≠ i𝔼(A_ij_1-p_nf_ij_1)^2(A_ij_2-p_nf_ij_2)^2(A_ij_3-p_nf_ij_3)^2/n^6) +∑_i=1^n√(15∑_j_1≠ j_2≠ i𝔼(A_ij_1-p_nf_ij_1)^4(A_ij_2-p_nf_ij_2)^2/n^6) +∑_i=1^n√(20∑_j_1≠ j_2≠ i𝔼(A_ij_1-p_nf_ij_1)^3(A_ij_2-p_nf_ij_2)^3/n^6) +∑_i=1^n√(∑_j_1≠ i𝔼(A_ij_1-p_nf_ij_1)^6/n^6) = O(n√(n^3p_n^3)+√(n^2p_n^2)+√(np_n)/√(n^6)) = p_n^2O(1/√(np_n)+1/np_n+1/np_n√(np_n))=p_n^2O(1/√(np_n)). Hence, for α=3, it follows that 𝔼[|∑_i=1^nX_n,i^α-3(d_i-μ_i/n)^3|]=p_n^α-1O(1/√(np_n)). Next we assume α∈(0,3) and α≠ 1,2. Let δ∈(0,1) be an arbitrary small constant. Note that 𝔼[|∑_i=1^nX_n,i^α-3(d_i-μ_i/n)^3|]=𝔼[|∑_i=1^nX_n,i^α-3(d_i-μ_i/n)^3(I[X_n,i≥δμ_i/n]+I[X_n,i<δμ_i/n])|] ≤ 𝔼[∑_i=1^nX_n,i^α-3|d_i-μ_i/n|^3I[X_n,i≥δμ_i/n]]+𝔼[∑_i=1^nX_n,i^α-3|d_i-μ_i/n|^3I[X_n,i<δμ_i/n]] Note that, when α<3, then α-3<0. If X_n,i≥δμ_i/n, then X_n,i^α-3≤(δμ_i/n)^α-3≤δ^α-3p_n^α-3f_i^α-3. So, it is possible to use the same approach as for the case α=3. 
By (<ref>) and a similar calculation as in (<ref>) , we get 𝔼[|∑_i=1^nX_n,i^α-3(d_i-μ_i/n)^3I[X_n,i≥δμ_i/n]|]≤δ^α-3p_n^α-3𝔼[∑_i=1^n|d_i-μ_i/n|^3f_i^α-3] =p_n^α-1O(1/√(np_n)). The difficult case is X_n,i< δμ_i/n. Suppose X_n,i< δμ_i/n. Recall that d_i/n≤ X_n,i≤μ_i/n or μ_i/n≤ X_n,i≤d_i/n. Then X_n,i< δμ_i/n implies d_i/n≤ X_n,i≤δμ_i/n. In this case, d_i/μ_i≤δ. Dividing both sides of (<ref>) by (μ_i/n)^α yields (d_i/μ_i)^α-1=α(d_i/μ_i-1)+α(α-1)/2(d_i/μ_i-1)^2+α(α-1)(α-2)/6X_n,1^α-3/(μ_i/n)^α-3(d_i/μ_i-1)^3, from which it follows that α(α-1)(α-2)/6X_n,i^α-3/(μ_i/n)^α-3(d_i/μ_i-1)^3 = -(α-1)(α-2)/2+(d_i/μ_i)^α+α(α-2)d_i/μ_i-α(α-1)/2(d_i/μ_i)^2. Note that d_i/μ_i≥ 0. For a fixed α, there exists a sufficiently small constant δ>0 such that if d_i/μ_i≤δ then the right-hand side of (<ref>) is bounded away from zero and infinity. This implies that X_n,i^α-3≤ C(μ_i/n)^α-3 for some constant C>0 and C is independent of i∈[n]. Then similar to (<ref>), we have 𝔼[|∑_i=1^nX_n,i^α-3(d_i-μ_i/n)^3I[X_n,i<δμ_i/n]|] =p_n^α-1O(1/√(np_n)). According to (<ref>), (<ref>) and (<ref>), (<ref>) holds for α∈(0,3) and α≠ 1,2. By (<ref>), (<ref>) and (<ref>), it follows that the third term of (<ref>) is bounded by p_n^α-1O_P(1/√(np_n)). Then the first two terms are the leading terms. By (<ref>), we get ∑_i=1^n(d_i/n)^α-p_n^α∑_i=1^nf_i^α = α∑_i=1^n(p_nf_i)^α-1(d_i-μ_i/n)+α(α-1)/2!p_n^α-21/n^2∑_i≠ j(A_ij-p_nf_ij)^2f_i^α-2 +O_P(p_n^α-1/√(np_n)+p_n^α-1/√(n)) = O_P(p_n^α-1√(p_n))+O_P(p_n^α-1)+O_P(p_n^α-1/√(np_n)+p_n^α-1/√(n)). By Lemma <ref>, the rates O_P(p_n^α-1√(p_n)) and O_P(p_n^α-1) in (<ref>) are optimal. Besides, d/n=p_nλ_0,1+O_P(√(p_n)/n) and the rate √(p_n)/n cannot be improved according to Lemma <ref>. Then (<ref>) follows for α∈(0,3]\{1}. Now assume α∈(k-1,k] for any fixed integer k≥4. By Taylor expansion, we have ∑_i=1^n(d_i/n)^α-∑_i=1^n(μ_i/n)^α = α∑_i=1^n(μ_i/n)^α-1(d_i-μ_i/n)+α(α-1)/2!∑_i=1^n(μ_i/n)^α-2(d_i-μ_i/n)^2 +…+α(α-1)…(α-k+1)/k!∑_i=1^nX_n,i^α-k(d_i-μ_i/n)^k, where X_n,i is between d_i/n and μ_i/n. To complete the proof, it suffices to show the first two terms of (<ref>) are the leading terms. More specifically, we show only the first two terms “matter" among the first k- 1 terms. Then we show the remainder term is negligible using a truncation argument, analogous to the one used for the case α<3. For integer t with 3≤ t≤ k, we have 𝔼[|∑_i=1^n(μ_i/n)^α-t(d_i-μ_i/n)^t|] ≤ ∑_i=1^n(μ_i/n)^α-t𝔼[|d_i-μ_i/n|^t] ≤ ∑_i=1^n(μ_i/n)^α-t√(𝔼[(d_i-μ_i/n)^2t]) = O(p_n^α-t/n^t-1√(𝔼[(d_i-μ_i)^2t])). Note that 𝔼[(d_i-μ_i)^2t]=𝔼[∑_j_1,j_2,…,j_2t(A_ij_1-p_nf_ij_1)… (A_ij_2t-p_nf_ij_2t)]. Since A_ij and A_il are independent if j≠ l, then 𝔼[(A_ij-p_nf_ij)(A_il-p_nf_il)]=0 if j≠ l. If there exists an index j_s such that j_s is not equal to j_r for any r=1,2,…,s-1,s+1,…, 2t, then 𝔼[∑_j_1,j_2,…,j_2t(A_ij_1-p_nf_ij_1)… (A_ij_2t-p_nf_ij_2t)]=0. Hence, each index j_s must equal another index j_r with r≠ s. Then 𝔼[(d_i-μ_i)^2t]=∑_s=1^t∑_j_1,j_2,…,j_s:distinct𝔼[(A_ij_1-p_nf_ij_1)^λ_1… (A_ij_s-p_nf_ij_s)^λ_s], where λ_r≥2 are integers and λ_1+λ_2+…+λ_s=2t. It is easy to verify that for λ_r≥2 (r=1,2,…,s), 𝔼[(A_ij_r-p_nf_ij_r)^λ_r]=(1-p_nf_ij_r)^λ_rp_nf_ij_r+(-p_nf_ij_r)^λ_r(1-p_nf_ij_r)=O(p_n). Then 𝔼[(d_i-μ_i)^2t]=O(∑_s=1^tn^sp_n^s)=O(n^tp_n^t). By (<ref>), one has 𝔼[|∑_i=1^n(μ_i/n)^α-t(d_i-μ_i/n)^t|] = O(p_n^α-1/(np_n)^t/2-1), 1cm 3≤ t≤ k. If X_n,i≤δμ_i/n for a small constant δ>0, by a similar argument as in (<ref>), one can get X_n,i^α-k≤ M(μ_i/n)^α-k for a large constant M>0. By (<ref>), (<ref>) holds. Then the proof is complete. 
Under the assumption of Theorem <ref>, the following results are true. Then ∑_i=1^n(p_nf_i)^α-1(d_i-μ_i/n)/p_n^α-1√(p_n(λ_2α-2,1+γ_α-1,1)-p_n^2(λ_2α-2,2+γ_α-1,2)) ⇒ 𝒩(0,1), 1/n^2∑_i≠ j(A_ij-p_nf_ij)^2f_i^α-2 = p_n(λ_α-2,1-p_nλ_α-2,2)+o_P(1), and ∑_i<jA_ij-p_nf_ij/n√(p_nλ_0,1+p_n^2λ_0,2)⇒𝒩(0,1). Proof of Lemma <ref>: (I). By (<ref>) and (<ref>), we get ∑_i=1^n(p_nf_i)^α-1(d_i-μ_i/n)/p_n^α-1√(p_n(λ_2α-2,1+γ_α-1,1)-p_n^2(λ_2α-2,2+γ_α-1,2))=∑_i<j(f_i^α-1+f_j^α-1)(A_ij-f_ijp_n)/n√(p_n(λ_2α-2,1+γ_α-1,1)-p_n^2(λ_2α-2,2+γ_α-1,2)). Note that A_ij, (1≤ i<j≤ n) are independent and 0<ϵ≤ f(x,y)≤ 1. Then λ_2α-2,2≍ 1, λ_2α-2,1≍ 1, γ_α-1,1≍ 1, γ_α-1,2≍ 1 and ∑_i<j𝔼[(f_i^α-1+f_j^α-1)(A_ij-f_ijp_n)/n√(p_n(λ_2α-2,1+γ_α-1,1)-p_n^2(λ_2α-2,2+γ_α-1,2))]^4 = O(∑_i<j(f_i^α-1+f_j^α-1)^4f_ij/n^4p_n)=o(1). By the Lyapunov central limit theorem, (<ref>) holds. (II). Note that 𝔼[1/n^2∑_i<j(f_i^α-2+f_j^α-2)[(A_ij-p_nf_ij)^2-p_nf_ij(1-p_nf_ij)]]^2 = 1/n^4∑_i<j(f_i^α-2+f_j^α-2)^2𝔼[(A_ij-p_nf_ij)^2-p_nf_ij(1-p_nf_ij)]^2 = O(1/n^4∑_i<j(f_i^α-2+f_j^α-2)^2p_nf_ij) = O(p_n/n^4∑_i≠ jf_ijf_i^2(α-2)+p_n/n^4∑_i≠ jf_ijf_i^α-2f_j^α-2)=o(1). Hence (<ref>) holds. (III). Note that 𝔼[∑_i<j(A_ij-p_nf_ij)/n^2]^2=∑_i<j𝔼(A_ij-p_nf_ij)^2/n^4=∑_i<j(p_nf_ij-p_n^2f_ij^2)/n^4=p_nλ_0,1+p_n^2λ_0,2/n^2. Since f(x,y)≥ϵ>0, then λ_0,1≍ 1, λ_0,2≍ 1 and ∑_i<j𝔼(A_ij-p_nf_ij)^4/(n√(p_nλ_0,1+p_n^2λ_0,2))^4=O(∑_i<jf_ij/n^4p_n)=o(1). By the Lyapunov central limit theorem, (<ref>) holds. Proof of Theorem <ref>: The proof strategy is the same as the proof of Theorem <ref>. Note that d/n=p_nλ_0,1+O_P(√(p_n)/n) by Lemma <ref> and d=1/n∑_i=1^nd_i. Then we have 1/n∑_i=1^nd_i/dlogd_i/d = 1/n∑_i=1^nd_i/dlog(np_nλ_0,1/dd_i/np_nλ_0,1) = 1/n∑_i=1^nd_i/dlog(np_nλ_0,1/d)+1/n∑_i=1^nd_i/dlog(d_i/np_nλ_0,1) = log(np_nλ_0,1/d)+np_nλ_0,1/d1/n∑_i=1^nd_i/np_nλ_0,1logd_i/np_nλ_0,1 = O_P(1/n√(p_n))+np_nλ_0,1/d1/n∑_i=1^nd_i/np_nλ_0,1logd_i/np_nλ_0,1. It suffices to get the limit of ∑_i=1^nd_i/np_nλ_0,1logd_i/np_nλ_0,1. Recall that μ_i=𝔼(d_i)=∑_j≠ ip_nf_ij. By the Taylor expansion, we have d_i/np_nλ_0,1log(d_i/np_nλ_0,1) = d_i/np_nλ_0,1log(μ_i/np_nλ_0,1)+d_i/μ_i(d_i-μ_i/np_nλ_0,1) -np_nλ_0,1d_i/2μ_i^2(d_i-μ_i/np_nλ_0,1)^2+1/3X_n,i^3d_i/np_nλ_0,1(d_i-μ_i/np_nλ_0,1)^3, where d_i/np_nλ_0,1≤ X_n,i≤μ_i/np_nλ_0,1 or μ_i/np_nλ_0,1≤ X_n,i≤d_i/np_nλ_0,1. Summing both sides of (<ref>) over i∈[n] yields ∑_i=1^nd_i/np_nλ_0,1log(d_i/np_nλ_0,1) = ∑_i=1^nd_i/np_nλ_0,1log(μ_i/np_nλ_0,1)+∑_i=1^nd_i/μ_i(d_i-μ_i/np_nλ_0,1) -∑_i=1^nnp_nλ_0,1d_i/2μ_i^2(d_i-μ_i/np_nλ_0,1)^2+∑_i=1^n1/3X_n,i^3d_i/np_nλ_0,1(d_i-μ_i/np_nλ_0,1)^3. Next we isolate the leading terms in the right-hand side of (<ref>). More specifically, we show the first term is the leading term, and the second term, the third terms and the remainder term are of smaller order. For the remainder term, a truncation technique as in the proof of Theorem <ref> will be used. Firstly, we consider the second of (<ref>). Note that ∑_i=1^nd_i/μ_i(d_i-μ_i/np_nλ_0,1) =∑_i=1^n(d_i-μ_i)^2/μ_inp_nλ_0,1+∑_i=1^nd_i-μ_i/np_nλ_0,1. We find the order of each term in the right-hand side of (<ref>). Recall that A_ij,(1≤ i<j≤ n) are independent. Then straightforward calculations yield 𝔼[∑_i=1^n(d_i-μ_i)^2/μ_inp_nλ_0,1] = ∑_i=1^n∑_j≠ i𝔼(A_ij-p_nf_ij)^2/μ_inp_nλ_0,1 = ∑_i=1^n∑_j≠ ip_nf_ij(1-p_nf_ij)/μ_inp_nλ_0,1 = ∑_i=1^n∑_j≠ ip_nf_ij-∑_j≠ ip_n^2f_ij^2/μ_inp_nλ_0,1 = 1/p_nλ_0,1(1-p_n1/n∑_i=1^n∑_j≠ if_ij^2/∑_j≠ if_ij), and 𝔼[∑_i=1^nd_i-μ_i/np_nλ_0,1]^2=∑_i≠ j^n𝔼(A_ij-p_nf_ij)^2/n^2p_n^2λ_0,1^2=∑_i≠ j^np_nf_ij-∑_i≠ j^np_n^2f_ij^2/n^2p_n^2λ_0,1^2=1/p_nλ_0,1-λ_0,2/λ_0,1^2. 
Note that 1/n∑_i=1^n∑_j≠ if_ij^2/∑_j≠ if_ij≤ 1. Then (<ref>) has order O_P(1/p_n). Next, we get the order of the third term in the right-hand side of (<ref>). Simple algebra yields ∑_i=1^nnp_nλ_0,1d_i/μ_i^2(d_i-μ_i/np_nλ_0,1)^2=1/np_nλ_0,1∑_i=1^n(d_i-μ_i)^3/μ_i^2+1/np_nλ_0,1∑_i=1^n(d_i-μ_i)^2/μ_i. Now we get an upper bound of the two terms in (<ref>). By the Cauchy-Schwarz inequality, one gets 1/np_nλ_0,1𝔼[|∑_i=1^n(d_i-μ_i)^3/μ_i^2|] ≤ 1/np_nλ_0,1∑_i=1^n𝔼[|d_i-μ_i|^3/μ_i^2] ≤ 1/np_nλ_0,1∑_i=1^n1/μ_i^2√(𝔼(d_i-μ_i)^6) ≤ 1/np_nλ_0,1∑_i=1^n1/μ_i^2√(∑_j_1≠ j_2≠ j_3≠ i𝔼(A_ij_1-p_nf_ij_1)^2(A_ij_2-p_nf_ij_2)^2(A_ij_3-p_nf_ij_3)^2) +1/np_nλ_0,1∑_i=1^n1/μ_i^2√(∑_j_1≠ j_2≠ i𝔼(A_ij_1-p_nf_ij_1)^3(A_ij_2-p_nf_ij_2)^3) +1/np_nλ_0,1∑_i=1^n1/μ_i^2√(∑_j_1≠ j_2≠ i𝔼(A_ij_1-p_nf_ij_1)^4(A_ij_2-p_nf_ij_2)^2) +1/np_nλ_0,1∑_i=1^n1/μ_i^2√(∑_j_1≠ i𝔼(A_ij_1-p_nf_ij_1)^6) = 1/np_nλ_0,1O(∑_i=1^n1/√(μ_i)+∑_i=1^n1/μ_i+∑_i=1^n1/√(μ_i^3)) = 1/p_nO(1/√(np_n)). Then the first term of (<ref>) is bounded by 1/p_nO_P(1/√(np_n)). By (<ref>), the second term is bounded by O_P(1/p_n). Next, we consider the last term in the right-hand side of (<ref>). Let δ∈(0,1) be an arbitrary small constant. We shall find an upper bound of the last term of (<ref>) in two cases: X_n,i≥δμ_i/np_nλ_0,1 and X_n,i< δμ_i/np_nλ_0,1. If X_n,i≥δμ_i/np_nλ_0,1, then 1/3X_n,i^3d_i/np_nλ_0,1|d_i-μ_i/np_nλ_0,1|^3≤1/3δ^3d_i/np_nλ_0,1|d_i-μ_i/μ_i|^3. Suppose X_n,i< δμ_i/np_nλ_0,1. If X_n,i<d_i/np_nλ_0,1, then X_n,i cannot be between d_i/np_nλ_0,1 and μ_i/np_nλ_0,1. Therefore, d_i/np_nλ_0,1≤ X_n,i< δμ_i/np_nλ_0,1. Then d_i/μ_i≤δ. Since -log x→∞ as x→0^+ and d_i/μ_i≥0, for small enough δ, by (<ref>) we have (μ_i/np_nλ_0,1)^3/3X_n,i^3=-log(d_i/μ_i)+(d_i/μ_i-1)-1/2(d_i/μ_i-1)^2/(1-d_i/μ_i)^3≤ -2log(d_i/μ_i). Consequently, it follows that 1/3X_n,i^3d_i/np_nλ_0,1|d_i-μ_i/np_nλ_0,1|^3≤ -2log(d_i/μ_i)d_i/μ_i|d_i-μ_i|^3/μ_i^2np_nλ_0,1. Note that lim_x→0^+xlog x=o(1). For small enough δ, it follows that -2log(d_i/μ_i)d_i/μ_i≤ 1 and hence 1/3X_n,i^3d_i/np_nλ_0,1|d_i-μ_i/np_nλ_0,1|^3≤|d_i-μ_i|^3/μ_i^2np_nλ_0,1. By (<ref>) and (<ref>), for a fixed small constant δ∈(0,1), one has ∑_i=1^n1/3X_n,i^3d_i/np_nλ_0,1|d_i-μ_i/np_nλ_0,1|^3 = ∑_i=1^n1/3X_n,i^3d_i/np_nλ_0,1|d_i-μ_i/np_nλ_0,1|^3I[X_n,i<δμ_i/np_nλ_0,1] + ∑_i=1^n1/3X_n,i^3d_i/np_nλ_0,1|d_i-μ_i/np_nλ_0,1|^3I[X_n,i≥δμ_i/np_nλ_0,1] ≤ 1/np_nλ_0,1∑_i=1^n|d_i-μ_i|^3/μ_i^2+1/3δ^3∑_i=1^nd_i/np_nλ_0,1|d_i-μ_i/μ_i|^3. By (<ref>), it follows that 1/np_nλ_0,1𝔼[|∑_i=1^nd_i/μ_i(d_i-μ_i)^3/μ_i^2|] ≤ 1/np_nλ_0,1∑_i=1^n√(𝔼(d_i/μ_i)^2)√(𝔼(d_i-μ_i)^6/μ_i^4) ≤ 1/np_nλ_0,1∑_i=1^n(1+1/√(μ_i))√(𝔼(d_i-μ_i)^6/μ_i^4) = 1/p_nO(1/√(np_n)). Hence by(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), we get that ∑_i=1^nd_i/np_nλ_0,1log(d_i/np_nλ_0,1)-∑_i=1^nμ_i/np_nλ_0,1log(μ_i/np_nλ_0,1) = ∑_i=1^nd_i-μ_i/np_nλ_0,1(1+log(μ_i/np_nλ_0,1)) +1/2np_nλ_0,1∑_i=1^n(d_i-μ_i)^2/μ_i+1/p_nO_P(1/√(np_n)). Further, it follows from Lemma <ref> that 1/√(s_2p_n)∑_i=1^nd_ilog(d_i/np_nλ_0,1)-1/√(s_2p_n)∑_i=1^nμ_ilog(μ_i/np_nλ_0,1) = ∑_i=1^nd_i-μ_i/√(s_2p_n)(1+l_i)+1/2∑_i=1^n∑_j≠ i(A_ij-p_nf_ij)^2/μ_i√(s_2p_n)+O_P(√(np_n)/p_n√(s_2p_n)+√(n)/√(s_2p_n)) = O_P(1)+O_P(p_nτ_1/√(s_2p_n)), and the rates O_P(1) and O_P(p_nτ_1/√(p_ns_2)) cannot be improved. Then (<ref>) follows from (<ref>) and (<ref>). Under the assumptions of Theorem <ref>, the following results are true. ∑_i=1^nd_i-μ_i/√(s_2p_n)(1+l_i) ⇒ 𝒩(0,1), ∑_i=1^n(d_i-μ_i)^2/μ_i√(s_2p_n) = p_nτ_1-p_n^2τ_1,2/√(s_2p_n)+o_P(1). Proof of Lemma <ref>. We firstly prove (<ref>). 
Note that ∑_i=1^n(d_i-μ_i)(1+l_i) = ∑_i<j(A_ij-p_nf_ij)(2+l_i+l_j). Then 𝔼[∑_i<j(A_ij-p_nf_ij)(2+l_i+l_j)]^2 = ∑_i<j𝔼(A_ij-p_nf_ij)^2(2+l_i+l_j)^2 = ∑_i<j(2+l_i+l_j)^2p_nf_ij(1-p_nf_ij)=s_2p_n. Besides, ∑_i<j𝔼(A_ij-p_nf_ij)^4(2+l_i+l_j)^4/s_2^2p_n^2=O(∑_i<j(2+l_i+l_j)^4p_nf_ij/s_2^2p_n^2)=O(s_4/s_2^2p_n)=o(1). By the Lyapunov central limit theorem, we have ∑_i<j(A_ij-p_nf_ij)(2+l_i+l_j)/√(s_2p_n)⇒𝒩(0,1). Next we prove (<ref>) . Note that ∑_i=1^n(d_i-μ_i)^2/μ_i = ∑_i=1^n∑_j≠ k≠ i(A_ij-p_nf_ij)(A_ik-p_nf_ik)/μ_i +∑_i=1^n∑_j≠ i(A_ij-p_nf_ij)^2/μ_i. Since 𝔼[∑_i=1^n∑_j≠ k≠ i(A_ij-p_nf_ij)(A_ik-p_nf_ik)/μ_i]^2 = ∑_i=1^n∑_j≠ k≠ i𝔼(A_ij-p_nf_ij)^2(A_ik-p_nf_ik)^2/μ_i^2=O(n), then ∑_i=1^n(d_i-μ_i)^2/μ_i = ∑_i=1^n∑_j≠ i(A_ij-p_nf_ij)^2/μ_i+O_P(√(n)). Note that ∑_i=1^n∑_j≠ i(A_ij-p_nf_ij)^2/μ_i=∑_i<j(1/μ_i+1/μ_j)(A_ij-p_nf_ij)^2, ∑_i<j(1/μ_i+1/μ_j)𝔼(A_ij-p_nf_ij)^2=∑_i<j(1/μ_i+1/μ_j)p_nf_ij(1-p_nf_ij), and Var(∑_i<j(1/μ_i+1/μ_j)(A_ij-p_nf_ij)^2)=p_nτ_2(1+o(1)). Since τ_2=o(s_2), then ∑_i<j(1/μ_i+1/μ_j)(A_ij-p_nf_ij)^2/√(s_2p_n)=∑_i<j(1/μ_i+1/μ_j)p_nf_ij(1-p_nf_ij)/√(s_2p_n)+o_P(1). For positive k with k≠τ, we have 𝔼(ω̃_1^k)=n^k-τ/2k/k-τ-τ/k-τ. Proof of Lemma <ref>: Recall that ω̃_i=min{ω_i,√(n)}. By definition, the k-th moment of ω̃_1 is equal to 𝔼(ω̃_1^k) = ∫_1^+∞ (ω_i∧√(n))^kτω_1^-τ-1dω_1 = ∫_1^√(n)τω^k-τ-1dω+ ∫_√(n)^+∞ n^k/2τω^-τ-1dω = τ/k-τω^k-τ|_1^√(n)+τ n^k/21/(-τ)ω^-τ|_√(n)^+∞ = τ/k-τ(n^k-τ/2-1)+n^k-τ/2 = n^k-τ/2(τ/k-τ+1)-τ/k-τ = n^k-τ/2k/k-τ-τ/k-τ, kτ. Proof of Theorem <ref>. The proof strategy is similar to the proof of Theorem <ref>. Let μ=τ^2/(τ-1)^2. By Lemma <ref>, μ_i=𝔼(d_i)=pμ. Simple algebra yields ∑_i=1^n(d_i/n)^2-p^2μ^2/n = 2pμ/n∑_i=1^nd_i-μ_i/n+∑_i=1^n(d_i-μ_i/n)^2. We now find the order of each term in the right-hand side of (<ref>). The first term of (<ref>) can be decomposed as ∑_i=1^nd_i-μ_i/n=∑_i≠ j(A_ij-pω̃_iω̃_j/n)/n+∑_i≠ j(pω̃_iω̃_j/n-pμ/n)/n. Note that A_ij (1≤ i<j≤ n) are conditionally independent given W. Then 𝔼[ ∑_i< j(A_ij-pω̃_iω̃_j/n)/n]^2=∑_i< j𝔼(A_ij-pω̃_iω̃_j/n)^2/n^2≤𝔼[pω̃_iω̃_j/n]=O(1/n). The second moment of the second term of (<ref>) can be bounded as 𝔼[ p∑_i< j(ω̃_iω̃_j-μ)/n^2]^2 = p^2/n^4O(∑_i≠ j≠ k𝔼(ω̃_iω̃_j-μ)(ω̃_iω̃_k-μ)) +p^2/n^4O(∑_i< j𝔼(ω̃_iω̃_j-μ)^2) = p^2μ^2/nO(n^2-τ/2)+p^2/n^2O(n^2-τ) = O(n^-τ/2p^2μ^2), where we used Lemma <ref> in the second equality. Hence the first term of (<ref>) is O_P(n^-1-τ/4pμ). Now we consider the second term of (<ref>). By (<ref>) and (<ref>), we have ∑_i=1^n(d_i-μ_i/n)^2 = 1/n^2∑_i=1^n(∑_i≠ j(A_ij-pω̃_iω̃_j/n))^2+p^2/n^4∑_i=1^n(∑_i≠ j(ω̃_iω̃_j-μ))^2 +2p/n^3∑_i=1^n([∑_i≠ j(A_ij-pω̃_iω̃_j/n)][∑_i≠ j(ω̃_iω̃_j-μ)]) = O_P(1/n)+p^2/n^4∑_i≠ j≠ k(ω̃_iω̃_j-μ)(ω̃_iω̃_k-μ)+O_P(1/n^1/2+τ/4), where the last term O_P(1/n^1/2+τ/4) follows from the Cauchy-Schwarz inequality, (<ref>) and (<ref>). Note that 𝔼[∑_i≠ j≠ kω̃_i^2ω̃_jω̃_k]≍ n^3+2-τ/2. Then ∑_i≠ j≠ k(ω̃_iω̃_j-μ)(ω̃_iω̃_k-μ) = ∑_i≠ j≠ k(ω̃_i^2ω̃_jω̃_k-ω̃_iω̃_jμ-ω̃_iω̃_kμ+μ^2) = (1+o_P(1))∑_i≠ j≠ kω̃_i^2ω̃_jω̃_k = (1+o_P(1))2∑_i<j<k(ω̃_i^2ω̃_jω̃_k+ω̃_iω̃_j^2ω̃_k+ω̃_iω̃_jω̃_k^2). Hence, by Lemma <ref>, the second term of (<ref>) is the leading term and its exact order is O_P(n^-τ/2). Moreover, by (<ref>), (<ref>) and (<ref>), we obtain d/n=pμ/n+O_P(pμ/n^1+τ/4). Then the desired result follows. Let θ_n=3μ(n^2-τ/22/2-τ-τ/2-τ) and U_n=1/n3∑_i<j<k(ω̃_i^2ω̃_jω̃_k+ω̃_iω̃_j^2ω̃_k+ω̃_iω̃_jω̃_k^2-θ_n). Then √(4-τ)U_n/6μ n^1/2-τ/4⇒𝒩(0,1). Proof of Lemma <ref>. Note that U_n is a U-statistic of order 3. We shall use the asymptotic theory of U-statistics to get the desired result (<ref>). 
Let ϕ(ω̃_1,ω̃_2,ω̃_3)=ω̃_1^2ω̃_2ω̃_3+ω̃_1ω̃_2^2ω̃_3+ω̃_1ω̃_2ω̃_3^2 and ϕ_1(ω̃_1)=𝔼[ϕ(ω̃_1,ω̃_2,ω̃_3)|ω̃_1]. Direct calculation yields ϕ_1(ω̃_1)=ω̃_1^2μ+2ω̃_1η_n with η_n=(n^2-τ/22/2-τ-τ/2-τ)τ/τ-1. Note that 𝔼[U_n|ω̃_1] =1/n3∑_1≤ i<j<k≤ n𝔼[ϕ(ω̃_i,ω̃_j,ω̃_k)-θ_n)|ω̃_1]=3/n(ϕ_1(ω̃_1)-θ_n). Let Ũ_n=3/n∑_i=1^n(ϕ_1(ω̃_i)-θ_n) and σ_n^2=Var(Ũ_n). Then σ_n^2 = 9/n𝔼(ϕ_1(ω̃_1)-θ_n)^2 = 9/n[𝔼(ω̃_1^4)μ^2+4η_n^2𝔼(ω̃_1^2)+4μη_n𝔼(ω̃_1^3)-θ_n^2] = (1+o(1))9/n[4μ^2/4-τn^4-τ/2+8η_n^2/2-τn^2-τ/2+12μη_n/3-τn^3-τ/2-θ_n^2] = (1+o(1))36μ^2/4-τn^4-τ/2-1. Let Y_i=3/n(ϕ_1(ω̃_i)-θ_n). Then Y_i (1≤ i≤ n) are independent, 𝔼(Y_i)=0 and Ũ_n=∑_i=1^nY_i. Since ∑_i=1^n𝔼(Y_i^4)/σ_n^4 = 81/n^4σ^4∑_i=1^n𝔼[(ϕ_1(ω̃_i)-θ_n)^4]=O(𝔼(ω̃_1^8+ω̃_1^4η_n^4)/n^5-τ) = O(n^8-τ/2+n^4-τ/2+2(2-τ))/n^5-τ)=o(1), by the Lyapunov Central Limit Theorem, we get that Ũ_n/σ_n⇒𝒩(0,1). To finish the proof, it suffices to show U_n/σ_n=Ũ_n/σ_n+o_P(1). Note that 𝔼[Ũ_nU_n] =𝔼[3/n∑_i=1^n(ϕ_1(ω̃_i)-θ_n)U_n] =3/n∑_i=1^n𝔼[(ϕ_1(ω̃_i)-θ_n)𝔼(U_n|ω̃_i)] =3^2/n^2∑_i=1^n𝔼[ϕ_1(ω̃_i)-θ_n]^2 =3^2/n𝔼[ϕ_1(ω̃_1)-θ_n]^2 =3^2/nVar(ϕ_1(ω̃_1))=Var(Ũ_n). Then 𝔼[U_n-Ũ_n/σ_n]^2 = 1/σ_n^2[𝔼(U_n)^2+𝔼(Ũ_n^2)-2𝔼(Ũ_nU_n)] = 1/σ_n^2[𝔼(U_n^2)-𝔼(Ũ_n^2)]. Next, we find 𝔼(U_n^2). 𝔼(U_n^2) = 1/n3^2∑_i<j<k, i_1<j_1<k_1𝔼(ϕ(ω̃_i,ω̃_j,ω̃_k)-θ_n)(ϕ(ω̃_i_1,ω̃_j_1,ω̃_k_1)-θ_n) = 1/n3^2∑_1≤ i<j<k≤ n𝔼(ϕ(ω̃_i,ω̃_j,ω̃_k)-θ_n)^2 +1/n3^2∑_i<j<k, i_1<j_1<k_1 |{i,j,k}∩{i_1,j_1,k_1}|=2𝔼(ϕ(ω̃_i,ω̃_j,ω̃_k)-θ_n)(ϕ(ω̃_i_1,ω̃_j_1,ω̃_k_1)-θ_n) +1/n3^2∑_i<j<k, i_1<j_1<k_1 |{i,j,k}∩{i_1,j_1,k_1}|=1𝔼(ϕ(ω̃_i,ω̃_j,ω̃_k)-θ_n)(ϕ(ω̃_i_1,ω̃_j_1,ω̃_k_1)-θ_n) = O(1/n^3/2τ-1)+O(1/n^τ-1)+σ_n^2(1+o(1)). Combining (<ref>) and (<ref>) yields U_n/σ_n=Ũ_n/σ_n+o_p(1). Then the proof is complete. Proof of Proposition <ref>: When α>1, the function f(x)=x^α is convex for x>0. By Jensen inequality, we have 1/n∑_i=1^n(d_i/d)^α=1/n∑_i=1^n(d_i/n)^α/(1/n∑_i=1^nd_i/n)^α≥1/n∑_i=1^n(d_i/n)^α/1/n∑_i=1^n(d_i/n)^α=1. Then ℛ_α∈ [0,1]. When α∈(0,1), the function f(x)=-x^α is convex for x>0. By Jensen inequality, we have -1/n∑_i=1^n(d_i/d)^α≥-(1/n∑_i=1^nd_i/d)^α=-1. Then ℛ_α∈ [0,1]. When α=1, the function f(x)=xlog x is convex for x>0. By Jensen inequality, we have 1/n∑_i=1^n d_i/dlog(d_i/d)≥ f(1)=0. Then ℛ_α∈ [0,1]. § ACKNOWLEDGEMENT The author is grateful to Editor and anonymous reviewers for valuable comments that significantly improve the manuscript. 9 A18 Abbe, E., Community Detection and Stochastic Block Models: Recent Developments. Journal of Machine Learning Research. 2018, 18, 1-86. ACB13 Amini, A., Chen, A. and Bickel, P. (2013). Pseudo-likelihood methods for community detection in large sparse networks. Annals of Statistics, 41(4), 2097-2122. BS16 Bickel, P. J. and Sarkar, P. (2016). Hypothesis testing for automated community detection in networks. Journal of Royal Statistical Society, Series B, 78, 253-273. BM05 Bianconi, G. and Marsili, M. (2005), Emergence of large cliques in random scale-free network,Europhysics Letters,74,740. BM06 Bianconi, G. and Marsili, M. (2006). Number of cliques in random scale-free network ensembles,Physica D: Nonlinear Phenomena, 224,:1-6. BCH20 Bogerd,K., Castro, R., and Hofstad, R.(2020). Cliques in rank-1 random graphs: The role of inhomogeneity,Bernoulli, 26(1): 253-285 . BDM06 T. Britton, M. Deijfen, A. Martin-Lof,(2006). Generating simple random graphs with prescribed degree distribution, Journal of Statistical Physics, 124:1377–1397. CY06Chen, J. and Yuan, B. (2006). Detecting functional modules in the yeast prote in protein interaction network. Bioinformatics, 22(18), 2283-2290. 
CGL16 Chiasserini, C.F., Garetto, M. and Leonardi, E. (2016). Social Network De-Anonymization Under Scale-Free User Relations. IEEE/ACM Transactions on Networking, 24(6), 3756-3769. CSN09 Clauset, A., Shalizi, C. R. and Newman, M. (2009). Power-law distributions in empirical data. SIAM Review, 51(4), 661-703. CHHS21 Chakrabarty, A., Hazra, S. R., Hollander, F. D. and Sfragara, M. (2021). Spectra of adjacency and Laplacian matrices of inhomogeneous Erdös-Rényi random graphs. Random Matrices: Theory and Applications, 10(1), 215009. CHHS20 Chakrabarty, A., Hazra, S. R., Hollander, F. D. and Sfragara, M. (2020). Large deviation principle for the maximal eigenvalue of inhomogeneous Erdös-Rényi random graphs. Journal of Theoretical Probability, https://doi.org/10.1007/s10959-021-01138-w. CCH20 Chakrabarty, A., Chakrabarty, S. and Hazra, R. S. (2020). Eigenvalues outside the bulk of inhomogeneous Erdös-Rényi random graphs. Journal of Statistical Physics, 181, 1746-1780. C89 Coulter, P. B. (1989). Measuring Inequality: A Methodological Handbook. Westview Press, Boulder. C18 Cruz, C. (2018). Social Networks and the Targeting of Vote Buying. Comparative Political Studies, 52(3), 382-411. E11 Eliazar, I. (2011). Randomness, evenness, and Rényi's index. Physica A: Statistical Mechanics and its Applications, 390(11), 1982-1990. ES12 Eliazar, I. and Sokolov, I. (2012). Measuring statistical evenness: A panoramic overview. Physica A: Statistical Mechanics and its Applications, 391, 1323-1353. GZFA Goldenberg, A., Zheng, A. X., Fienberg, S. E. and Airoldi, E. M. (2010). A survey of statistical network models. Foundations and Trends in Machine Learning, 2(2), 129-233. JLN10 Janson, S., Luczak, T. and Norros, I. (2010). Large cliques in a power-law random graph. Journal of Applied Probability, 47, 1124-1135. JLS19 Janssen, A., Leeuwaarden, J. and Shneer, S. (2019). Counting cliques and cycles in scale-free inhomogeneous random graphs. Journal of Statistical Physics, 175, 161-184. KRTB22 Kulahci, I. et al. (2022). Social networks predict selective observation and information spread in ravens. Royal Society Open Science, 3, 160256. K07 Kardar, M. (2007). Statistical Physics of Particles. Cambridge University Press, Cambridge. KN11 Karrer, B. and Newman, M. E. (2011). Stochastic blockmodels and community structure in networks. Physical Review E, 83(1), 016107. NSL16 Nie, C. X., Song, F. T. and Li, S. P. (2016). Rényi indices of financial minimum spanning trees. Physica A: Statistical Mechanics and its Applications, 444, 883-889. NS19 Nie, C. X. and Song, F. T. (2019). Global Rényi index of the distance matrix. Physica A: Statistical Mechanics and its Applications, 514, 902-915. NS21 Nie, C. X. and Song, F. T. (2021). Entropy of graphs in financial markets. Computational Economics, 57, 1149-1166. N21 Nie, C. X. (2021). Studying the correlation structure based on market geometry. Journal of Economic Interaction and Coordination, 16, 411-441. N03 Newman, M. (2003). The Structure and Function of Complex Networks. SIAM Review, 45(2), 167-256. RPF13 Rinaldo, A., Petrović, S. and Fienberg, S. E. (2013). Maximum likelihood estimation in the β-model. The Annals of Statistics, 41(3), 1085-1110. REE08 Read, J. M., Eames, K. T. and Edmunds, W. J. (2008). Dynamic social networks and the implications for the spread of infectious disease. Journal of the Royal Society Interface, 5(26), 1001-1007. VHHK19 Voitalov, I. et al. (2019). Scale-free networks well done. Physical Review Research, 1, 033034. YXL21 Yu, L., Xu, J. and Lin, X. (2021). The power of D-hops in matching power-law graphs. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 5(2), 1-43. YS21 Yuan, M. and Shang, Z. (2021). Information limits for detecting a subhypergraph. Stat, e407. YS21b Yuan, M. and Shang, Z. (2022). Sharp detection boundaries on testing dense subhypergraph. Bernoulli, 28(4), 2459-2491.
http://arxiv.org/abs/2307.00236v1
20230701055717
Visualizing departures from marginal homogeneity for square contingency tables with ordered categories
[ "Satoru Shinoda", "Takuya Yoshimoto", "Kouji Tahata" ]
stat.ME
[ "stat.ME" ]
Visualizing departures from marginal homogeneity for square contingency tables with ordered categories Satoru Shinoda^1, Takuya Yoshimoto^2 and Kouji Tahata^3 ^1Department of Biostatistics, Yokohama City University, School of Medicine, Japan ^2Biometrics Department, Chugai Pharmaceutical Co., Ltd., Japan ^3Department of Information Sciences, Faculty of Science and Technology, Tokyo University of Science, Japan E-mail: [email protected] Abstract Square contingency tables, in which the row and column variables share the same categories, are commonly used in various fields to analyze categorical data. Although several methods have been developed to examine marginal homogeneity (MH) in these tables, existing measures are single-summary ones, and no visualization approach has yet been proposed to intuitively depict the results of an MH analysis. Current measures of the degree of departure from MH are based on entropy-type divergences such as the Kullback-Leibler divergence and do not satisfy the distance postulates; hence they are not conducive to visualization. Herein we present a measure utilizing the Matusita distance and introduce a visualization technique that employs sub-measures defined at each category level. Through multiple examples, we demonstrate that our visualization approach is meaningful and provides insightful interpretations. Key words: Marginal homogeneity, Matusita distance, power-divergence, visualization. 1. Introduction Numerous research areas employ categorical data analysis. Such data are summarized in a contingency table (see e.g., Agresti, 2013; Kateri, 2014). A special case is a square contingency table, where the row and column variables have the same ordinal categories. When data cannot be obtained as continuous variables for the evaluation of the efficacy and safety/toxicity of treatments in clinical studies, ordered categorical scales are used instead. For example, Sugano et al. (2012) conducted a clinical study in which they examined the modified LANZA score (MLS) after 24 weeks' treatment with esomeprazole 20 mg once daily or a placebo. The MLS is a popular evaluation scale with five stages (from 0 to +4) and is used for clinical evaluations of gastroduodenal mucosal lesions. Table 1 shows a square contingency table that summarizes the shift of the MLS from pre-treatment to post-treatment for each patient. Such research asks whether patients tend to improve or worsen after the intervention relative to before it. Thus, the evaluation concerns departures from marginal homogeneity (MH), not independence. Stuart (1955) introduced the MH model to indicate homogeneity of the two marginal distributions. We are also interested in the structure of inhomogeneity of the two marginal distributions when the MH model does not hold, because the deviation between the pre-treatment and post-treatment marginal distributions (i.e., the intervention result) is of greater interest than whether the MH model, which represents equal marginal distributions, holds for the data in Table 1. Consequently, our strategy is to estimate measures representing the degree of departure from MH. Such measures must quantify differences between probability distributions, mainly using information divergences such as the Kullback-Leibler divergence or the power-divergence.
To this end, Tomizawa, Miyamoto and Ashihara (2003) proposed a measure using the marginal cumulative probability for square contingency tables with ordered categories. This measure ranges from 0 to 1 and directly represents the degree of departure from MH. However, it cannot distinguish the direction of degree of departure. The two marginal distributions are interpreted as equal (no intervention effect) when the value is 0. When the values are greater than 0, an improvement is indistinguishable from a worsening effect. Yamamoto, Ando and Tomizawa (2011) proposed a measure, which lies between -1 and 1, to distinguish the directionality. This measure cannot represent the degree of departure directly from MH. Even if the value of the measure is 0, the marginal distribution cannot be exactly interpreted as having no intervention effect. To simultaneously analyze the degree and directionality of departure from MH, Ando, Noguchi, Ishii and Tomizawa (2021) proposed a two-dimensional visualized measure that combines the measure proposed by Tomizawa et al. (2003) and the measure proposed by Yamamoto et al. (2011). They also considered visually comparing the degrees of departure from MH in several tables because their measure is independent of the dimensions (i.e., number of categorical values) and sample size. Appendix 1 explains the main points of the above measures. These measures proposed by Tomizawa et al. (2003), Yamamoto et al. (2011) and Ando et al. (2021) are single-summaries. They are expressed using the sub-measure weights at each categorical level. For a given category level, different behaviors cannot be distinguished as a single-summary measure. The artificial data examples in the data analysis section provide specific situations. Hence, a single-summary-measure may overlook different behaviors in a given categorical level. To address this limitation, we apply visualization as a method utilizing sub-measures defined at each category level. This visualization also assumes that satisfying distance postulates can achieve a natural interpretation. To date, a measure for ordered categories does not exist because the Kullback-Leibler divergence or power-divergence used in existing measures do not satisfy the distance postulates. Therefore, we consider a measure using the Matusita distance to capture the discrepancy between two probability distributions while satisfying the distance postulates (see Matusita, 1954, 1955; Read and Cressie, 1988, p.112). Both academia and general society employ methods to visualize quantitative data. Examples include pie charts, histograms, and scatterplots. Although visualizing categorical data has attracted attention recently, different visualization techniques from those for quantitative data are necessary (see, e.g., Blasius and Greenacre, 1998; Friendly and Meyer, 2015; Kateri, 2014). Visualization of categorical data has two main objectives: revealing the characteristics of the data and intuitively understanding analysis results (Friendly and Meyer, 2015). Methods for the former include the “mosaic plot” and “sieve diagram” (see e.g., Friendly, 1995; Hartigan and Kleiner, 1981, 1984; Riedwyl and Schüpbach, 1983, 1994). Methods for the latter include the “fourfold display” for odds ratios and the “observer agreement chart” for Cohen’s κ (see e.g., Bangdiwala 1985, 1987; Fienberg, 1975; Friendly, 1994). 
Although the visualization objectives for categorical data may vary, they share common techniques: (i) separating data by categorical levels and (ii) adjusting the size of figure objects based on the frequency of each cell. Our research aims to realize a visualization for an intuitive understanding of the analysis results for MH. To date, such a visualization has yet to be proposed. Although the “mosaic plot” and “sieve diagram” can be applied to square contingency tables, they are not suitable for examining the structure of MH. These visualizations are designed to observe the data itself and identify features or patterns without making hypotheses before analyzing the data. Therefore, our proposed visualization provides an intuitive understanding of the structure of MH using categorical data visualization techniques (i) and (ii). This paper conducts a comprehensive analysis of the degree and directionality of departure from MH for square contingency tables with ordered categories. Our approach has two components: (i) measures to quantify the degree of departure of MH using information divergence satisfying distance postulates and (ii) a visualization technique designed for categorical data. The rest of this paper is organized as follows. Section 2 defines the proposed measure and visualization. Section 3 derives an approximated confidence interval for the proposed measure. Section 4 provides examples of the utility for the proposed measure and visualization. Section 5 presents the discussion. Finally, Section 6 closes with concluding remarks. 2. Proposed measure and visualization Here, we detail the proposed measure and visualization. Section 2.1 explains the probability structure of the MH model using formulas. Section 2.2 defines the sub-measures and single-summary-measure expressed using weights for the sub-measures at each categorical level along with the properties of the proposed measure. Section 2.3 details the visualization of the proposed measures. 2.1. MH model Consider an r × r square contingency table with the same row and column ordinal classifications. Let X and Y denote the row and column variables, respectively, and let Pr(X = i , Y = j) = p_ij for i = 1, … , r; j = 1, … , r. The MH model can be expressed with various formulas. For example, the MH model is expressed as p_i · = p_· i for  i = 1, … , r, where p_i · = ∑^r_t=1p_it and p_· i = ∑^r_s=1p_si. See e.g., Stuart (1955) and Bishop, Fienberg and Holland (1975, p.294). This indicates that the row marginal distribution is identical to the column marginal distribution. To consider ordered categories, the MH model can be expressed using the marginal cumulative probability as F_1(i) = F_2(i) for  i = 1, … , r-1, where F_1(i) = ∑^i_s=1 p_s · = Pr(X ≤ i) and F_2(i) = ∑^i_t=1 p_· t = Pr(Y ≤ i). The MH model can also be expressed as G_1(i) = G_2(i) for  i = 1, … , r-1, where G_1(i) = ∑^i_s=1∑^r_t=i+1 p_st = Pr(X ≤ i, Y ≥ i+1) and G_2(i) = ∑^r_s=i+1∑^i_t=1 p_st = Pr(X ≥ i+1, Y ≤ i). Furthermore, the MH model can be expressed as G^c_1(i) = G^c_2(i)( = 1/2) for  i = 1, … , r-1, where G^c_1(i) = G_1(i)/G_1(i) + G_2(i), G^c_2(i) = G_2(i)/G_1(i) + G_2(i). The MH model states that the conditional probability of X ≤ i is given if either X or Y ≤ i and the other ≥ i+1 is equal to the conditional probability that Y ≤ i for the same conditions. 2.2. Measure of departure from MH Several measures have been proposed for various formulas of the MH model. 
Here, we consider a measure that is independent of the diagonal probabilities because the MH model does not have constraints on the main-diagonal cell probabilities. For instance, Tomizawa et al. (2003) and Yamamoto et al. (2011) proposed measures that do not depend on the diagonal probabilities. First, we consider a sub-measure satisfying the distance postulates. Assuming that G_1(i) + G_2(i)≠ 0, the degree of departure from MH at each categorical level i (i=1, …, r-1) is given as γ_i = [ 2+√(2)/2( υ_1(i)^2 + υ_2(i)^2 ) ]^1/2, where υ_1(i) = √(G^c_1(i)) - √(1/2), υ_2(i) = √(G^c_2(i)) - √(1/2). The sub-measure γ_i has the following characteristics: (i) 0 ≤γ_i ≤ 1 (ii) γ_i = 0 if and only if G^c_1(i) = G^c_2(i) (= 1/2) (iii) γ_i = 1 if and only if G^c_1(i) =1 (then G^c_2(i) = 0) or G^c_1(i) =0 (then G^c_2(i) = 1) The sub-measure γ_i is the Matusita distance between ( G^c_1(i), G^c_2(i)) and ( 1/2, 1/2), and satisfies all three distance postulates. When the value of the sub-measure is 0, it means the marginal cumulative probabilities are equivalent until categorical level i. The value of the sub-measure increases as the separation between the marginal cumulative distributions increases. The separation is maximized when the value of the sub-measure is 1. Noting that a distance d is defined on a set W if for any two elements x, y ∈ W, a real number d(x, y) is assigned that satisfies the following postulates: (i) d(x, y) ≥ 0 with equality if and only if x=y; (ii) d(y, x) = d(x, y); (iii) d(x, z) ≤ d(x, y) + d(y, z) for x, y, z ∈ W (the triangle inequality). See also Read and Cressie (1988, p.111). Then the power-divergence I^(λ) (especially, the Kullback-Leibler divergence I^(0)) does not satisfy postulates (ii) and (iii). The Matusita distance, which is the square root of I^(-1/2), satisfies all three postulates. Assuming that { G_1(i) + G_2(i)≠ 0 }, we consider a measure using sub-measure γ_i to represent the degree of departure from MH, which is given as Γ = ∑^r-1_i=1( G^∗_1(i) + G^∗_2(i)) γ_i, where Δ = ∑^R-1_i=1( G_1(i) + G_2(i)), and G^∗_1(i) = G_1(i)/Δ, G^∗_2(i) = G_2(i)/Δ, for i=1, …, r-1. The measure Γ has the following characteristics: (i) 0 ≤Γ≤ 1 (ii) Γ = 0 if and only if the MH model holds (iii) Γ = 1 if and only if the degree of departure from MH is a maximum, in the sense that G^c_1(i)=1 (then G^c_2(i)=0) or G^c_1(i)=0 (then G^c_2(i)=1), for i = 1, …, r-1 Thus, this measure is the weighted sum of the Matusita distance for the two distributions ( G^c_1(i), G^c_2(i)) and ( 1/2, 1/2). 2.3. Visualization of the proposed measure To visualize the proposed measure, we used the techniques for visualizing categorical data. First, for the fixed i (i=1, …, r-1), γ_i, which represents the relationship between G^c_1(i) and G^c_2(i), is defined by the following steps: (i) Plot the x-axis is G^c_1(i) and the y-axis is G^c_2(i) point for each ( G^c_1(i), G^c_2(i)) coordinate (ii) Adjust the point size according to the weight ( G^∗_1(i) + G^∗_2(i)) (iii) Display the value of γ_i as a text label at each ( G^c_1(i), G^c_2(i)) point (iv) Color the points red when (G^c_1(i) < G^c_2(i)) and blue when ( G^c_1(i)≥ G^c_2(i)) (v) Draw the dashed line within the diagonal point’s range of movement and color the dashed line using the same rules Therefore, the top-left side is red, while the bottom-right side is blue with respect to the point ( 1/2, 1/2) in the visualization. Table 2 shows a visualization image. 
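For concreteness, the quantities entering this visualization (the coordinates (G^c_1(i), G^c_2(i)), the weights G^∗_1(i)+G^∗_2(i), the sub-measures γ_i and the summary measure Γ) can be computed from a table of counts as in the following minimal R sketch. The helper name mh_measure() and the 4 × 4 count table are hypothetical and serve only to illustrate the calculation; the sketch assumes G_1(i)+G_2(i) > 0 for every i, as in the text.

mh_measure <- function(tab) {
  p <- tab / sum(tab)                                # sample proportions p-hat_ij (row = X, column = Y)
  r <- nrow(p)
  G1 <- sapply(1:(r - 1), function(i) sum(p[1:i, (i + 1):r]))   # Pr(X <= i, Y >= i+1)
  G2 <- sapply(1:(r - 1), function(i) sum(p[(i + 1):r, 1:i]))   # Pr(X >= i+1, Y <= i)
  Gc1 <- G1 / (G1 + G2); Gc2 <- G2 / (G1 + G2)
  u1 <- sqrt(Gc1) - sqrt(0.5); u2 <- sqrt(Gc2) - sqrt(0.5)
  gam <- sqrt((2 + sqrt(2)) / 2 * (u1^2 + u2^2))     # sub-measures gamma_i
  w <- (G1 + G2) / sum(G1 + G2)                      # weights G*_1(i) + G*_2(i)
  list(Gc1 = Gc1, Gc2 = Gc2, weight = w, gamma = gam, Gamma = sum(w * gam))
}

# Hypothetical 4 x 4 table of counts (rows: pre-treatment, columns: post-treatment)
tab <- matrix(c(10, 6, 2, 1,
                3, 12, 5, 2,
                1, 4, 14, 6,
                0, 1, 3, 9), nrow = 4, byrow = TRUE)
mh_measure(tab)

The returned coordinates, weights and sub-measures are exactly the ingredients of the point positions, point sizes and text labels in the proposed plot.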
Table 3 presents the necessary information to visualize Table 2, including G^c_1(i) and G^c_2(i) used for the coordinates of the point, the weight used for the point size, and the sub-measure γ_i used for the text label. Step 1 visualizes each level i. As an example, Figure 1 depicts how γ_i is visualized at level i=1. Next, we provide additional definitions to integrate each γ_i in step 1 into one figure: (i) Consider the x-axis as i for G^c_1(i) and the y-axis as i for G^c_2(i) (ii) Place the figure of γ_i on the diagonal Figure 2 shows the integrated figure using the example from Table 2 in step 2 according to the definition of the proposed visualization. The visualization of the proposed measure using the categorical data methods has the following benefits. First, the visualization provides information about each i, allowing trends in MH to be identified in a square contingency table. Since the figure visualizes each γ_i, points do not overlap even if their coordinates are close. Thus, points are easily identifiable. It is important to visualize each γ_i separately since each one is assumed to be nearly the same value. Ando et al. (2021) used a Kullback-Leibler divergence-type measure, but the Kullback-Leibler divergence does not satisfy the distance postulates. To naturally interpret the point distances in the figure, the distance postulates must be satisfied. (Section 4.1.1. gives a specific example). Additionally, the proposed visualization can be considered as utilizing sub-measures. 3. Approximate the confidence interval for the measure Let n_ij denote the observed frequency in the ith row and jth column of a table (i =1, …, r; j = 1, …, r). The sample version of Γ (i.e., Γ̂) is given by Γ in which {p_ij} is replaced by {p̂_ij}, where p̂_ij = n_ij/n and n = ∑∑ n_ij. It should be noted that the sample version of G^c_k(i), γ_i and F_k(i), which are Ĝ^c_k(i), γ̂_i and F̂_k(i), respectively, are given in a similar manner (i=1, …, r-1; k=1, 2). Given that {n_ij} arises from a full multinomial sampling, we can estimate the standard error for Γ̂ and construct a large-sample confidence interval for Γ. The delta method can approximate the standard error. √( n)(Γ̂ - Γ) has an asymptotic (as n →∞) normal distribution with mean zero and variance σ^2[ Γ ]. See Appendix 2 for the details of σ^2[ Γ ]. Let σ̂^2[ Γ ] denote σ^2[ Γ ] where {p_ij} is replaced by {p̂_ij}. Then σ̂ [ Γ ]/√( n) is the estimated approximate standard error for Γ̂, and Γ̂± z_p/2σ̂ [ Γ ]/√( n) is an approximate 100(1-p) percent confidence interval for Γ, where z_p/2 is the 100 (1-p/2)th percentile of the standard normal distribution. The asymptotic normal distribution may not be applicable when estimating measures on small sample datasets. In small dataset, the sample proportion of (i, j) cell may fall 0 (i.e., p̂_ij = 0). Thus, we consider Bayesian methods. Although the sample proportion is typically used to estimate the approximate standard error for Γ̂, herein we consider the Bayes estimator derived from the uninformed prior probability. To have a vague prior, the Haldane prior is used for the prior information (see Haldane 1932; Berger 1985, p.89). We set all parameters of the Dirichlet distribution to 0.0001 when estimating the approximate variance of the proposed measure. 4. Data analysis 4.1. Artificial data 4.1.1. 
Role of distance postulates for visualization To illustrate the concept of visualization, we used artificial datasets in two scenarios: one that satisfies the structure of MH and one that has location-shifted marginal distributions. The visualization in Table 4(a) shows that all values of sub-measure γ̂_i are equal to zero, and the value of the proposed measure Γ̂ is zero (i.e., the MH model holds). In terms of information divergences, the two marginal distributions can be interpreted as the same. Therefore, the values of the label, which is the sub-measure using the Matusita distance, are zero, and points are drawn at ( 1/2, 1/2) in the visualization (Figure 3(a)). The visualization in Table 4(b) shows that all values of sub-measure γ̂_i are equal to 0.341 because the assumed structure shows location-shifted marginal distributions. Since we estimated (Ĝ^c_1(i) < Ĝ^c_2(i)), the point on the graph is drawn from ( 1/2, 1/2) to the upper left (Figure 3(b)). Because the label values are sub-measures using the Matusita distance that satisfies distance postulate (ii), it can be interpreted as the distance between ( G^c_1(i), G^c_2(i)) and ( 1/2, 1/2). However, the direction is crucial when using the Kullback-Leibler divergence (see Appendix 1). When using the Kullback-Leibler divergence in Table 4(b), the distance from ( 1/2, 1/2) to ( G^c_1(i), G^c_2(i)) and the distance from ( G^c_1(i), G^c_2(i)) to ( 1/2, 1/2) differ (Table 5). Therefore, the label value must be selected carefully because this divergence may hinder an intuitive interpretation. In addition, it can be evaluated appropriately in indirect comparisons between two points for the distance from a reference since the proposed measure satisfies the triangular inequalities. Thus, the visualization must use a divergence that satisfies the distance postulates. In addition, the proposed visualization gives a natural and intuitive interpretation because we can understand the degree of departure from MH for each level i, and the sub-measure calculated by Ĝ^c_1(i) and Ĝ^c_2(i) compares the marginal cumulative distributions ( F̂_1(i)  and F̂_2(i)). This section shows the visualization in monotonic differences of the marginal cumulative distributions, but the next section illustrates the relationship between marginal cumulative distributions and visualizations in several patterns. 4.1.2. Perception of different behaviors between categorical levels Our visualization can interpret the relationships between the marginal cumulative distributions, which is difficult using a single-summary-measure. Here, we treat artificial data where the values of the single-summary-measure are the same, but the visualizations of the sub-measures behave differently. Tables 6(a)–(d) show the artificial data, which are setup so that the value of the measure is 0.341. Figures 4(a)–(d) show the visualizations of Tables 6(a)–(d). Table 6(a) illustrates a scenario where the marginal cumulative distribution is location-shifted constantly. This structure would be expected based on the value of the measure. In a clinical study, assuming such a situation implies a constant treatment effect from pre-treatment to post-treatment. In contrast, Table 6(b) represents a scenario where the marginal cumulative distribution spreads as the categorical level i increases. Moreover, Tables 6(c)–(d) show situations where the marginal cumulative distribution differs at the categorical level i. 
In a clinical study, assuming such a situation suggests that the treatment effect depends on the pre-intervention condition. 4.2. Simulation studies Monte Carlo simulations were performed to theoretically derive the coverage probabilities of the approximate 95% confidence intervals assuming random sampling of an underlying bivariate normal distribution. Here, we considered random variables Z_1 and Z_2 with means E(Z_1) = 0 and E(Z_2) = d, variances Var(Z_1) = Var(Z_2) = 1, and correlation Corr(Z_1, Z_2 ) = 0.2. Assuming a 6 × 6 table is formed using the cutoff points for each variable at -1.2, -0.6, 0, 0.6, 1.2, we evaluated several simulation scenarios where d = 0.00  to 4.00 by 0.25 and n = 36, 180, 360, 3600 (sparseness index=1, 5, 10, 100). The simulation studies were performed based on 100,000 trials per scenario. Figure 5 plots the mean of random variable Z_2 along with the true value of the measure based on a bivariate normal distribution. When d=0, the true value of the measure is observed as 0 because there is no difference in the means whose condition is stronger than the structure of the MH. Although the true value increases monotonically for d=0, …, 1, a large mean difference between random variables is necessary for the true value to reach 1. Figure 6 shows the coverage probability according to the true values. For a small sample size, it is difficult to obtain a nominal coverage probability, whereas the coverage probability is maintained at a 95% confidence interval for a sufficient sample size. 4.3. Example As an example, consider the data in Table 1. In the original work (Sugano et al., 2012), the proportion of improvement or deterioration for the esomeprazole group (drug group) and placebo group were described. Table 7 shows the results of applying the proposed measure Γ to these data to statistically consider the treatment effects for the drug or placebo. The estimate of asymptotic variance using the sample proportion cannot be calculated because Ĝ^c_1(4)=0 in Table 1(b). Hence, a Bayes estimator is used to estimate the asymptotic variance. The 95% confidence intervals do not cross zero, suggesting that both groups have a higher degree of deviation from MH. That is, the marginal distribution after the treatment shifts compared to that before the treatment. For an intuitive understanding, Figure 7 plots the trend, where blue indicates an improving trend and red a deteriorating one. The drug group shows an improving trend (Ĝ^c_1(i)≥Ĝ^c_2(i)), while the placebo group displays a deteriorating trend (Ĝ^c_1(i) < Ĝ^c_2(i)). For the drug group, i=1, 2, 3 show an improvement trend, while i=4 shows a deteriorating trend although the circle is small (i.e., the proportion of observed frequencies comprising Ĝ^c_1(4) and Ĝ^c_2(4) is small relative to the total). These results imply that there might be differences in treatment effects between i levels. 5. Discussion In the proposed measure, sub-measures are used in the visualization to capture features overlooked by a single summary measure. Previous studies have adopted similar approaches, except that the sub-measures are not used for interpretation (Tomizawa et al., 2003; Yamamoto et al., 2011). This study demonstrates that sub-measures allow two kinds of marginal inhomogeneities to be visualized, providing a more detailed interpretation of the single-summary-measure. The proposed visualization is analyzed using Table 1. 
First, because the Matusita distance satisfies the distance postulates, the visualization that draws points on two-dimensional coordinates can give a natural and intuitive interpretation. In particular, the values of existing measures based on the power-divergence (Kullback-Leibler divergence) that do not satisfy distance postulate (ii) would give different values if the distance from the start point to the end point is swapped. That is, the data in Table 1 would create two visualization patterns. In contrast, for the Matusita distance, the same value is obtained even if the distance from the start point to the end point is swapped. Hence, a special annotation is unnecessary for a visual interpretation. Furthermore, the point in Figure 7(a) where i=1, 2, 3 and i=4 show different directions is difficult to discern using the existing measure proposed by Yamamoto et al. (2011) because it is a single-summary-measure. However, the different directions can be considered intuitively through visualization by level i. The proposed visualization does not draw the points on one coordinate because the degree of departure from MH is likely the same for each level in real data analysis (Figure 7). This is because identifying which level i of points is drawn is difficult. Hence, it is important to satisfy the distance postulates and to consider methods for visualizing categorical data of square contingency tables. The visualization program was implemented in the R programming language (R Core Team, 2023). Noting that a graphical layout in package “ggplot2” is defined by “gtable” (and also “grid”). In addition, the arrangement of multiple figure objects can be set by package “gridExtra”. We used “grid” and “gridExtra” packages for visualization purposes. We referenced the function “agreementplot()” by the “vcd” package, which is the categorical data visualization package for the “observer agreement chart”. 6. Conclusion The proposed measure Γ is the weighted sum of the sub-measures that satisfy all three distance postulates. Here, we demonstrate the approximated confidence interval for Γ. The proposed visualization using the Matusita distance provides a natural visual interpretation of MH in a square contingency table. In addition, we show that the visualization can provide useful interpretations using an example. 99 Agresti, A. (2013). Categorical Data Analysis, 3rd edition. Wiley, Hoboken, New Jersey. Ando, S., Noguchi, T., Ishii, A. and Tomizawa, S. (2021). A two-dimensional index for marginal homogeneity in ordinal square contingency tables . SUT Journal of Mathematics 57, 211–224. Bangdiwala, S.I. (1985). A graphical test for observer agreement. Proceeding of the International Statistics Institute 1, 307–308. Bangdiwala, S.I. (1987). Using SAS software graphical procedures for the observer agreement chart. Proceedings of the SAS User’s Group International Conference 12, 1083–1088. Berger, J.O. (1985). Statistical Decision Theory and Bayesian Analysis, 2nd edition. Springer, New York. Bishop, Y.M.M., Fienberg, S.E. and Holland, P.W. (1975). Discrete Multivariate Analysis: Theory and Practice. The MIT Press, Cambridge, Massachusetts. Blasius, J. and Greenacre, M. (1998). Visualization of Categorical Data. Academic Press, San Diego, California. Cressie, N. and Read, T.R.C. (1984). Multinomial goodness-of-fit tests. Journal of the Royal Statistical Society, Series B 46, 440–464. Fienberg, S.E. (1975). Perspective Canada as a social report. Social Indicators Research 2, 153–174. Friendly, M. (1994). 
A fourfold display for 2 by 2 by k tables. Technical Report 217, York University, Psychology Department. Friendly, M. (1995). Conceptual and visual models for categorical data. The American Statistician 49, 153–160. Friendly, M. and Meyer, D. (2015). Discrete Data Analysis with R: Visualization and Modeling Techniques for Categorical and Count Data. CRC Press, Boca Raton, Florida. Haldane, J.B.S. (1932). A Note on Inverse Probability. Mathematical Proceedings of the Cambridge Philosophical Society 28, 55–61. Hartigan, J.A. and Kleiner, B. (1981). Mosaics for contingency tables. Computer Science and Statistics: Proceedings of the 13th Symposium on the Interface, 268–273. Hartigan, J.A. and Kleiner, B. (1984). A mosaic of television ratings. The American Statistician 38, 32–35. Kateri, M. (2014). Contingency Table Analysis: Methods and Implementation Using R. Birkhäuser/Springer, New York. Matusita, K. (1954). On the estimation by the minimum distance method. Annals of the Institute of Statistical Mathematics 5, 59–65. Matusita, K. (1955). Decision rules based on the distance, for problems of fit, two samples, and estimation. Annals of the Institute of Statistical Mathematics 26, 631–640. R Core Team (2023). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.R-project.org/ Read, T.R.C. and Cressie, N. (1988). Goodness-of-Fit Statistics for Discrete Multivariate Data. Springer, New York. Riedwyl, H. and Schüpbach, M. (1983). Siebdiagramme: Graphische Darstellung von Kontingenztafeln. Technical Report 12, Institute for Mathematical Statistics. University of Bern, Bern, Switzerland. Riedwyl, H. and Schüpbach, M. (1994). Parquet Diagram to Plot Contingency Tables. In F. Faulbaum, ed., Softstat ’93: Advances in Statistical Software, 293–99. New York: Gustav Fischer. Sugano, K., Kinoshita, Y., Miwa, H. and Takeuchi, T. (2012). Randomised clinical trial: esomeprazole for the prevention of nonsteroidal anti-in ammatory drug-related peptic ulcers in Japanese patients. Alimentary Pharmacology and Therapeutics 36, 115–125. Stuart, A. (1955). A test for homogeneity of the marginal distributions in a two-way classification. Biometrika 42, 412–416. Tomizawa, S. (1995). Measures of departure from marginal homogeneity for contingency tables with nominal categories. Journal of the Royal Statistical Society, Series D 44, 425–439. Tomizawa, S., Miyamoto, N. and Ashihara, N. (2003). Measure of departure from marginal homogeneity for square contingency tables having ordered categories. Behaviormetrika 30, 173–193. Yamamoto, K., Ando, S. and Tomizawa, S. (2011). A measure of departure from average marginal homogeneity for square contingency tables with ordered categories. Revstat 9, 115–126. Appendix 1 Assuming that { G_1(i) + G_2(i)≠ 0 }, the power-divergence-type measure representing the degree of departure from MH proposed by Tomizawa, Miyamoto and Ashihara (2003) for λ > -1 is given as Φ^(λ) = λ(λ+1)/2^λ-1∑^r-1_i=1( G^∗_1(i) + G^∗_2(i))                × I^(λ)_i ( { G^c_1(i), G^c_2(i)} ; {1/2, 1/2}), where I^(λ)_i (·, ·) = 1/λ(λ+1)[ G^c_1(i){( G^c_1(i)/1/2)^λ - 1 }                       + G^c_2(i){( G^c_2(i)/1/2)^λ - 1 }], and the value at λ=0 is taken to the limit as λ→ 0. Note that I^(λ)_i (·, ·) is the power-divergence between two distributions (see Cressie and Read, 1984; Read and Cressie, 1988, p.15). Namely, I^(0)_i (·, ·) = G^c_1(i)log( G^c_1(i)/1/2) + G^c_2(i)log( G^c_2(i)/1/2). 
This measure has the following characteristics: (i) Φ^(λ) = 0 if and only if the MH model holds (ii) Φ^(λ) = 1 if and only if the degree of departure from MH is a maximum, in the sense that G^c_1(i)=1 (then G^c_2(i)=0) or G^c_1(i)=0 (then G^c_2(i)=1), for i = 1, …, r-1 Second, assuming that { G_1(i) + G_2(i)≠ 0 }, the measure representing two kinds of marginal inhomogeneities proposed by Yamamoto, Ando and Tomizawa (2011) is given as Ψ = 4/π∑^r-1_i=1( G^∗_1(i) + G^∗_2(i)) ( θ_i - π/4), where θ_i = cos^-1( G_1(i)/√( G^2_1(i) + G^2_2(i))). This measure has the following characteristics: (i) Ψ = -1 if and only if there is a structure of maximum upper-marginal inhomogeneity (ii) Ψ = 1 if and only if there is a structure of maximum lower-marginal inhomogeneity (iii) If the MH model holds then Ψ = 0, but the converse does not hold Yamamoto et al. (2011) defined this structure (Ψ = 0) as the average MH model. Third, assuming that { G_1(i) + G_2(i)≠ 0 }, the two-dimensional measure that can simultaneously analyze the degree and directionality of departure from MH proposed by Ando, Noguchi, Ishii and Tomizawa (2021) is given as τ = [ Φ^(0); Ψ ]. This two-dimensional measure has the following characteristics: (i) τ = (0, 0)^t if and only if the MH model holds (ii) τ = (1, -1)^t if and only if there is a structure of maximum upper-marginal inhomogeneity (iii) τ = (1, 1)^t if and only if there is a structure of maximum lower-marginal inhomogeneity Appendix 2 Using the delta method, √( n)(Γ̂ - Γ) has an asymptotic variance σ^2[ Γ ], which is given as σ^2[ Γ ] = ∑^r-1_k=1∑^r_l=k+1( p_kl D^2_kl + p_lk D^2_lk), where D_kl = 1/Δ√(2+√(2)/2)∑^r-1_i=1 I(k ≤ i, l ≥ i+1) A_i - (l-k)/ΔΓ, D_lk = 1/Δ√(2+√(2)/2)∑^r-1_i=1 I(k ≤ i, l ≥ i+1) B_i - (l-k)/ΔΓ, A_i = 1/2√(C_i)( 2C_i + υ_1(i)G^c_2(i)/√(G^c_1(i)) - υ_2(i)√(G^c_2(i))), B_i = 1/2√(C_i)( 2C_i - υ_1(i)√(G^c_1(i)) + υ_2(i)G^c_1(i)/√(G^c_2(i))), C_i = υ_1(i)^2 + υ_2(i)^2, and I(·) is indicator function.
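The following minimal R sketch illustrates the plug-in version of the asymptotic variance above and the resulting approximate confidence interval. It is a sketch under the stated formulas, not the authors' implementation: the helper name mh_ci() and the 4 × 4 count table are hypothetical, and the code assumes G_1(i)+G_2(i) > 0 and C_i > 0 for every i (it does not implement the Haldane-prior smoothing mentioned in Section 3).

mh_ci <- function(tab, level = 0.95) {
  # plug-in delta-method variance and approximate CI for Gamma-hat, following Appendix 2
  n <- sum(tab); p <- tab / n; r <- nrow(p)
  G1 <- sapply(1:(r - 1), function(i) sum(p[1:i, (i + 1):r]))
  G2 <- sapply(1:(r - 1), function(i) sum(p[(i + 1):r, 1:i]))
  Gc1 <- G1 / (G1 + G2); Gc2 <- G2 / (G1 + G2)
  u1 <- sqrt(Gc1) - sqrt(0.5); u2 <- sqrt(Gc2) - sqrt(0.5)
  Ci <- u1^2 + u2^2
  gam <- sqrt((2 + sqrt(2)) / 2 * Ci)
  Delta <- sum(G1 + G2)
  Gamma <- sum((G1 + G2) / Delta * gam)
  A <- (2 * Ci + u1 * Gc2 / sqrt(Gc1) - u2 * sqrt(Gc2)) / (2 * sqrt(Ci))  # A_i
  B <- (2 * Ci - u1 * sqrt(Gc1) + u2 * Gc1 / sqrt(Gc2)) / (2 * sqrt(Ci))  # B_i
  cc <- sqrt((2 + sqrt(2)) / 2)
  sigma2 <- 0
  for (k in 1:(r - 1)) for (l in (k + 1):r) {
    Dkl <- cc / Delta * sum(A[k:(l - 1)]) - (l - k) / Delta * Gamma   # sum over i with k <= i <= l-1
    Dlk <- cc / Delta * sum(B[k:(l - 1)]) - (l - k) / Delta * Gamma
    sigma2 <- sigma2 + p[k, l] * Dkl^2 + p[l, k] * Dlk^2
  }
  se <- sqrt(sigma2 / n)                             # estimated standard error of Gamma-hat
  z <- qnorm(1 - (1 - level) / 2)
  c(Gamma = Gamma, lower = Gamma - z * se, upper = Gamma + z * se)
}

tab <- matrix(c(10, 6, 2, 1,
                3, 12, 5, 2,
                1, 4, 14, 6,
                0, 1, 3, 9), nrow = 4, byrow = TRUE)
round(mh_ci(tab), 3)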
http://arxiv.org/abs/2306.11855v1
20230620192018
A Model-free Closeness-of-influence Test for Features in Supervised Learning
[ "Mohammad Mehrabi", "Ryan A. Rossi" ]
cs.LG
[ "cs.LG", "stat.ME" ]
[ A Model-free Closeness-of-influence Test for Features in Supervised Learning Mohammad Mehrabisch Ryan A. Rossicomp schDepartment of Data Sciences and Operations, University of Southern California, Los Angeles, USA compAdobe, San Jose, USA Mohammad [email protected] feature influence, nonparametric testing 0.3in ] Understanding the effect of a feature vector x∈^d on the response value (label) y∈ is the cornerstone of many statistical learning problems. Ideally, it is desired to understand how a set of collected features combine together and influence the response value, but this problem is notoriously difficult, due to the high-dimensionality of data and limited number of labeled data points, among many others. In this work, we take a new perspective on this problem, and we study the question of assessing the difference of influence that the two given features have on the response value. We first propose a notion of closeness for the influence of features, and show that our definition recovers the familiar notion of the magnitude of coefficients in the parametric model. We then propose a novel method to test for the closeness of influence in general model-free supervised learning problems. Our proposed test can be used with finite number of samples with control on type I error rate, no matter the ground truth conditional law ℒ(Y|X). We analyze the power of our test for two general learning problems i) linear regression, and ii) binary classification under mixture of Gaussian models, and show that under the proper choice of score function, an internal component of our test, with sufficient number of samples will achieve full statistical power. We evaluate our findings through extensive numerical simulations, specifically we adopt the datamodel framework (Ilyas, et al., 2022) for CIFAR-10 dataset to identify pairs of training samples with different influence on the trained model via optional black box training mechanisms. § INTRODUCTION In a classic supervised learning problem, we are given a dataset of n iid data points {(x_i,y_i)}_i=1:n with feature vectors x∈^d and response value (label) y∈. From the inferential point of view, understanding the influence of each individual feature i∈{1,…,d} on y is of paramount importance. Considering a parametric family of distributions for (Y|X) is among the most studied techniques for this problem. In this setting, the influence of each feature can be seen by their corresponding coefficient value in the parametric model. Essentially such methods can result in spurious statistical findings, mainly due to model misspecification, where in the first place the ground-truth data generating law (Y|X) does not belong to the considered parametric family. A natural remedy for this problem is to relax the parametric family assumption, removing concerns about model misspecification. Besides the difficulties with the new model-free structure of the problem, we need a new notion to capture the influence of features, as there is no longer a coefficient vector as per class of parametric models. In this paper, we follow the model-free structure, but take a new perspective on the generic problem of investigating the influence of features on the response value. In particular, as a first step towards this notoriously hard question under no class of parametric distribution assumption or whatsoever, we are specifically interested in assessing the closeness of influence of features. 
For this end, we posit the following fundamental question: (*) In a general model-free supervised learning problem, for two given features, is it possible to assess the closeness of their influence on the response value (label) in a statistically sound way? In this paper, we answer question (*) affirmatively. We characterize a notion of closeness for the influence of features on y under the general model-free framework. We show that this notion aligns perfectly well with former expectations in parametric models, where small difference in the coefficient values imply close influence on the response value. We then cast the closeness of influence question as a hypothesis testing problem, and show that we can control associated type I error rate with finite number of samples. §.§ Motivation Behind Question (*) Beyond the inferential nature of Question (*) that helps to better understand the data-generating process of on-hand data, being able to answer this question has a myriad of applications for other classic machine learning tasks. In fact, inspired by the recent advancements in interpretable machine learning systems, it is desired to strike a balance between model flexibility in capturing the ground-truth law ℒ(Y|X) and using few number of explanatory variables. For this goal, feature aggregation has been used to distill a large amount of feature information into a smaller number of features. In several parametric settings, features with equal coefficients are naturally grouped together, e.g, in linear regression new feature x_1+x_2 is considered rather than (x_1,x_2), in case that x_1,x_2 have equal corresponding regression coefficients <cit.>. In addition, identifying features with near influence on the response value can be used for tree-based aggregation schemes <cit.>. This is of paramount importance in learning problems involving rare features, such as the count of microbial species <cit.>. In addition, in many learning problems, an honest comprehensive assessment for characterizing the behavior of Y with respect to a certain attribute A is desired. This can be used to assess the performance of model with respect to a sensitive attribute (fair machine learning), or to check if two different treatments (different values of A) have close influence on potential outcomes. §.§ Related Work In machine learning, the problem of identifying a group of features that have the largest influence on the response value is often formulated as variable selection. With a strong parametric assumption, the conditional law (Y|X) is considered to belong to a known class of parametric models, such as linear regression. For variable selection in the linear regression setting, the LASSO <cit.> and Dantzig selector <cit.> are the most widely used. In fact, there are several other works for variable selection in the linear regression setting with output solutions satisfying certain structures, such as <cit.>. There has been another complimentary line in the past years from model-X perspective <cit.>. In this setting, despite the classical setup, in which a strong parametric assumption is considered on the conditional law, it shifts the focus to the feature distribution X and assumes an extensive knowledge on the distribution of the features. This setting arises naturally in many learning problems. For example, we can get access to distributional information on features in learning scenarios where the sampling mechanism can be controlled, e.g,. 
in datamodel framework <cit.>, and gene knockout experiments <cit.>. Other settings include problems where an abundant number of unlabeled data points (unsupervised learning) are available. The other related line of work is to estimate and perform statistical inference on certain statistical model parameters. Specifically, during the past few years, there have been several works <cit.> for inferential tasks on low-dimensional components of model parameters in high-dimensional (d>n) settings of linear and generalized linear models. Another complementary line of work, is the conditional independence testing problem X_j Y| X_-j to test if a certain feature X_j is independent of the response value Y, while controlling for the effect of the other features. This problem has been studied in several recent works for both parametric <cit.>, and model-X frameworks <cit.>. Here are couple of points worth mentioning regarding the scope of our paper. * (Feature selection methods) However Question (*) has a complete different nature from well-studied variable selection techniques– with the goal of removing redundant features, an assessment tool provided for (*) can be beneficial for post-processing of feature selection methods as well. Specifically, we expect that two redundant features have close (zero) influence on the response value, therefore our closeness-of-influence test can be used to sift through the set of redundant features and potentially improve the statistical power of the baseline feature selection methods. * (Regression models) We would like to emphasize that however fitting any class of regression models would yield an estimate coefficient vector, but comparing the magnitude of coefficient values for answering Question (*) is not statistically accurate and would result in invalid findings, mainly due to model misspecification. Despite such inaccuracies of fitted regression models, our proposed closeness-of-influence test works under no parametric assumption on the conditional law. * (Hardness of non-parametric settings) The finite-sample guarantee on type-I error rate for our test does not come free. Specifically, this guarantee holds when certain partial knowledge on the feature distributions (X) is known. This setup is often referred as model-X framework <cit.>, where on contrary to the classic statistic setups, the conditional law (Y|X) is optional, and adequate amount of information on features distribution (X) is known. Such requirements for features distribution makes the scope of our work distant from completely non-parametric problems. §.§ Summary of contributions and organization In this work, we propose a novel method to test the closeness of influence of a given pair of features on the response value. Here is the organization of the three major parts of the paper: * In Section <ref>, we propose the notion of symmetric influence and formulate the question (*) as a tolerance hypothesis testing problem. We then introduce the main algorithm to construct the test statistic, and the decision rule. We later show that the type-I error is controlled for finite number of data points. * In Section <ref>, for two specific learning problems: 1) linear regression setup, and 2) binary classification under a mixture of Gaussians, we analyze the statistical power of our proposed method. Our analysis reveals guidelines on the choice of the score function, that is needed for our procedure. 
* In Section <ref>, we combine our closeness-of-influence test with datamodels <cit.> to study the influence of training samples on the trained black box model. We consider CIFAR-10 dataset and identify several pairs of training samples with different influence on the output models. Finally, we empirically evaluate the performance of our method in several numerical experiments, we show that our method always controls type-I error with finite number of data points, while it can achieve high statistical power. We end the paper by providing concluding remarks and interesting venues for further research. §.§ Notation For a random variable X, we let (X) denote the probability density function of X. For two density functions p,q let d_(p,q) denote the total variation distance. We use Φ(t) and φ(t) respectively for cdf and pdf of standard normal distribution. For and integer n let [n]={1,…,n} and for a vector x∈^d and integers i,j∈ [d] let x_(i,j) be a vector obtained by swapping the coordinates i and j of x. We let (μ,Σ) denote the probability density function of a multivariate normal distribution with mean μ and covariance matrix Σ. § PROBLEM FORMULATION We are interested in investigating that if two given features i,j have close influence on the response value y. Specifically, in the case of the linear regression setting (Y|X)=(X^þ,σ^2), two features i and j have an equal effect on the response variable y, if the model parameter þ has equal coordinates in i and j. In this parametric problem, the close influence analysis can be formulated as the following hypothesis testing problem H_0: |þ_i-þ_j| ≤τ , H_A :|þ_i-þ_j| > τ . In practice, the considered parametric model may not hold, and due to model misspecification, the reported results are not statistically sound and accurate. Our primary focus is to extend the definition of close influence of features on the response value to a broader class of supervised learning problems, ideally with no parametric assumption on (Y|X) (model-free). For this end, we first propose the notion of symmetric influence. [Symmetric influence] We say that two features i,j ∈ [d] have a symmetric influence on the response value y if the conditional law p_Y|X does not change once features i and j are swapped in x. More precisely, if (Y|X)=(Y|X_(i,j)), where X_(i,j) is obtained from swapping coordinates i and j in X. While the perfect alignment between density function p_Y|X and p_Y|X_(i,j) is considered as equal influence, it is natural to consider small (but nonzero) average distance of these two density functions as having close influence of features i,j on the response value. Inspired by this observation, we cast the problem of closeness-of-influence testing as a tolerance hypothesis testing problem (<ref>). Before further analyzing this extended definition, for two simple examples we show that the symmetric influence definition recovers the familiar equal effect notion in parametric problems. It is worth noting that this result can be generalized to a broader class of parametric models. Consider the logistic model (Y=1|X=x)=1/1+exp(-x^þ). In this model, features i and j have symmetric influence on y if and only if þ_i=þ_j. In addition, for the linear regression setting y=x^þ+ with ∼(0,σ^2), features i and j have symmetric influence on y if and only if þ_i=þ_j. We refer to Appendix A for proofs of all propositions and theorems. 
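As a purely illustrative complement to the definition of symmetric influence and the proposition above, the following Python snippet estimates the average total variation distance E[d_TV(p_Y|X, p_Y|X_(i,j))] for a logistic model by Monte Carlo, using the fact (also used in the appendix proofs) that the TV distance between two Bernoulli laws is the absolute difference of their success probabilities. The feature law, sample size, and coefficient values are arbitrary choices made only for this illustration.

```python
import numpy as np

def expected_tv_logistic(theta, i, j, n_mc=200_000, rng=0):
    """Monte Carlo estimate of E[ d_TV(p_{Y|X}, p_{Y|X_swap(i,j)}) ] for the
    logistic model with standard Gaussian features; d_TV between two Bernoulli
    laws equals the absolute difference of their success probabilities."""
    rng = np.random.default_rng(rng)
    X = rng.standard_normal((n_mc, len(theta)))
    Xs = X.copy()
    Xs[:, [i, j]] = Xs[:, [j, i]]            # swap coordinates i and j
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    return np.mean(np.abs(sig(X @ theta) - sig(Xs @ theta)))

# equal coefficients on coordinates (0, 1): symmetric influence, estimate is ~0
print(expected_tv_logistic(np.array([1.0, 1.0, -0.5]), 0, 1))
# unequal coefficients: a strictly positive departure
print(expected_tv_logistic(np.array([1.0, 2.0, -0.5]), 0, 1))
```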
§.§ Closeness-of-influence testing Inspired by the definition of symmetric influence given in Definition (<ref>), we formulate the problem of testing the closeness of the influence of two features i,j on y as the following: ℋ_0 : [ d_(p_Y|X, p_Y|X_(i,j))] ≤τ , ℋ_A : [ d_(p_Y|X, p_Y|X_(i,j))] > τ . Specifically, this hypothesis testing problem allows for general non-negative τ values. We can test for symmetric influence by simply selecting τ=0. In this case, we must have p_Y|X=p_Y|X_(i,j) almost surely (with respect to some measure on ). For better understanding of the main quantities in the left-hand-side of (<ref>), it is worth to note that p_Y|X_(i,j)(y|x)=p_Y|X(y|x_(i,j)) and the quantity of interest can be written as [ d_(p_Y|X, p_Y|X_(i,j))] =1/2∫|p_Y|X(y|x)-p_Y|X(y|x_(i,j))|p_X(x) y x . We next move to the formal process to construct the test statistics of this hypothesis testing problem. Test statistics. We first provide high-level intuition behind the test statistics used for testing (<ref>). In a nutshell, for two i.i.d. data points (x^(1),y^(1)) and (x^(2),y^(2)), if the density functions p_Y|X is close to p_Y|X_(i,j), then for an optional score functions applied on (x^(1),y^(1)) and (x_(i,j)^(2),y^(2)), with equal chance (50%) one should be larger than the other one. This observation is subtle though. Since we intervene in the features of the second data point (by swapping its coordinates), this shifts the features distribution, thereby the joint distribution of (x^(1),y^(1)) and (x_(i,j)^(2),y^(2)) are not equal. This implies that we must control for such distributional shifts on features as well. The formal process for constructing the test statistics U_n is given in Algorithm <ref>. We next present the decision rule for hypothesis problem (<ref>). Decision rule. For the data set (,) of size n and test statistic U_n as per Algorithm <ref> at significance level α consider the following decision rule ψ_n (,) =(|U_n-1/2|≥τ+ τ_X+√(log (2/α)/n)) , with τ_X being an upper bound on the total variation distance between the original feature distribution, and the obtained distribution by swapping coordinates i,j. More precisely, for two independent features vectors X^(1),X^(2) let be such that ≥ d_((X^(1)) ,(X^(2)_(i,j)) ). In fact, in several learning problems when features have a certain symmetric structure, the quantity τ_X is zero. For instance, when features are multivariate Gaussian with isotropic covariance matrix. More on this can be seen in Section <ref>. Size of the test. In this section, we show that the obtained decision rule <ref> has control on type I error with finite number of samples. More precisely, we show that the probability of falsely rejecting the null hypothesis (<ref>) can always be controlled such that it does not exceed a predetermined significance level α. Under the null hypothesis (<ref>), decision rule (<ref>) has type-I error smaller than α. More precisely _ℋ_0(ψ(,)=1)≤α . Based on decision rule (<ref>), we can construct p-values for the hypothesis testing problem (<ref>). The next proposition gives such formulation. Consider p= 1 , |U_n-1/2|≤τ+τ_X , 1∧η_n(U_n,τ,τ_X) , otherwise , with function η_n(u,τ_1,τ_2) being defined as η_n(u,τ_1,τ_2)=2exp(-n(|u-1/2|-τ_1-τ_2)^2) . In this case, the p-value p is super-uniform. More precisely, under the null hypothesis (<ref>) for every α∈ [0,1] we have (p≤α)≤α . 
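To make the construction concrete, here is a minimal Python sketch of the test. Algorithm <ref> itself is not reproduced in this excerpt, so the pairing scheme below (a random split of the data into disjoint pairs) is an assumption consistent with the description above; the orientation of the score comparison is immaterial since only |U_n - 1/2| enters the decision rule, and the threshold and p-value follow the displayed formulas with n the (even) total sample size.

```python
import numpy as np

def closeness_of_influence_test(X, y, i, j, score, tau=0.0, tau_X=0.0,
                                alpha=0.05, rng=None):
    """Sketch of the closeness-of-influence test for features i and j.
    `score(x, y)` is an arbitrary score function T chosen by the user.
    Returns the statistic U_n, the reject/accept decision, and a p-value."""
    rng = np.random.default_rng(rng)
    n = (X.shape[0] // 2) * 2                    # use an even number of points
    idx = rng.permutation(n)
    first, second = idx[: n // 2], idx[n // 2:]

    X_swapped = X[second].copy()
    X_swapped[:, [i, j]] = X_swapped[:, [j, i]]  # swap coordinates i and j

    T1 = np.array([score(x, t) for x, t in zip(X[first], y[first])])
    T2 = np.array([score(x, t) for x, t in zip(X_swapped, y[second])])
    wins = (T1 > T2).astype(float)
    ties = (T1 == T2)
    wins[ties] = rng.integers(0, 2, size=ties.sum())   # break ties at random

    U = wins.mean()
    reject = abs(U - 0.5) >= tau + tau_X + np.sqrt(np.log(2 / alpha) / n)
    gap = abs(U - 0.5) - tau - tau_X
    pval = 1.0 if gap <= 0 else min(1.0, 2 * np.exp(-n * gap ** 2))
    return U, bool(reject), pval
```

For example, with the residual score used later in the experiments, one can pass score = lambda x, t: abs(t - x @ theta_hat) for a given model estimate theta_hat.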
§.§ Effect of feature swap on features distribution From the formulation of the decision rule given in (<ref>), it can be seen that an upper bound on the total variation distance between the density functions of X^(1) and X^(2)_(i,j) is required. This quantity shows up as τ_X in (<ref>). Regarding this change in the distribution of X, two points are worth mentioning. First, in several classes of learning problems the feature vectors follow a symmetric structure which renders the quantity τ_X zero; for instance, when features have an isotropic Gaussian distribution (Proposition <ref>), or in the datamodel sampling scheme <cit.>, for which the formal statement is given in Proposition <ref>. Second, the value of τ_X can be computed when an adequate amount of information on the distribution of X is available, the so-called model-X framework <cit.>. We also emphasize that direct access to the entire density function p_X is not needed; an upper bound on the quantity d_((X^(1)), (X^(2)_(i,j))) is sufficient. In the next proposition, for the case that features follow a general multivariate Gaussian distribution (μ,Σ), we provide a valid closed-form value for τ_X. Consider a multivariate Gaussian distribution with mean vector μ∈^d and covariance matrix Σ∈^d× d. Then, for two features i and j, the following holds: d_((X^(1)),(X^(2)_𝗌𝗐𝖺𝗉(i,j))) ≤ 1/2[ tr(-I_d+P_ijΣ^-1P_ijΣ) + (μ-P_ijμ)^⊤Σ^-1(μ-P_ijμ) ]^1/2 , where P_ij is the permutation matrix that swaps the coordinates i and j. More precisely, for every x∈^d we have P_ijx=x_(i,j). It is easy to observe that in the case of an isotropic Gaussian distribution with zero mean, we can choose τ_X=0. More concretely, when μ=0 and Σ=σ^2I, Proposition <ref> reads τ_X=0. We next consider a setting with binary feature vectors that arises naturally in datamodels <cit.> and will be used later in the experiments of Section <ref>. Consider a learning problem with binary feature vectors x∈{0,1}^d. For a positive integer m, we suppose that x is sampled uniformly at random from the space S_m={x∈{0,1}^d: ∑ x_i=m}. This means that the sample has binary entries with exactly m non-zero coordinates. Then, in this setting, for two independent feature vectors x^(1),x^(2), the following holds d_((X^(1)),(X^(2)_𝗌𝗐𝖺𝗉(i,j)))=0 . § MULTIPLE TESTING In this section, we consider the problem of assessing the symmetric influence of multiple pairs of features. Since we are testing multiple hypotheses simultaneously, we have to control for the effect of multiple testing. In statistics, a large body of work has been developed for this problem to control the false discovery rate (FDR) at a given significance level. Specifically, the FDR is defined as the expected ratio of falsely rejected null hypotheses to the total number of rejections. We first use the generic Benjamini–Yekutieli procedure to control the FDR. This procedure uses the p-values obtained in (<ref>). More precisely, for the case of m tests (i.e., testing m pairs for symmetric influence), we first sort the p-values as p_(1)≤ p_(2)≤…≤ p_(m). In the next step, for significance level α define k=max{1≤ t≤ m: p_(t)≤ tα/(m c_m)} , with c_m=∑_i=1^m 1/i. Finally, all hypotheses with corresponding p-values p_(t) for t≤ k are rejected. Next, we provide another procedure for multiple testing that is specifically tailored to the symmetric influence problem with τ=0. We then later compare the performance of this method with the generic Benjamini–Yekutieli procedure. 
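Before turning to that tailored procedure, two small helpers sketch the pieces introduced so far: the Gaussian τ_X bound (with the trace written explicitly, which is what the Pinsker argument in the appendix yields for the displayed expression) and the Benjamini–Yekutieli cutoff. These are illustrative implementations whose names are not from the original paper.

```python
import numpy as np

def tau_X_gaussian(mu, Sigma, i, j):
    """Upper bound on d_TV(L(X), L(X_swap(i,j))) for X ~ N(mu, Sigma):
    0.5 * sqrt( tr(-I + P Sigma^{-1} P Sigma) + (mu - P mu)^T Sigma^{-1} (mu - P mu) )."""
    d = len(mu)
    P = np.eye(d)
    P[[i, j]] = P[[j, i]]                       # permutation matrix swapping i and j
    Sinv = np.linalg.inv(Sigma)
    dmu = mu - P @ mu
    inside = np.trace(-np.eye(d) + P @ Sinv @ P @ Sigma) + dmu @ Sinv @ dmu
    return 0.5 * np.sqrt(max(inside, 0.0))

def benjamini_yekutieli(pvals, alpha=0.1):
    """Reject all hypotheses with p_(t) <= t*alpha/(m*c_m), up to the largest such rank."""
    p = np.asarray(pvals, float)
    m = len(p)
    c_m = np.sum(1.0 / np.arange(1, m + 1))
    order = np.argsort(p)
    ok = np.nonzero(p[order] <= np.arange(1, m + 1) * alpha / (m * c_m))[0]
    reject = np.zeros(m, dtype=bool)
    if ok.size:
        reject[order[: ok.max() + 1]] = True
    return reject
```

For μ=0 and Σ=σ^2 I the first helper returns 0, in line with the isotropic-Gaussian remark above. The tailored procedure announced above, described next, bypasses explicit p-values and works directly with signed statistics.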
This new multiple testing procedure is inspired by the selective sequential step-up procedure of Barber and Candès, used in the knockoff filter. The next theorem elaborates on this process. Split the dataset into two parts (X_1, y_1) and (X_2, y_2), and let A={a_ℓ:=(i_ℓ,j_ℓ)} be a set of pairs with i_ℓ,j_ℓ∈{1,2,…,d}. We are interested in testing the null hypothesis (<ref>) for each pair. Define X_2,ℓ as X_2 with the coordinates in a_ℓ swapped, and let W_ℓ=T(X_2,y_2)-T(X_2,ℓ,y_2). Then for q∈ (0,1) let τ=inf{t>0: (1+#{ℓ: W_ℓ≤ -t })/#{ℓ: W_ℓ≥ t }≤ q } . By defining S={ℓ: W_ℓ≥τ}, we have 𝔼[ |S∩ℋ_0| / (|S|∨ 1) ] ≤ q , where ℋ_0 denotes the set of pairs for which the null hypothesis (<ref>) holds. More precisely, ℋ_0={(i_ℓ,j_ℓ): features i_ℓ and j_ℓ have symmetric influence} . § POWER ANALYSIS In this section, we provide a power analysis for our method. For a fixed score function T:𝒳×𝒴→ℝ and two i.i.d. data points (x^(1),y^(1)) and (x^(2),y^(2)), consider the following cumulative distribution functions: F_T(t) =(T(X^(1),Y^(1))≤ t) , G_T(t) =(T(X^(2)_(i,j),Y^(2))≤ t) . In the next theorem, we show that the power of our test depends on the average deviation of the function F_T∘ G_T^-1 from the identity mapping on the interval [0,1]. Consider the hypothesis testing problem (<ref>) at significance level α with n data points (,). In addition, suppose that the score function T:𝒳×𝒴→ℝ satisfies the following condition for some β∈ (0,1): | ∫_0^1 (F_T(G_T^-1(u))-u) du | ≥ρ_n(α,β,τ)+τ_X , with ρ_n(α,β,τ)=2exp(-nβ^2)+√(log(2/α)/n)+τ. In this case, the decision rule (<ref>) used with the score function T has type II error not exceeding β. More precisely (Ψ_n(,)=1) ≥ 1-β . The function F_T∘ G_T^-1 is called the ordinal dominance curve (ODC) <cit.>. It can be seen that the ODC is the population counterpart of the PP plot. A direct consequence of the above theorem is that if the ODC has a larger distance from the identity map i(u)=u, then it is easier for our test to flag smaller gaps between the influence of features. We next focus on two learning problems: 1) the linear regression setting, and 2) binary classification under Gaussian mixture models. For each problem, we use Theorem <ref> and provide lower bounds on the statistical power of our closeness-of-influence test. Linear regression setup. In this setting, we suppose that y=x^þ^*+ for ∼(0,σ^2) and feature vectors drawn i.i.d. from a multivariate normal distribution (0,I_d). Since the features are isotropic Gaussian with zero mean, by an application of Proposition <ref> we know that τ_X is zero. In the next theorem, we provide a lower bound on the statistical power of our test for the hypothesis testing problem (<ref>) with n data points and the score function T(x,y)=|y-x^| for some model estimate . We show that in this example, the power of the test highly depends on the value |þ^*_i-þ^*_j| and the quality of the model estimate . Indeed, the higher the contrast between the coefficient values þ^*_i and þ^*_j, the easier it is for our test to reject the null hypothesis. Under the linear regression setting y=x^þ^*+ with ∼(0,σ^2) and with feature vectors coming from a normal population x∼(0,I_d), consider the hypothesis testing problem (<ref>) for features i and j with τ∈ (0,1). We run Algorithm <ref> at the significance level α with the score function T(x,y)=|y-x^| for a model estimate ∈^d. For β∈ (0,1) such that tan((π/2)ρ_n(α,β,τ))≤1/2, suppose that the following condition holds |þ^*_i-þ^*_j|≥ 2tan((π/2)ρ_n(α,β,τ))/(1-2tan((π/2)ρ_n(α,β,τ))) · (σ^2+-þ^*_2^2 )/|_i-_j| , for ρ_n(α,β,τ) as per Theorem <ref>. Then, the type II error is bounded by β. More precisely, we have (Ψ_n(,)=1)≥ 1-β . 
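As a quick numerical companion to the theorem, the following sketch evaluates ρ_n(α,β,τ) and the smallest coefficient gap |þ^*_i-þ^*_j| satisfying the displayed condition. The quantities ‖θ̂-θ^*‖_2^2 and |θ̂_i-θ̂_j| must be supplied by the user, the grouping of the condition follows the reading above, and the example values at the end are arbitrary.

```python
import numpy as np

def rho_n(n, alpha, beta, tau):
    """rho_n(alpha, beta, tau) = 2 exp(-n beta^2) + sqrt(log(2/alpha)/n) + tau."""
    return 2 * np.exp(-n * beta ** 2) + np.sqrt(np.log(2 / alpha) / n) + tau

def min_detectable_gap(n, alpha, beta, tau, sigma2, est_err_sq, est_gap):
    """Smallest |theta*_i - theta*_j| fulfilling the linear-regression condition,
    for the score T(x, y) = |y - x @ theta_hat|; returns inf when the requirement
    tan((pi/2) * rho_n) <= 1/2 fails and the theorem does not apply."""
    t = np.tan(np.pi / 2 * rho_n(n, alpha, beta, tau))
    if t >= 0.5:
        return np.inf
    return (2 * t / (1 - 2 * t)) * (sigma2 + est_err_sq) / est_gap

# e.g. n = 1000, alpha = beta = 0.1, tau = 0.05, sigma^2 = 1,
# ||theta_hat - theta*||^2 = 1 and |theta_hat_i - theta_hat_j| = 1
print(min_detectable_gap(1000, 0.1, 0.1, 0.05, 1.0, 1.0, 1.0))
```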
We refer to the Appendix for the proof of Theorem <ref>. The right-hand side of the condition in the theorem can be decomposed into two major parts. The first part involves the problem parameters, such as the number of samples n and the error tolerance values α and β; for a moderately large number of samples n and a small tolerance value τ, this quantity becomes small. On the other hand, the magnitude of the second part depends highly on the quality of the model estimate and on the inherent noise level σ^2, which indicates how structured the learning problem is. Another interesting observation concerns |_i-_j|: small values of this quantity render the problem of detecting deviations from symmetric influence harder. This conforms to our expectation, given that in the extreme scenario _i=_j it is impossible for the score function to discern þ^*_i from þ^*_j, because of the additive nature of the considered score function. Binary classification. In this section, we provide a power analysis of our method for a binary classification setting. Specifically, we consider binary classification under a mixture of Gaussians model. More precisely, in this case the data generating process is given by y=+1 with probability q and y=-1 with probability 1-q, with x∼(yμ,I_d). We consider the influence testing problem (<ref>) with τ=0. In the next theorem, we provide a lower bound on the statistical power of our method under this learning setup. Under the binary classification setup (<ref>), consider the hypothesis testing problem (<ref>) for τ=0. We run Algorithm <ref> with the score function T(x,y)=yx^þ at the significance level α, and suppose that for some nonnegative value β the following holds |μ_i-μ_j|≥Φ^-1(1/2+ρ_n(α,β,0)) √(2)_2/|_i-_j| , where ρ_n(α,β,τ) is given as per Theorem <ref>. Then the type-II error in this case is bounded by β. More concretely, we have (Ψ_n(,)=1)≥ 1-β . It is important to note that in this particular setting, the features do not follow a Gaussian distribution with zero mean. Instead, they are sampled from a mixture of Gaussian distributions with means μ and -μ. The reason why τ_X=0 can be utilized is therefore not immediately obvious. However, we demonstrate that when testing for τ=0, under the null hypothesis it is necessary that μ_i=μ_j, in which case the distribution of the features remains unchanged when the coordinates i and j are swapped. As a result, we can employ τ_X=0 in this scenario. This argument is further elaborated upon in the proof of Theorem <ref>. From the above expression it can be observed that, for a sufficiently large number of data points n and a small value τ, the value Φ^-1(1/2+ρ_n) becomes small and converges to zero. In addition, it can be inferred that an ideal model estimate must have a small norm and a high contrast between the values _i and _j. An interesting observation concerns the role of the other coordinates of the model estimate. In fact, for the choice of the score function T(x,y)=yx^, the support of the model estimate should be a subset of the two features i and j, since this decreases the norm and increases the value of |_i-_j|. § EXPERIMENTS In this section, we evaluate the performance of our proposed method for identifying symmetric influence across features. We start with the isotropic Gaussian model for the feature vectors. More precisely, we consider x∼(0,I_d) with d=10. 
In this case, we have τ_X=0 and we consider the hypothesis testing problem (<ref>) for τ=0 (symmetric influence). Size of the test. We start by examining the size of our proposed method. To this end, we consider the conditional law y|x∼(x^ S x,1), for a positive semi-definite matrix S with entries S_i,j=1+1{i=j}. The conditional mean of y|x is a quadratic form, and it is easy to observe that in this case, for every two features i,j∈{1,…,10}, we have x^ S x=x_(i,j)^ S x_(i,j), and therefore the null hypothesis holds. We test for the symmetric influence of each pair of features (45 tests in total). We run our method with the score function T(x,y)=|y-^ x| with ∼(0,I_d). The estimate is fixed across all 45 tests. We suppose that we have access to 1000 data points, and we consider three different significance levels α=0.1, 0.15, and 0.2. The results of this experiment can be seen in Figure <ref>, where the reported numbers (rejection rates) are averaged over 1000 independent experiments. It can be observed that, for all three significance levels, the rejection rates are smaller than α, and therefore the size of the test is controlled. Power analysis. We consider the linear regression setting, in which y|x∼(x^þ^*, 1), for þ^*∈^d with d=10. We consider the following pattern for the signal strength: þ^*_1=þ^*_2=1, þ^*_3=þ^*_4=2, þ^*_5=þ^*_6=3, þ^*_7=þ^*_8=4, þ^*_9=þ^*_10=5. In this example, the pairs of features in the set {(1,2),(3,4),(5,6),(7,8),(9,10)} have symmetric influence, and for any other pair the null hypothesis (<ref>) must be rejected. We use the score function T(x,y)=|y-x^| at significance level α=0.1 for three different choices of the model estimate, drawn as ∼(þ_0,σ^2 I_d) for three different values σ=1, 2, and 3. A smaller value of σ implies a better estimation of þ_0. The average rejection rates are depicted in Figure <ref>, where each 10×10 square corresponds to a different σ value (three plots in total). Specifically, the (i,j)-th cell in each plot denotes the average rejection rate of the symmetric influence hypothesis for features i and j. The rejection rates are obtained by averaging over 1000 independent experiments. First, for pairs belonging to the set above, the rejection rate is always smaller than the significance level α=0.1, so the size of the test is controlled. In addition, by decreasing the σ value (moving from right to left), the test achieves higher power (more dark blue regions). This is consistent with our expectation that the statistical power of our method depends on the quality of the score function T and of the model estimate; see Theorem <ref>. Regarding statistical power, within each plot, pairs with a higher contrast between their coefficient values have higher power. For instance, the pair of features (1,10) with coefficient values þ^*_1=1, þ^*_10=5 has rejection rates of 0.987, 0.768, 0.543 (for σ=1,2,3, respectively), while the pair (6,8) with coefficient values þ^*_6=3, þ^*_8=4 has rejection rates of 0.294, 0.097, 0.055 (for σ=1,2,3, respectively). § INFLUENCE OF TRAINING DATA ON OUTPUT MODEL In this section, we combine our closeness-of-influence test with the datamodel framework <cit.> to analyze the influence of training samples on the evaluations of the trained model on certain target examples. 
We first provide a brief overview on datamodels and later describe the experiments setup. §.§ Datamodels For training samples ^𝗍𝗋𝖺𝗂𝗇={(x_i,y_i)}_i=1:N consider a class of learning algorithm , where by class we mean a training mechanism (potentially randomized), such as training a fixed geometry of deep neural networks via gradient descent and a fixed random initialization scheme. In datamodels <cit.>, a new learning problem is considered, where feature vectors S are binary 0-1 vectors with size N with γ∈ (0,1) portion one entries, selected uniformly at random. Here S is an indicator vector for participation of N data points ^𝗍𝗋𝖺𝗂𝗇 in the training mechanism, i.e,. S_i=1 if and only if the i-th sample of ^𝗍𝗋𝖺𝗂𝗇 is considered for the training purpose via . For a fixed target example x, the response value is the evaluation (will be described later) of the output model (trained with samples indicated in S) on x, denoted by f_(x;S). This random sampling of data points from ^𝗍𝗋𝖺𝗂𝗇 is repeated m times, therefore data for the new learning problem is {(S_i,f_(x,S_i))}_i=1:m. The ultimate goal of datamodels is to learn the mapping S→ f_(x,S) via surrogate modeling and a class of much less complex models. In the seminal work of <cit.>, they show that using linear regression with ℓ_1 penalty (LASSO <cit.>) performs surprisingly well in learning the highly complex mapping of S→ f_(x,S). §.§ Motivation We are specifically interested in analyzing the influence of different pairs of training samples on a variety of test targets, and discover pairs of training samples that with high certainty influence the test target differently. We use the score function (f_(x,S)-x^)^2 for our closeness-of-influence test, where is the learned datamodel. We adopt this score function, mainly due to the promising performance of linear surrogate models in <cit.> for capturing the dependency rule between S and f_(x;S). In addition, the described sampling scheme in datamodels satisfies the symmetric structure as per Proposition <ref> (so τ_X=0). We would like to emphasize that despite the empirical success of datamodels, the interpretation of training samples with different coefficient magnitude in the obtained linear datamodel is not statistically accurate. Here we approach this problem through the lens of hypothesis testing and output p-values, to project the level of confidence in our findings. §.§ Experimental Setups and Results We consider the CIFAR-10 dataset <cit.>, which has N=50000 training samples along with 10000 test datapoints and 10 classes [airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck]. We consider γ=0.5 (portion of ones in S_i samples), and follow the same heuristics provided for f_(x;S) in <cit.>, which is the correct-class margin, defined as the logit value of the true class minus the highest logit value among incorrect classes. We use the datamodel data given in <https://github.com/MadryLab/datamodels-data>. The provided data has 310k samplings, where for each target example x (in the test data) the datamodel parameter ∈^N is estimated via the first 300k samples (10000 total number of datamodels for each test data). We use the additional 10k samples to run our closeness-of-fit test with the linear score function (f_(x;S)-x^)^2. Now, for each pair of training samples and a specific target test example, we can test for their closeness of influence. 
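The following sketch shows how such a pairwise test can be wired to held-out datamodel samples, reusing the closeness_of_influence_test sketch given earlier. The variable names (S for the 0/1 inclusion matrix, margins for the correct-class margins recorded for a fixed target, theta for the fitted linear datamodel of that target) are illustrative; τ_X=0 is justified by the fixed-size uniform sampling proposition above.

```python
import numpy as np

def datamodel_pair_test(S, margins, theta, i, j, alpha=0.05, rng=0):
    """Test whether training examples i and j influence a fixed target symmetrically,
    using held-out datamodel samples and the squared-residual score (f - s @ theta)^2."""
    score = lambda s, f: (f - s @ theta) ** 2
    # tau_X = 0: uniform sampling of fixed-size inclusion vectors is invariant
    # to swapping the coordinates i and j
    return closeness_of_influence_test(S, margins, i, j, score,
                                       tau=0.0, tau_X=0.0, alpha=alpha, rng=rng)
```

When many training pairs are screened for the same target, the resulting p-values can be passed to the benjamini_yekutieli helper shown earlier, as in the experiment described next.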
In the first experiment, for each two classes (can be the same) we choose two pictures as the training pair (randomly from the two classes), and for the target sample, we select randomly from the class of dog pictures. For each two classes, we repeat this process 20 times, and run our test (<ref>) with τ=0, and report all p-values (2000 in total). After running the Benjamini–Yekutieli procedure <cit.> (with log factor correction to control for dependency among p-values), we find three statistically significant results at α=0.2 with p-value=5× 10^-5 (for all three discoveries). Surprisingly, all three findings correspond to a similar test image, the pictures of training pairs and the one test image can be seen in Figure <ref>. It can be observed that in all findings one of the reported images is visually closer to the target image. This conforms well to obtained results that the null hypothesis (<ref>) which states that the two training images have equal influence on the target sample is rejected. We refer to Appendix B for the rest of experiments. § CONCLUDING REMARKS In this paper, we proposed a novel method to test the closeness of influence of a given pair of features on the response value. This procedure makes no assumption on the conditional law between the response value and features (ℒ(Y|X)). We first proposed a notion called "symmetric influence" that generalized the familiar concept of equal coefficient in parametric models. This notion is motivated to characterize the sensitivity of the conditional law with respect to swapping the features. We then formulated the closeness-of-influence testing problem as a tolerance hypothesis testing. We provide theoretical guarantees on type-I error rate. We then analyzed statistical power of our method for a general score function T, and show that for two specific learning problems i) linear regression settings, and 2) binary classification under a mixture of Gaussian models with a certain choice of score functions we can achieve full statistical power. Finally, we adopt the datamodel framework and use our closeness-of-influence test to find training samples that have different influence on the trained model. Several interesting venues for future research are in order. In particular, extending this framework for multiple testing (testing for multiple number of pairs) and still achieving valid statistical results. This can be done with generic multiple testing frameworks (similar to Benjamini–Yekutieli procedure used in Section <ref>) on the obtained p-values, but a method that is crafted for this setting can be more powerful. In addition, extending this framework for studying influence of a group of features (more that two) can be of great interest. §.§ Proof of Theorem <ref> We first establish the flip-sign property for the statistics W_ℓ. Concretely, we show that the signs of W_ℓ statistics under the null hypothesis, are i.i.d. Bernoulli's with probability. (Selective sequential step-up procedure) For multiple testing problem with m p-values p_1,…,p_m such that the null p-values are i.i.d. with p_j≥𝗎𝗇𝗂𝖿([0,1]), for c∈ (0,1) let k=max{1≤ k≤ m: #{j≤ k: p_j>c}/#{j≤ k: p_j≤ c }∨ 1} , and reject all H_j such that p_j≤ c and j≤k. Then for V being the number of false discoveries, and R being the total number of rejections, we have [V/R∨ 1]≤ q . Let W_ℓ=T(,)-T(_ℓ,), we now consider the following 1-bit p-values: p_j=1/2 , W_j>0 , 1 , W_j<0 . We must show that under the null hypothesis, they dominate the uniform distribution. 
More precisely, we need to show that (p_j≤ u)≤ u . Under the null hypothesis, we have (W_ℓ>0)=(W_ℓ<0)=1/2. Thereby, p_j with equal probability is 1/2 and 1. This brings us (p_j≤ u)= 0 , u< 1/2 , 1/2 , 1/2≤ u <1 , 1 , u=1 . This means that the p_j are superuniform p-values. In the next step, we try to find the proper value of c in the above lemma. By considering c=1/2 the expression in Lemma reduces to the following k=max{1≤ k≤ m: #{j≤ k: p_j>1/2}/#{j≤ k: p_j≤ 1/2 }∨ 1} , From the definition of p_j we get k=max{1≤ k≤ m: #{j≤ k: W_j>0}/#{j≤ k: W_j≤ 0 }∨ 1} , We only need to draw a connection between τ to k values. icml2023 3pt Supplementary Materials for “A general nonparametric framework for testing equal influence of features” 1pt § PROOF OF THEOREMS AND TECHNICAL LEMMAS §.§ Proof of Theorem <ref> Consider two data points z^(1)=(x^(1),y^(1)), z^(2)=(x^(2),y^(2)) drawn i.i.d. from the density function p_X,Y. For two features i,j, define π=(T(X^(1),Y^(1)) ≥ T(X^(2)_(i,j),Y^(2)) ) . We want to show that under the null hypothesis, the value π is concentrated around 1/2 with maximum distance of τ_X. First, from the symmetry between two i.i.d. data points we have (T(X^(1),Y^(1))≥ T(X^(2),Y^(2)))=1/2 . The underlying assumption is that in the case of equal values the tie is broken randomly. We introduce z^(2)=(x^(2)_(i,j),y^(2)). This brings us π-1/2 =(T(X^(2)_(i,j),Y^(2))≤ T(Z_1)) -(T(X^(2),Y^(2))≤ T(Z_1)) =[(T(Z^(2))≤ T(Z^(1))|Z^(1),Y^(2))] -[(T(Z^(2))≤ T(Z^(1))|Z^(1),Y^(2))] . In the next step, we let T^(1)=T(Z^(1)), T^(2)=T(Z^(2)), and T^(2)=T(^(2)). Then, by an application of Jenson's inequality we get |π-1/2| ≤[|(T^(2)≤ T^(1)|Z^(1),Y^(2))-(T^(2)≤ T^(1)|Z^(1),Y^(2))|] On the other hand, for some values z∈^d+1,y∈ consider the following measurable set: A_z,y={x∈^d: T(x,y)≤ T(z) } . By using this definition of set A_z,y in (<ref>) and shorthands W=(Z^(1),Y^(2)) we arrive at |π-1/2| ≤[|(X^2_(i,j)∈A_W |W)-(X^(2)∈ A_W|W)|] ≤[d_( p_X^(2)_(i,j)|W, p_X^(2)|W) ] , where the last inequality follows the definition of the total variation distance. Since Z^(1) and Z^(2) are independent random variables, we get that d_( p_X^(2)_(i,j)|W, p_X^(2)|W) =d_( p_X^(2)_(i,j)|Y^(2), p_X^(2)|Y^(2)) =d_( p_X_(i,j)|Y, p_X|Y) , where the last relation comes from the fact that random variable (x,y)∼ p_x,y and (x^(2),y^(2)) has a similar density function. Using the above relation in (<ref>) yields |π-1/2| ≤[d_( p_X^_(i,j)|Y^, p_X^|Y^) ] =d_( p_X_(i,j),Y, p_X,Y) . In the next step, for x∈^d and y∈ let p(x,y) and q(x,y) respectively denote the density functions of (X_(i,j),Y) and (X,Y). From the above relation we get |π-1/2| ≤1/2∫|p(x,y)-q(x,y) | x y . On the other hand, by rewriting the total variation distance of the joint random variables get |p(x,y)-q(x,y)| =|p(x)p(y|x)-q(x)q(y|x)| =|p(x)p(y|x)-p(x)q(y|x) +p(x)q(y|x)-q(x)q(y|x)| ≤ p(x)|p(y|x)-q(y|x)| + |p(x)-q(x)|q(y|x) . Plugging this into the above relation yields |π-1/2| ≤1/2∫ p(x)|p(y|x)-q(y|x)| x y +1/2∫ |p(x)-q(x)|q(y|x) x y . In the next step, by integration with respect to y we get |π-1/2| ≤1/2∫ p(x)|p(y|x)-q(y|x)| x y +1/2∫ |p(x)-q(x)| x . This implies that |π-1/2| ≤_X[d_(p_Y|X_(i,j),p_Y|X)]+d_(p_X,p_X_(i,j)) . Finally, under the null hypothesis <ref> and the fact that τ_X≥ d_( p_X, p_X_(i,j)) we get |π-1/2| ≤τ_X+τ . Any deviation from this range is accounted as evidence against the null hypothesis <ref>. 
In Algorithm <ref>, for each 1≤ m≤ n/2, it is easy to observe that each random variable (T(X^(m),Y^(m))≥ T(^(m),Y^(m)) ) is a Bernoulli with success probability π. In the next step, by an application of Hoeffding's inequality for every t≥ 0 and sum of n/2 independent Bernoulli random variables we get (|2/n∑_i=1^n/2(T(x_i,y_i)≤ T(_i,_i)) -π|≥ t ) ≤ 2exp(-nt^2) . Therefore, for statistics U_n as per Algorithm <ref> we get (|U_n -π |≥ t) ≤ 2exp(-nt^2) , ∀ t≥ 0 We next consider δ≥τ+τ_X and use triangle inequality to obtain (|U_n-1/2|≥δ) ≤(|U_n-π|+|π-1/2|≥δ) ≤(|U_n-π|≥δ-τ-τ_X) ≤ 2exp(-n(δ-τ-τ_X)^2) . Where in the penultimate relation we used (<ref>), and the last relation follows (<ref>). By letting α=δ-τ-τ_X, we get (|U_n-1/2|≥τ+τ_X+√(log2/α/n))≤α . This completes the proof. §.§ Proof of Proposition <ref> We start with þ_i=þ_j, and we want to show that the symmetric influence property holds. We have p_Y|X_(i,j)(y|x) =p_Y|X(y|x_(i,j)) =(Y=1|X=x_(i,j)) =(1+exp(-x_(i,j)^β))^-1 =(1+exp(-β_ix_j-β_jx_i-∑_ℓ≠ i,jx_ℓβ_ℓ))^-1 . Using β_i=β_j yields p_Y|X_(i,j(y|x) =(1+exp(-β_jx_j-β_ix_i-∑_ℓ≠ i,jx_ℓβ_ℓ))^-1 =(1+exp(-∑_ℓx_ℓβ_ℓ))^-1 =p_Y|X(y|x) . This completes the proof for the first part. For the other direction, suppose that the symmetric influence for i, j holds, thereby for every x∈^d we have (Y=+1|X_(i,j)=x)= (Y=+1|X=x) . By using p_Y|X_(i,j)(y|x)=p_Y|X(y|x_(i,j)) along with the logistic regression relation, we get (1+exp(-β_ix_j-β_jx_i-∑_ℓ≠ i,jx_ℓβ_ℓ))^-1 =(1+exp(-∑_ℓx_ℓβ_ℓ))^-1 . In the next step, using the function log(u/1-u) on the both sides, we get β_i x_i +β_j x_j=β_i x_j + β_j x_i . Since this must hold for all x_i,x_j values, we must have β_i=β_j. The proof for the linear regression setting follows the exact similar argument. §.§ Proof of Proposition <ref> Since x is a multivariate Gaussian, it means that its coordinates are jointly Gaussian random variables, therefore swapping the location of two coordinates i and j does not change the joint Gaussian property. On the other hand, from the linear transform x_(i,j)=P_ijx it is easy to arrive at x_(i,j)∼(P_ijμ,P_ijΣ P_ij). We are only left with upper bounding the KL divergence of density functions (μ,Σ) and (P_ijμ,P_ijΣ P_ij). For this end, we borrow a result from <cit.> for kl-divergence of multivariate Gaussian distributions. Formally we have, d_𝗄𝗅(𝒩(μ_1,Σ_1)𝒩(μ_2,Σ_2)) =1/2(logΣ_2/Σ_1-d+(Σ_2^-1Σ_1)+(μ_2-μ_1)^Σ_2^-1(μ_2-μ_1) ) . By replacing Σ_2=Σ and Σ_1= P_ijΣ P_ij along with the fact that P_ij=-1, we arrive at d_((X_(i,j))(X))=1/2(-d+(Σ^-1P_ijΣ P_ij)+(μ-P_ijμ)^Σ^-1(μ-P_ijμ) ) . Finally using Pinsker's inequality[ For two denisty functions p,q this holds d_(p,q)≤√(d_𝗄𝗅(pq)/2)] completes the proof. §.§ Proof of Proposition <ref> In this setup, form the construction of the feature vector x∈{0,1}^d it is easy to get that for every α∈{0,1}^d we have (x=α)=(|α|=m)/dm . From this structure, since swapping the coordinates does not change the number of non-zero entries of the binary feature vector, we get |α|=|α_(i,j)|. Thereby, we get (x=α)=(x_(i,j)=α) , ∀α∈{0,1}^d. Therefore d_((X),(X_(i,j)))=0. §.§ Proof of Theorem <ref> Let π= (T(X^(1),Y^(1))≥ T(X^(2)_(i,j),Y^(2)) . For the sake of simplicity, we adopt the following shorthands: T_1=T(X^(1),Y^(1)) and T_2=T(X^(2)_(i,j),Y^(2)). This gives us π =(T_1≥ T_2) =_T_1[(T_1≥ T_2|T_1) ] =_T_1[G_T(T_1)] =∫ G_T(t) F_T(t) =∫_0^1 G_T(F_T^-1(u)) u . In the next step, we let δ=2exp(-nβ^2), then by plugging this relation in the given condition in Theorem <ref> we arrive at |π-1/2|≥δ + τ+τ_X+√(log(2/α)/n) . 
We now focus on the decision rule (<ref>). Let τ'=τ+τ_X, then we get (Ψ(,)=1) =(|U_n-1/2|≥τ'+√(log(2/α)/n)) . On the other hand, from triangle inequality we have |U_n-1/2|≥ |π-1/2|-|U_n-π|. Plugging this into (<ref>) yields |U_n-1/2| ≥δ + τ'+√(log(2/α)/n)-|U_n-π| Combining this with (<ref>) gives us (Ψ(,)=1) ≥(δ≥ |U_n-π| ) =1-(δ≤ |U_n-π| ) . In the next step, we return to the given relation for U_n in Algorithm <ref>. From the definition of π, for each m we have (T(X^(m),Y^(m))≤ T(^(m),Y^(m)))=π . Therefore by an application of the Hoeffding's inequality we get ( |U_n-π| ≥δ)≤√(log(2/δ)/n) . Finally, recalling δ=2exp(-nβ^2) yields ( |U_n-π| ≥δ)≤β . Using this in (<ref>) completes the proof. In this case, statistical power not smaller than 1-β can be achieved. §.§ Proof of Theorem <ref> From the isotropic Gaussian distribution, we have τ_X=0. We next start by the ODC function G_T∘ F_T^-1. For this end, we start by the definition of F_T where for some non-negative t we have: F_T(t) =(|Y^(1)-^ X^(1)|≤ t) =(|(þ^*-)^ X^(1)+_1|≤ t) =(-t≤ (þ^*-)^ X^(1)+_1≤ t) . On the other hand, we know that x^ (-þ^*)+ has a Gaussian distribution (0,þ^*-_2^2+σ^2). This brings us F_T(t) =(-t≤ (þ^*-)^ X^(1)+_1≤ t) =(-t/√(þ^*-_2^2+σ^2)≤(þ^*-)^ X^(1)+_1/√(-þ^*_2^2+σ^2)≤t/√(-þ^*_2^2+σ^2)) =Φ(t/√(σ^2+þ^*-_2^2))-Φ(-t/√(σ^2+þ^*-_2^2)) =2Φ(t/√(σ^2+þ^*-_2^2))-1 , where the last line comes from the fact that Φ(t)+Φ(-t)=1 for every real value t. We introduce the shorthand _=_(i,j), then by a similar argument we get G_T(t) =(|Y^(2)-^ X^(2)_(i,j)|≤ t) =(|(þ^*-_)^ X^(2)+_2|≤ t) =2Φ(t/√(σ^2+þ^*-__2^2))-1 , By combining (<ref>) and (<ref>) we get F_T∘ G_T^-1(u)=2Φ( σ_2/σ_1Φ^-1(u+1/2) )-1 , for σ_2 and σ_1 given by σ_1^2=σ^2+þ^*-_2^2 , σ_2^2=σ^2+þ^*-__2^2 . We consider γ=σ_2/σ_1. Plugging this into the power expression in Theorem <ref> we arrive at F_T(G_T^-1(u))-u =2[ Φ(γΦ^-1(u+1/2))-u+1/2] . In the next step, by using the change of variable v=u+1/2 we get ∫_0^1[F_T(G_T^-1(u))-u] u = 4∫_1/2^1[ Φ(γΦ^-1(v))-v ] v . We then introduce function ψ:[0,+∞]→ as following ψ(γ)=4∫_1/2^1 Φ(γΦ^-1(v)) v . This implies that ψ(γ)-ψ(1)=∫_0^1[F_T(G_T^-1(u))-u] u . By differentiating ψ(.) with respect to γ in its original definition we obtain ψ/γ =4∂/∂γ∫_1/2^1 Φ(γΦ^-1(v)) v =4∫_1/2^1 Φ^-1(v) φ(γΦ^-1(v)) v . We next use s=Φ^-1(v) to arrive at the following ψ/γ =4∫_0^+∞ s φ(γ s) φ(s) s =4/2π∫_0^+∞ s exp(-s^2/2(1+γ^2)) s =2/π(γ^2+1)∫_0^+∞ sexp(-s^2/2) s =2/π(γ^2+1) . Since the differentiation of ψ with respect to γ is provided above, we then can use this and obtain the closed form equation for ψ(u). This indeed is given by ψ(γ)=C+2/πarctan(γ) , For some constant value C. In order to find C, note that ψ(1)=4∫_1/2^1v v=3/2. This brings us ψ(γ)=1+2/πarctan(γ). Using this in (<ref>) yields |∫_0^1[F_T(G_T^-1(u))-u] u| =|ψ(γ)-ψ(1)| =2/π|arctan(γ)-arctan(1)| . On the other hand, from the identity arctan(x)-arctan(y)=arctanx-y/1+xy we arrive at: |∫_0^1[F_T(G_T^-1(u))-u] u| =2/π|arctan(γ-1/1+γ)| =2/πarctan(|γ-1|/1+γ) , where in the last relation we used arctan(|.|)=|arctan(.)| (note that γ≥ 0). We next use γ=σ_2/σ_1 to get |∫_0^1[F_T(G_T^-1(u))-u] u|=2/πarctan(|σ_1-σ_2|/σ_1+σ_2) . On the other hand, from σ_1^2+σ_2^2≥ 2σ_1σ_2 we get Δ_T=|σ_1-σ_2|/|σ_1+σ_2|≥|σ_1^2-σ_2^2|/2(σ_1^2+σ_2^2) We then use this with the definition of σ_1,σ_2 to get Δ_T ≥1/2|þ^*-_2^2-þ^*-__2^2|/2σ^2+þ^*-_2^2+þ^*-__2^2 =1/2|-2^þ^*+2^_þ^*|/2σ^2+2þ^*_2^2+2_2^2-2^þ^*-2_^þ^* , where we used þ_2=þ__2. 
In the next step, since _,ℓ=_ℓ for all ℓ≠ i, j we get Δ_T ≥1/2|-_iþ^*_i-_jþ^*_j+_iþ^*_j+_jþ^*_i | /σ^2+þ^*_2^2+_2^2-^þ^*-_^þ^* =1/2|-_iþ^*_i-_jþ^*_j+_iþ^*_j+_jþ^*_i | /σ^2+þ^*-_2^2+^þ^*-_^þ^* In the next step, by using the observation that _,ℓ=_ℓ for all ℓ≠ i, j another time we get Δ_T ≥1/2|_i-_i||þ^*_i-þ^*_j| /σ^2+þ^*-^2+(þ^*_i-þ^*_j)(_i-_j) Thereby we get Δ_T ≥1/2|_i-_i||þ^*_i-þ^*_j| /σ^2+þ^*-^2+|þ^*_i-þ^*_j||_i-_j| Using the above relation in (<ref>) we get |∫_0^1[F_T(G_T^-1(u))-u] u|≥2/πarctan( 1/2|_i-_i||þ^*_i-þ^*_j| /σ^2+þ^*-^2+|þ^*_i-þ^*_j||_i-_j|) By recalling the given condition in Theorem <ref> we have |þ^*_i-þ^*_j|≥2tan(π/2(ρ_n(α,β,τ)))/1-2tan(π/2(ρ_n(α,β,τ))) (σ^2+-þ^*_2^2 )/|_i-_j| , By using tan(π/2(ρ_n(α,β,τ)))≤1/2 in the above relation we get 2/πarctan( 1/2|_i-_i||þ^*_i-þ^*_j| /σ^2+þ^*-^2+|þ^*_i-þ^*_j||_i-_j|)≥ρ_n(α,β,τ). By combining (<ref>) and (<ref>) we get |∫_0^1[F_T(G_T^-1(u))-u] u|≥ρ_n(α,β,τ) . Finally using Theorem <ref> completes the proof, §.§ Proof of Theorem <ref> We first show that in this case, (τ=0) for mixture of Gaussians, under the null hypothesis, we have τ_X=0. For this end, from the Bayes' formula it is easy to get (Y|X)=𝖡𝖾𝗋𝗇(g(x,μ)) with g(x,μ)=1/1+1-q/qe^-x^μ . With a similar argument, it can be observed that (Y|X)=𝖡𝖾𝗋𝗇(g(x,μ_(i,j))) . Given that d_(𝖡𝖾𝗋𝗇(a),𝖡𝖾𝗋𝗇(b))=|a-b|, under the null hypothesis (with τ=0) we must have g(x,μ)=g(x,μ_(i,j)) almost surely for all x values. This implies that x^μ=x^μ_(i,j) almost surely, thereby we have μ_i=μ_j. In the next step, we show that if μ_i=μ_j then τ_X=0. We then note that (X) =q (+μ,I_d)+ (1-q) (-μ, I_d) , (X_(i,j)) =q (+μ_(i,j),I_d)+ (1-q) (-μ_(i,j), I_d) . In the next step, using μ_i=μ_j we realize that μ_(i,j)=μ, therefore (X)=(X_(i,j)). This implies that τ_X=0. For the rest of the proof, we follow a similar argument as per proof of Theorem <ref> and we first characterize cdf functions F_T and G_T. In this case we have F_T(t) =(Y^(1)^ X^(1)≤ t) =q(^ X^(1)≤ t|Y^(1)=+1)+ +(1-q)(-^ X^(1)≤ t|Y^(1)=-1) =q(Z^+≤ t)+(1-q)(Z^-≤ t) , where Z_+∼(μ^,_2^2) and Z_-∼(-μ^,_2^2). This yields F_T(t) =qΦ(t-^μ/_2) +(1-q)(1-Φ(-t+^μ/_2) ) =Φ(t-^μ/_2) , where in the last line we used Φ(t)+Φ(-t)=1. We next introduce the shorthands _=_(i,j) and μ_=μ_(i,j), then by a similar argument we arrive at G_T(t)=Φ(t-_^μ/__2) Since _^μ=μ_^ and _= the expression for G_T(t) can be written as the following: G_T(t)=Φ(t-^μ_/_2) In the next step, it is easy to compute the quantile function G_T^-1(u)=_2 Φ^-1(u)+^μ_. This brings us F_T(G_T^-1(u))=Φ(Φ^-1(u)+^(μ_-μ)/_2) . By introducing λ=^(μ_-μ)/_2 and the function ρ(λ)=∫_0^1 Φ(Φ^-1(u)+λ) u we obtain ∫_0^1 F_T(G_T^-1(u)) u =ρ(λ) . On the other hand, by differentiating ρ(λ) with respect to λ we get ∂ρ/∂λ =∂/∂λ∫_0^1 Φ(Φ^-1(u)+λ) u =∫_0^1 φ(Φ^-1(u)+λ) u . In the next step, by using the change of variable s=Φ^-1(u) we get that ∂ρ/∂λ =∫_-∞^∞φ(s+λ)φ(s) s =1/2π∫_-∞^∞exp(-(s+λ)^2/2-s^2/2) s =exp(-λ^2/4)/2π∫_-∞^+∞exp(-(√(2)s+λ/√(2))^2/2) s =exp(-λ^2/4)/2√(2)π∫_-∞^+∞exp(-(t+λ/√(2))^2/2) t=exp(-λ^2/4)/2√(π) . Therefore we get ρ(λ)=ρ(0)+∫_0^λexp(-s^2/4)/2√(π)=ρ(0)+Φ(λ/√(2))-1/2, Since ρ(0)=1/2, we arrive at ρ(λ)=Φ(λ/√(2)). Next from the definition of ρ(λ) we have ∫_0^1[F_T(G_T^-1(u))-u] u=ρ(λ)-ρ(0) . In the next step, we use the equivalent value of λ in the function ρ(λ) to get ∫_0^1[F_T(G_T^-1(u))-u] u=Φ(^(μ_-μ)/√(2)_2)-Φ(0) . Therefore we get |∫_0^1[F_T(G_T^-1(u))-u] u |=|Φ(^(μ_-μ)/√(2)_2)-Φ(0)| . 
On the other hand, the normal cdf satisfies the following property |Φ(t)-1/2| = Φ(|t|)-1/2 , ∀ t∈ By using this we get |∫_0^1[F_T(G_T^-1(u))-u] u |= Φ(|^(μ_-μ)/√(2)_2| )-1/2 . In the next step, by using the fact that μ_,ℓ=μ_ℓ for ℓ≠ i,j we get that ^(μ_-μ) =_i(μ_,i-μ_i)+_j(μ_,j-μ_j) =_i(μ_j-μ_i)+_j(μ_i-μ_j) =-(_i-_j)(μ_i-μ_j) . Using this in (<ref>) yields |∫_0^1[F_T(G_T^-1(u))-u] u |= Φ(|(_i-_j)(μ_i-μ_j)|/√(2)_2)-1/2 On the other hand, by recalling the condition on |μ_i-μ_j| from Theorem <ref> we have |μ_i-μ_j|≥Φ^-1(ρ_n(α,β,0)+2Φ(|μ_i-μ_j|/√(2))-1/2)√(2)_2/|_i-_j| Combining (<ref>) and (<ref>) yields |∫_0^1[F_T(G_T^-1(u))-u] u | ≥ρ_n(α,β,0) . Finally, using Theorem <ref> completes the proof. § ADDITIONAL NUMERICAL EXPERIMENTS §.§ Size of the test (full experiments) We refer to Figure <ref> for experiment on the size of the test. §.§ Power of the test (full experiments) We refer to Figure <ref> for experiment on power of the test. §.§ binary classification under mixture of Gaussians In this section, we consider the problem of testing for symmetric influence for binary classification under a mixture of Gaussian model. We consider the data generative law (<ref>) with q=1/2 and feature dimension d=10. We consider =[1,2,3,…, 10] and let μ=/_2. We follow the score function given in Theorem <ref> and consider T(x,y)=y^ x for some ∼(0,I_d). We consider three different number of samples n=5000, 20000, 50000 for this experiment. Figure <ref> denote the results. Each number is averaged over 1000 independent experiments. It can be observed that pairs with higher contrast between their μ values are rejected more often. §.§ robustness of data models experiment In the second experiment, we consider a pair of training samples with 5 target examples. The first four targets are statistically significant (at level α=0.05), while the target 5 gives pval=0.21. We then replace the two training samples with some of their close other pictures, and compute the p-values for the new pair of images. We can see that the obtained p-values are somewhat close to the previous examples, which indicates the robustness of output results. The images along with p-values can be seen in Table 2.
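For completeness, here is a compact, self-contained script in the spirit of the synthetic linear-regression experiments reported above. It reuses the closeness_of_influence_test sketch from the main text and is meant as an illustration rather than a reproduction of the exact experimental pipeline (feature indices are 0-based here, and the number of repetitions is reduced).

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, alpha, n_rep = 10, 1000, 0.1, 200
theta_star = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0], 2)   # theta*_1 = theta*_2 = 1, ...

def rejection_rate(i, j, sigma_est=1.0):
    hits = 0
    for _ in range(n_rep):
        X = rng.standard_normal((n, d))
        y = X @ theta_star + rng.standard_normal(n)
        theta_hat = theta_star + sigma_est * rng.standard_normal(d)  # noisy estimate
        score = lambda x, t: abs(t - x @ theta_hat)
        _, reject, _ = closeness_of_influence_test(X, y, i, j, score,
                                                   alpha=alpha, rng=rng)
        hits += reject
    return hits / n_rep

print(rejection_rate(0, 1))   # null pair (equal coefficients): size, expected below alpha
print(rejection_rate(0, 9))   # high-contrast pair (coefficients 1 vs 5): power
```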
http://arxiv.org/abs/2306.01680v1
20230602165512
Bloch point nanospheres for the design of magnetic traps
[ "F. Tejo", "C. Zambrano-Rabanal", "V. Carvalho-Santos", "N. Vidal-Silva" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
]Bloch point nanospheres for the design of magnetic traps Escuela de Ingeniería, Universidad Central de Chile, Avda. Santa Isabel 1186, 8330601, Santiago, Chile Departamento de Ciencias Físicas, Universidad de La Frontera, Casilla 54-D, Temuco, Chile Universidade Federal de Viçosa, Departamento de Física, Avenida Peter Henry Rolfs s/n, 36570-000, Viçosa, MG, Brasil. [email protected] Departamento de Ciencias Físicas, Universidad de La Frontera, Casilla 54-D, Temuco, Chile Through micromagnetic simulations, this work analyzes the stability of Bloch points in magnetic nanospheres and the possibility of using an array of such particles to compose a system with the features of a magnetic trap. We show that a BP can be nucleated as a metastable configuration in a relatively wide range of the nanosphere radius compared to a quasi-uniform and vortex state. We also show that the stabilized Bloch point generates a quadrupolar magnetic field outside it, from which we analyze the field profile of different arrays of these nanospheres to show that the obtained magnetic field shares the features of magnetic traps. Some of the highlights of the proposed magnetic traps rely on the magnetic field gradients achieved, which are orders of magnitude higher than standard magnetic traps, and allow three-dimensional trapping. Our results could be useful in trapping particles through the intrinsic magnetization of ferromagnetic nanoparticles while avoiding the commonly used mechanisms associated with Joule heating. [ N. Vidal-Silva July 31, 2023 ================== Several propositions for applications of magnetic nanoparticles in spintronic-based devices demand the spin transport electronics of magnetic textures through magnetic fields or electric currents without moving the particle itself <cit.>. Nevertheless, manipulating and moving nanomagnets through external magnetic fields without changing the magnetic pattern of the system also generates exciting possibilities for a plethora of applications <cit.>. Within such propositions, an emergent possibility of applying magnetic nanoparticles is using their generated magnetostatic fields as magnetic traps (MTs) <cit.>, which consists of a system that uses a gradient of the magnetic field to confine charged or neutral particles with magnetic moments <cit.>, levitate magnetic nanoparticles <cit.>, and pinning neutral atoms in low temperatures for quantum storage <cit.>. MTs generally present a set of devices arranged to generate a quadrupolar magnetic field <cit.>. These field profiles can be obtained, for instance, by two ferromagnetic bars parallel to each other, with the north pole of one next to the south of the other. The same field profile can be generated by two spaced coils with currents in opposite directions or four pole tips, with two opposing magnetic north poles and two opposing magnetic south poles <cit.>. The magnetic field gradient of a quadrupole has the particularity of allowing atoms to leave from the MT due to the zero field strength located at its center <cit.>. Several solutions to avoid the particles escaping from the trap suggest adding a set of magnetic fields generated by an array of electric currents <cit.> to the quadrupolar field. The magnetic fields generated by these electric current distributions (I) scale as I/s, while their gradient and second derivatives scale as I/s^2 and I/s^3, respectively <cit.>. Here, s represents the characteristic length of the system. 
In this context, the smaller these MTs, the better the particle confinement, and several techniques to diminish their sizes were developed <cit.>. However, the miniaturization of MTs using an array of nanowires and coils for manipulating atoms faces the problem of energy dissipation by Joule heating <cit.>. In this context, the intrinsic dipolar fields of specific magnetic textures of ferromagnetic nanoparticles emerge as natural candidates to compose nanosized MTs <cit.>. A promising proposition to adopt nanosized magnetic textures as sources of magnetic field gradient is using the magnetostatic field generated by spin textures in chiral magnets <cit.>. Indeed, because the magnetostatic field generated by a skyrmion lattice is similar to that created by two helices carried by electric currents <cit.>, nanoscaled MTs can be engineered by stacking chiral ferromagnets hosting skyrmions <cit.>. Another exciting result regarding magnetostatic fields produced by topological spin textures is the generation of a quadrupolar field by just one magnetic nanoelement, as evidenced by Zambrano et. al. <cit.> for a magnetic nanosphere hosting a Bloch point (BP). Nevertheless, in that case, the nanosphere is located at the center of the quadrupolar field, reducing the feasibility of applying this only structure as a magnetic trap. Following these ideas and motivated by the proposition of stacking skyrmion lattices to compose MTs, we analyze, through micromagnetic simulations, the possibility of using a BP array as an MT. We start by exploring the stability of a BP on a nanosphere as a function of their geometrical and magnetic parameters. After determining the magnetostatic field of a BP, we show that an array of four BP nanospheres generate a magnetic field gradient with all properties to be applied as an MT. Our main focus is presenting a proposition to use BP nanospheres as sources of magnetic fields in MTs. Therefore, we obtain the stable and metastable states of a ferromagnetic nanosphere as a function of its radius, R, and magnetic parameters. The analysis is performed through micromagnetic simulations using the OOMMF code <cit.>, a well-known tool that agrees well with experimental results on describing the magnetization of nanoparticles. In the simulations, we consider three values to M_s and the exchange stiffness, A, characterizing iron (M_s ≈ 1700 kA/m and A = 21 pJ/m), Permalloy (Ms ≈ 850 kA/m and A = 13 pJ/m), and cobalt (Ms ≈ 1450 kA/m and A = 56 pJ/m). To simulate a smooth spherical geometry, we consider a cubic cell with the size of 0.5×0.5×0.5 nm^3. The local and global minima are obtained by comparing the total energy, E, of three magnetic profiles: quasi-uniform, where the magnetic moments slightly deviate from the purely parallel direction <cit.>; vortex, characterized by a curling magnetization field around an out-of-plane core <cit.>; and BP configuration, characterized by two magnetic bobbers <cit.> separated by a texture that, in a closed surface around its center, the magnetization field covers the solid angle an integer number of times <cit.>. These magnetic patterns are obtained by relaxing the system from three different configurations and determining the total energy, E=E_x+E_d, of the relaxed state. Here, E_x and E_d are the exchange and dipolar contributions to the total energy. The first initial state consists of a single domain, which, after relaxation, reaches a quasi-uniform configuration. 
The second and third initial configurations consist of a rigid vortex and a BP artificially imposed. Both states are subsequently allowed to relax, yielding a vortex and a BP as metastable configurations of the system, respectively. The energies of the final states for a nanosphere of Fe, Py, and Co are shown in Fig. <ref>. One notices that, due to the role that the exchange interaction plays in systems of small size, the quasi-uniform state appears as the ground state when the nanosphere radius is smaller than a threshold value of R_c≈15 nm (Fe), R_c≈25 nm (Py), and R_c≈30 nm (Co). Nevertheless, the contribution of the dipolar energy increases with the system size, and at these threshold values, both the BP and the vortex become energetically favorable. Indeed, one can notice that the vortex configuration corresponds to the ground state, while the BP has a slightly higher energy. As a result, the BP configuration is a metastable state, whereas the vortex is the more stable state. Therefore, we claim that under certain conditions, a BP can be stabilized, and conclude that in addition to its topological protection, the BP also has energetic metastability, compared to a quasi-uniform state, for radii greater than the material-dependent threshold value. To diminish the computational effort, we will focus our discussion on an Fe nanosphere with R=15 nm, which is the lower limit of the critical radius allowing BP metastability and which, as can be appreciated, possesses the minimum energy difference with respect to the vortex configuration. Nevertheless, no qualitative changes to the results presented here should be observed if we consider Py or Co nanospheres hosting BPs. After showing that BPs can appear as metastable states compared to quasi-uniform and vortex configurations, we analyze the properties of the magnetostatic field of such a system. The vector field of a BP can be parameterized by the normalized magnetization written in spherical polar coordinates as M/M_s = (sinΘcosΦ, sinΘsinΦ, cosΘ), where M_s is the saturation magnetization. Under this framework, the magnetic profile of a BP configuration can be modeled with the ansatz <cit.> Θ(θ) = pθ + π(1-p)/2 and Φ(ϕ) = ϕ + γ. Here, θ and ϕ are the standard polar and azimuthal angles of the spherical coordinates, and p=±1 is the BP polarity, which determines the orientation of the magnetic moments at the nanosphere poles along the z-axis. In this case, the magnetic moments point outward or inward for p=+1 and p=-1, respectively, as depicted in Figs. <ref>a) and b). The parameter γ determines the BP helicity. For instance, γ=0 represents a hedgehog magnetization field pointing outward from the sphere center, while γ=π/2 depicts a tangent-to-surface configuration at the sphere equator. The ansatz (<ref>) has been previously used to determine the magnetostatic field outside a BP nanosphere <cit.>, given by H(r,θ) = [M_s R^4/(48 r^4)] (1-cosγ) [2 P_2(cosθ) r̂ + sin 2θ θ̂], where P_2(x) is the Legendre polynomial of degree 2, and r is the radial coordinate of a point outside the nanosphere. Given that γ adopts a constant quasi-tangential configuration at the nanosphere equator <cit.>, one observes that the magnetostatic field outside the considered system is a quadrupole, which is consistent with Eq. (<ref>) and is also obtained in our micromagnetic simulations, as shown in Fig. <ref>c). 
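To make the stray-field expression above concrete, the following Python sketch evaluates the quadrupolar field of a single BP nanosphere and linearly superposes four such spheres, anticipating the array configurations discussed next. It is only an illustration of the far-field formula: the material parameters correspond to the Fe sphere considered here (M_s ≈ 1700 kA/m, R = 15 nm), γ is fixed to the quasi-tangential value π/2, the field is reported as μ_0H in tesla, polarity effects and the mutual interaction between the spheres are ignored, and the grid and sphere positions are illustrative assumptions rather than the simulated geometry.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def bp_stray_field(x, y, z, Ms=1.7e6, R=15e-9, gamma=np.pi / 2, centre=(0.0, 0.0, 0.0)):
    """mu0*H (in T) outside a BP nanosphere centred at `centre`, following
    H(r,theta) = Ms R^4/(48 r^4) (1 - cos(gamma)) [2 P2(cos(theta)) r_hat + sin(2 theta) theta_hat].
    The far-field expression is only meaningful for r > R; interior points are set to NaN."""
    dx, dy, dz = x - centre[0], y - centre[1], z - centre[2]
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    rs = np.maximum(r, 1e-12)                      # avoid division by zero at the centre
    theta = np.arccos(np.clip(dz / rs, -1.0, 1.0))
    phi = np.arctan2(dy, dx)
    amp = Ms * R**4 / (48.0 * rs**4) * (1.0 - np.cos(gamma))
    P2 = 0.5 * (3.0 * np.cos(theta) ** 2 - 1.0)    # Legendre polynomial of degree 2
    Hr, Hth = 2.0 * amp * P2, amp * np.sin(2.0 * theta)
    st, ct, sp, cp = np.sin(theta), np.cos(theta), np.sin(phi), np.cos(phi)
    # spherical (r, theta) components -> Cartesian components
    H = np.stack([Hr * st * cp + Hth * ct * cp,
                  Hr * st * sp + Hth * ct * sp,
                  Hr * ct - Hth * st], axis=-1)
    return np.where((r > R)[..., None], MU0 * H, np.nan)

# Linear superposition of four spheres on the mid-plane (mutual interactions neglected).
xs = np.linspace(-60e-9, 60e-9, 121)
X, Y = np.meshgrid(xs, xs, indexing="ij")
Z = np.zeros_like(X)
centres = [(-30e-9, -30e-9, 0.0), (30e-9, -30e-9, 0.0),
           (-30e-9, 30e-9, 0.0), (30e-9, 30e-9, 0.0)]
B = sum(bp_stray_field(X, Y, Z, centre=c) for c in centres)
print("minimum |B| on the mid-plane (T):", np.nanmin(np.linalg.norm(B, axis=-1)))
```

A map of |B| obtained this way already exhibits the local and global minima that a full micromagnetic calculation refines for the arrays discussed below.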
Although this field profile seems to be a good candidate for MTs, the nanosphere is located at the center of the quadrupolar field, which prevents the use of this single structure for this application. Therefore, we discuss the possibility of using an array of such elements to generate a magnetic field gradient with the features of an MT. The proposed arrays consist of four Fe BP nanospheres with a radius of 15 nm. These nanospheres are symmetrically positioned at the vertices of a square inside a rectangular prism with dimensions 120×120×60 nm^3 (see Fig. <ref>-a)). The proposed arrays differ by the square side size and the BP polarities, as presented in Table <ref>, where p_i refers to the BP polarity at vertex i. It is important to point out that in all the simulated arrays, the chirality acquired by the BPs emerges as a consequence of the energy minimization <cit.>. Firstly, we analyze the magnetic field profile of Array I. The main results are summarized in Fig. <ref> and Fig. <ref>a). In the former, we present snapshots of the modulus of the magnetostatic field (H_d) in longitudinal sections of the xy and yz planes. The color map of the magnetic field allows us to notice that Array I gives rise to a range of magnetic fields going from H_d ≈ 0.5 T in the regions surrounding the nanospheres down to a minimum value of H_d ≈ 6.8 × 10^-4 T at the center of the array, as shown in Figs. <ref>a) and b). The detailed analysis presented in Fig. <ref>a) of the field profile in the longitudinal sections of the xy and yz planes reveals that the magnetic field generated by Array I presents local and global minima depending on the position. While the local minima occur midway between two adjacent nanospheres, the global one is at the system center. Therefore, the presented results show the existence of a magnetic field gradient in space, which can be numerically determined. We find that the field gradients are on the order of ∼ 10^5-10^6 T/m, much higher than the field gradients of conventional MTs <cit.>. Such high magnetic field gradients yield tighter confinement, making systems with this property very interesting for applications in MTs <cit.>. Also, the similar behavior of the magnetic field in both longitudinal sections allows the symmetric confinement of particles at three different places (the local and global field minima) if they are loaded along the x or y axes. Finally, local minima have the advantage of ensuring higher stability for the trapped particles. Because changing the distance among the nanoparticles affects the strength of the magnetostatic field <cit.>, we also propose changes in the structure of the array. Therefore, we analyze the field profile when the nanosphere polarity distribution is given by Array II. Fig. <ref>b) shows the field distribution and its strength as a function of position along the longitudinal sections of the xy and yz planes. One notices that the generated magnetostatic field has exactly the same behavior in both longitudinal sections, reaching its maximum values in the space between two neighboring spheres (≈ 30 nm and ≈ 90 nm) and a unique global minimum at the array center. The appearance of just one minimum weakens the case for implementing Array II as an MT. Finally, we consider the magnetic field generated by Array III, whose results are given in Fig. <ref>c). In this case, the field profiles of the longitudinal sections along the xy and yz planes are different. 
Indeed, the magnetic field along the xy plane has two maxima between the BP nanospheres 1 and 3, and 2 and 4, and a nonzero minimum in the array center. On the other hand, the field profile along the yz plane presents a maximum value in the array center and two minima between BP nanospheres 1 and 2, and 3 and 4. Therefore, Array III generates a magnetic field with a triple saddle point, and this array does not work as a potential MT since the magnetic field does not have the features to stabilize atoms or particles with magnetic moments. The above-described results show that different distributions of magnetic fields are obtained depending on the BP nanosphere polarity distribution. Two of these fields present the features to be used as MTs. The main advantages of using an array of BP nanospheres to generate a gradient of magnetic fields are the lower cost of production when compared to lithographic processes that use materials such as Al_2O_3, AIN, Si, and GaAs to fabricate conductor nanowires in a chip <cit.>. In addition, the proposed setting also has the advantage of avoiding energy losses due to the heating of the nanospheres. We highlight that although the BPs are metastable states, the increase in the temperature of the MT due to the motion of the trapped particles is not big enough to denucleate the BP from the nanospheres. In summary, we have analyzed the magnetostatic properties of magnetic nanospheres hosting a BP as a metastable state. In addition to their topological protection, BPs have energetic metastability in nanospheres with a radius above a threshold value that depends on the material parameters. After discussing the energy of BP nanospheres, we determine the magnetostatic field generated outside it. The micromagnetic simulations reveal the appearance of a quadrupolar field, as previously reported from analytical calculations <cit.>. We then analyzed the magnetic field profile of different arrays of BP nanospheres to propose the production of a magnetic trap. We showed that the array with the better features to be used as magnetic traps consists of four nanospheres hosting BPs with positive polarities. Although we analyzed the proposal by projecting the magnetostatic field profiles into a given plane, they are essentially three-dimensional quadrupolar fields. This feature adds a new degree of freedom to potential MTs by allowing charging particles from different directions of 3D space. Acknowledgments: The work of F.T. was supported by ANID + Fondecyt de Postdoctorado, convocatoria 2022 + Folio 3220527. V.L.C.-S. acknowledges the support of the INCT of Spintronics and Advanced Magnetic Nanostructures (INCT-SpinNanoMag), CNPq 406836/2022-1. V.L.C.-S. also thanks the Brazilian agencies CNPq (Grant No. 305256/2022-0) and Fapemig (Grant No. APQ-00648-22) for financial support. N. V-S acknowledges funding from ANID Fondecyt Iniciacion No. 11220046. Data availability: The data that support the findings of this study are available from the corresponding author upon reasonable request. 99 Shinjo-Book T. Shinjo, Nanomagnetism and spintronics. Elsevier, First edition (2009). Hirohata-JMMM A. Hirohata, K. Yamada, Y. Nakatani, I.-L. Prejbeanu, B. Diény, P. Pirro, and B. Hillebrands: Review on spintronics: Principles and device applications. J. Magn. Mag. Mat. 509, 166711 (2020). Hrcak G. Hrkac, J. Dean, and D. A. Allwood: Nanowire spintronics for storage class memories and logic. Philos. Trans. R. Soc. A 369, 3214 (2011). Vander-JPD J. Vandermeulen, B. Van de Wiele, L. 
Dupré, and B Van Waeyenberge: Logic and memory concepts for all-magnetic computing based on transverse domain walls. J. Phys. D: Appl. Phys. 48, 275003 (2015). Goolap-SciRep S. Goolaup, M. Ramu, C. Murapaka, and W. S. Lew: Transverse Domain Wall Profile for Spin Logic Applications. Sci. Rep. 5, 9603 (2014). Torrejon-Nat J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima, H. Kubota, S. Yuasa, M. D. Stiles, and J. Grollier: Neuromorphic computing with nanoscale spintronic oscillators. Nature 547, 428 (2017). Grolier-Nat J. Grollier, D. Querlioz, K. Y. Camsari, K. Everschor-Sitte, S. Fukami, and M. D. Stiles: Neuromorphic spintronics. Nat Electron 3, 360 (2020). Parkin-Nat S. Parkin, and S.-H. Yang: Memory on the racetrack. Nat. Nano. 10, 195 (2015). Parkin-Nat2 K. Gu, Y. Guan, B. K. Hazra, H. Deniz, A. Migliorini, W. Zhang, and S. S. P. Parkin: Three-dimensional racetrack memory devices designed from freestanding magnetic heterostructures. Nat. Nano. 17, 1065 (2022). CellMark D. Högemann, V. Ntziachristos, L. Josephson, R. Weissleder, High throughput magnetic resonance imaging for evaluating targeted nanoparticle probes. Bioconjug. Chem. 13, 116 (2002). Drug1 J. P. Fortin, F. Gazeau, and C. Wilhelm, Intracellular heating of living cells through Néel relaxation of magnetic nanoparticles. Eur. Biophys. J. 37, 223 (2008). Drug2 F. Ye, A. Barrefelt, H. Asem, M. Abedi-Valugerdi, I. El-Serafi, M. Saghafian, K. Abu-Salah, S. Alrokayan, M. Muhammed, and M. Hassan, Biodegradable polymeric vesicles containing magnetic nanoparticles, quantum dots and anticancer drugs for drug delivery and imaging. Biomaterials 35, 3885 (2014). Drug3 B. Shen, Y. Ma, S. Yu, and C. Ji, Smart Multifunctional Magnetic Nanoparticle-Based Drug Delivery System for Cancer Thermo-Chemotherapy and Intracellular Imaging. ACS Appl. Mater. Interfaces 8, 24502 (2016). Contrast C. Billotey, C. Wilhelm, M. Devaud, J. C. Bacri, J. Bittoun, and F. Gazeau, Cell internalization of anionic maghemite nanoparticles: Quantitative effect on magnetic resonance imaging. Magn. Reson. Med. 49, 646 (2003). Hyp1 P. Moroz, S. K. Jones, and B. N. Gray, Magnetically mediated hyperthermia: Current status and future directions. Int. J. Hyperth. 18, 267 (2002). Hyp2 X. Liu, Y. Zhang, Y. Wang, et al., Comprehensive understanding of magnetic hyperthermia for improving antitumor therapeutic efficacy. Theranostics 10, 3793 (2020). Hyp3 H. Gavilán, T. Fernández-Cabada, N. Soni, M. Cassani, B. T. Mai, R. Chantrell, and T. Pellegrino, Magnetic nanoparticles and clusters for magnetic hyperthermia: optimizing their heat performance and developing combinatorial therapies to tackle cancer. Soc. Rev. 50, 11614 (2021). Hyp4 N. Hallali, P. Clerc, D. Fourmy, V. Gigoux, and J. Carrey, Influence on cell death of high frequency motion of magnetic nanoparticles during magnetic hyperthermia experiments. Appl. Phys. Lett. 109, 032402 (2016). Kim-Nat D.-H. Kim, E. Rozhkova, I. Ulasov, S. Bader, T. Rajh, M. Lesniak, and V. Novosad, Biofunctionalized magnetic-vortex microdiscs for targeted cancer-cell destruction. Nature Mater. 9, 165 (2009). Trap1 A. N. Ii, S.-C. Lin, B. Lepene, W. Zhou, K. Kehn-Hall, and M. L. van Hoek, Use of magnetic nanotrap particles in capturing Yersinia pestis virulence factors, nucleic acids and bacteria. J. Nanobiotechnology 19, 186 (2021). Golub R. Golub, and J. B. Pendlebury, Ultra-cold neutrons. Rep. on Prog. Phys. 42, 439 (1979). Kugler K. J. Kügler, K. Moritz, W. Paul, and U. 
Trinks, Nestor — A magnetic storage ring for slow neutrons. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 228, 240 (1983). Artetal L. A. Artsimovich, A. C. Kolb, K. S. Pease, and H. P. Furth. Controlled Thermonuclear Reactions. Phys. Today, 18, 75 (1965). Kral N. A. Krall, and A. W. Trivelpiece. Principles of Plasma Physics. International series in pure and applied physics. McGraw-Hill (1973). Anderson M. H. Anderson, J.R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell. Observation of Bose-Einstein Condensation in a Dilute Atomic Vapor. Science 269, 5221 (1995). Pritchard D. E. Pritchard. Cooling Neutral Atoms in a Magnetic Trap for Precision Spectroscopy. Phys. Rev. Lett. 51, 1336 (1983). Bradley C. C. Bradley, and R. G. Hulet, Laser Cooling and Trapping of Neutral Atoms. Exp. Meth. Phys. Sci. 29, 129 (1996). Gorgier-NJP R. Corgier, S. Amri, W. Herr, H. Ahlers, J. Rudolph, D. Guéry-Odelin, E. M. Rasel, E. Charron, and N. Gaaloul. Fast manipulation of Bose–Einstein condensates with an atom chip. New J. Phys. 20, 055002 (2018). Kustura-PRB K. Kustura, V. Wachter, A. E. R. López, and C. C. Rusconi. Stability of a magnetically levitated nanomagnet in vacuum: Effects of gas and magnetization damping. Phys. Rev. B 105, 174439 (2022). Briegel H.-J. Briegel, T. Calarco, D. Jaksch, J. I. Cirac, and P. Zoller. Quantum computing with neutral atoms. Journal of Modern Optics, 47, 415 (2000). FORTAG-RMP J. Fortágh and C. Zimmermann. Magnetic microtraps for ultracold atoms. Rev. Mod. Phys. 79, 235 (2007). Henriet L. Henriet, L. Beguin, A. Signoles, T. Lahaye, A. Browaeys, G. O. Reymond, and C. Jurczak. Quantum computing with neutral atoms. Quantum 4, 327 (2020). Singh-Laser V. Singh, V. B. Tiwari. and S. R. Mishra. On the continuous loading of a U-magnetooptical trap on an atom-chip in an ultra high vacuum. Laser Phys. Lett. 17. 035501 (2020). Quad1 O. Morizot, C. L. Garrido Alzar, P.-E. Pottie, V. Lorent, and H. Perrin, Trapping and cooling of rf-dressed atoms in a quadrupole magnetic field. J. Phys. B At. Mol. Opt. Phys. 40, 4013 (2007). Quad2 Z. Zhang, K. Huang, and C.-H. Menq, Design, implementation, and force modeling of quadrupole magnetic tweezer. IEEE/ASME Trans. Mechatron. 15, 704 (2010). Quad3 S. A. Zonouzi, R. Khodabandeh, H. Safarzadeh, H. Aminfar, Y. Trushkina, M. Mohammadpourfard, M. Ghanbarpour, and G. S. Alvarez, Experimental investigation of the flow and heat transfer of magnetic nanofuid in a vertical tube in the presence of magnetic quadrupole field. Exp. Term. Fluid. Sci. 91, 155 (2018). Bergman T. H. Bergeman, P. McNicholl, J. Kycia, H. Metcalf, and N. L. Balazs. Quantized motion of atoms in a quadrupole magnetostatic trap. J. Opt. Soc. Am. B 6, 2249 (1989). Sukumar C. V. Sukumar, and D. M. Brink, Spin-flip transitions in a magnetic trap. Phys. Rev. A 56, 2451 (1997). Weinstein J. D. Weinstein and K. G. Libbrecht, Microscopic magnetic traps for neutral atoms. Phys. Rev. A 52, 4004 (1995). Fortagh J. Fortagh, A. Grossmann, C. Zimmermann, and T. W. Hänsch, Miniaturized Wire Trap for Neutral Atoms. Phys. Rev. Lett. 81, 5310 (1998). Hess H. F. Hess, G. P. Kochanski, J. M. Doyle, N. Masuhara, D. Kleppner, and T. J. Greytak, Magnetic trapping of spin-polarized atomic hydrogen. Phys. Rev. Lett. 59, 672 (1987). Raab E. L. Raab, M. Prentiss, A. Cable, S. Chu, and D. E. Pritchard, Trapping of Neutral Sodium Atoms with Radiation Pressure. Phys. Rev. Lett. 59, 2631 (1987). Folman R. Folman, P. Krúger, D. 
Cassettari, B. Hessmo, T. Maier, and J. Schmiedmayer, Controlling Cold Atoms using Nanofabricated Surfaces: Atom Chips. Phys. Rev. Lett. 84, 4749 (2000). Muller D. Müller, D. Z. Anderson, R. J. Grow, P. D. D. Schwindt, and E. A. Cornell, Guiding Neutral Atoms Around Curves with Lithographically Patterned Current-Carrying Wires. Phys. Rev. Lett. 83, 5194 (1999). Dekker N. H. Dekker, C. S. Lee, V. Lorent, V., J. H. Thywissen, S. P. Smith, M. Drndic, M., R. M. Westervelt, and M. Prentiss, Guiding Neutral Atoms on a Chip. Phys. Rev. Lett. 84, 1124 (2000). Hansel W. Hänsel, J. Reichel, P. Hommelhoff, and T. W. Hänsch, Magnetic Conveyor Belt for Transporting and Merging Trapped Atom Clouds. Phys. Rev. Lett. 86, 608 (2001). Ott H. Ott, J. Fortagh, G. Schlotterbeck, A. Grossmann, and C. Zimmermann, Bose-Einstein Condensation in a Surface Microtrap. Phys. Rev. Lett. 87, 230401 (2001). Potting C. Henkel, S. Pötting, and M. Wilkens, Loss and heating of particles in small and noisy traps. Appl. Phys. B 69, 379 (1999). Trap2 J.-W. Kim, H.-K. Jeong, K. M. Southard, Y.-W. Jun, and J. Cheon, Magnetic Nano-tweezers for Interrogating Biological Processes in Space and Time. Acc Chem Res. 51, 839 (2018). Vuletik-PRL V. Vuletic, T. Fischer, M. Praeger, T. W. Hänsch, and C. Zimmermann, Microscopic Magnetic Quadrupole Trap for Neutral Atoms with Extreme Adiabatic Compression. Phys. Rev. Lett. 80, 1634 (1998). Jian-JPB B. Jian and W. A. van Wijngaarden. A linear array of 11 double-loop microtraps for ultracold atoms. J. Phys. B: At. Mol. Opt. Phys. 47, 215301 (2014). Roy-SciRep R. Roy, P. C. Condylis, V. Prakash, D. Sahagun, and B. Hessmo. A minimalistic and optimized conveyor belt for neutral atoms. Sci. Rep. 7, 13660 (2017). Luo-NJP X. Luo, L. Wu, J. Chen, R. Lu, R. Wang, and L. You. Generating an effective magnetic lattice for ultracold atoms. New J. Phys. 17, 083048 (2015). Singh-JAP V. Singh, V. B. Tiwari, A. Chaudhary, R. Shukla, C. Mukherjee , and S. R. Mishra. Development and characterization of atom chip for magnetic trapping of atoms. J. Appl. Phys. 133, 084402 (2023). Amir-PS A. Mohammadi, S. Ghanbari, and A. Pariz. A two-dimensional permanent magnetic lattice for ultracold atoms. Phys. Scr. 88, 015601 (2013). West2012 A. D. West, K. J. Weatherill, T. J. Hayward, P. W. Fry, T. Schrefl, M. R. J. Gibbs, C. S. Adams, D. A. Allwood, and I. G. Hughes, Realization of the manipulation of ultracold atoms with a reconfigurable nanomagnetic system of domain walls. Nano letters 12, 4065-4069 (2012). Allwood-2006 D. A. Allwood, T. Schrefl, G. Hrkac, I. G. Hughes, and C. S. Adams. Mobile atom traps using magnetic nanowires. Applied physics letters 89, 014102 (2006). Skyrmion1 R. Qin and Y. Wang, Magnetostatics of magnetic skyrmion crystals. New J. Phys. 20, 063029 (2018). Skyrmion2 R. Qin and Y. Wang, Control of ultracold atoms with a chiral ferromagnetic film. Phys. Rev. A 99, 013401 (2019). Skyrmion3 R. Qin and Y. Wang, Skyrmion-based magnetic traps for ultracold atoms. Phys. Rev. A 101, 053428 (2020). Zambrano-SciRep C. Zambrano‑Rabanal, B. Valderrama, F. Tejo, R. G. Elías, A. S. Nunez, V. L. Carvalho‑Santos, and N. Vidal‑Silva, Magnetostatic interaction between Bloch point nanospheres. Sci. Rep. 13, 7171 (2023). oommf M. J. Donahue and D. G. Porter, National Institute of Standards and Technology Interagency Report NISTIR No. 6376, 1999. Landeros P. Landeros, J. Escrig, D. Altbir, M. Bahiana, and J. d'Albuquerque e Castro. Stability of magnetic configurations in nanorings. J. Appl. Phys. 100, 044311 (2006). 
Vagson V. L. Carvalho-Santos, W. A. Moura-Melo, and A. R. Pereira. Miniaturization of vortex-comprising system using ferromagnetic nanotori. J. Appl. Phys. 108, 094310 (2010). Riveros2016 A. Riveros, N. Vidal-Silva, P. Landeros, D. Altbir, E. E. Vogel, and J. Escrig. Magnetic vortex core in cylindrical nanostructures: Looking for its stability in terms of geometric and magnetic parameters. J. Magn. Magn, 401, 848-852 (2016). bobbers F. N. Rybakov, A. B. Borisov, S. Blügel, and N. S. Kiselev: New type of stable particlelike states in chiral magnets. Phys. Rev. Lett. 115, 117201 (2015). MaloSlo A. P. Malozemoff, and J. C. Slonczewski, Magnetic Domain Walls in Bubble Materials (Academic, New York, 1979). Moreno R. Moreno, V. L. Carvalho-Santos, D. Altbir, and O. Chubykalo-Fesenko, Detailed examination of domain wall types, their widths and critical diameters in cylindrical magnetic nanowires. J. Magn. Mag. Mat. 542 168495 (2022). Elias-EPL R.G. Elías, and A. Verga, Magnetization structure of a Bloch point singularity. Eur. Phys. J. B 82, 159 (2011). Pyly-PRB O. V. Pylypovskyi, D. D. Sheka, and Y. Gaididei, Bloch point structure in a magnetic nanosphere. Phys. Rev. B 85, 224401 (2012). Tejo-SciRep F. Tejo, R. H. Heredero, O. Chubykalo‑Fesenko, and K. Y. Guslienko, The Bloch point 3D topological charge induced by the magnetostatic interaction. Sci. Rep. 11, 21714 (2021). Reichel J. Reichel and V. Vuletic, Atom Chips 1st ed. Wiley-VCH (2011). Aldrich S. Aldrich, MilliporeSigma | Life Science Products & Service Solutions. https://www. sigmaaldrich.com/ (2022).
http://arxiv.org/abs/2306.03030v2
20230605164841
Benchmarking Large Language Models on CMExam -- A Comprehensive Chinese Medical Exam Dataset
[ "Junling Liu", "Peilin Zhou", "Yining Hua", "Dading Chong", "Zhongyu Tian", "Andrew Liu", "Helin Wang", "Chenyu You", "Zhenhua Guo", "Lei Zhu", "Michael Lingzhi Li" ]
cs.CL
[ "cs.CL" ]
Recent advancements in large language models (LLMs) have transformed the field of question answering (QA). However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam, sourced from the Chinese National Medical Licensing Examination. CMExam consists of 60K+ multiple-choice questions for standardized and objective evaluations, as well as solution explanations for model reasoning evaluation in an open-ended manner. For in-depth analyses of LLMs, we invited medical professionals to label five additional question-wise annotations, including disease groups, clinical departments, medical disciplines, areas of competency, and question difficulty levels. Alongside the dataset, we further conducted thorough experiments with representative LLMs and QA algorithms on CMExam. The results show that GPT-4 had the best accuracy of 61.6% and a weighted F1 score of 0.617. These results highlight a great disparity when compared to human accuracy, which stood at 71.6%. For explanation tasks, while LLMs could generate relevant reasoning and demonstrate improved performance after finetuning, they fall short of a desired standard, indicating ample room for improvement. To the best of our knowledge, CMExam is the first Chinese medical exam dataset to provide comprehensive medical annotations. The experiments and findings of LLM evaluation also provide valuable insights into the challenges and potential solutions in developing Chinese medical QA systems and LLM evaluation pipelines.[The dataset and relevant code are available at <https://github.com/williamliujl/CMExam>] § INTRODUCTION Recent advancements brought by large language models (LLMs) such as T5 <cit.> and GPT-4 <cit.> have revolutionized natural language processing (NLP). However, evaluating LLMs in the medical field poses significant challenges due to the paucity of standardized and comprehensive datasets compiled from reliable and unbiased sources <cit.>. Most existing medical datasets <cit.> for language model evaluation have limitations that hinder comprehensive assessment of LLM performance. Many datasets are insufficient in terms of size and diversity, preventing a thorough evaluation of LLM capabilities. Furthermore, most datasets primarily focus on text generation tasks rather than utilizing clear choice evaluations, impeding objective and quantitative measurement of LLM performance. Additionally, a majority of these datasets <cit.> are sourced from online forums and consumer feedback, which could suffer from significant bias and error. These challenges are particularly amplified in non-English languages, such as Chinese, due to the pervasive inequality in language resources that exists in the NLP field <cit.>. Overall, due to the lack of qualified evaluation datasets, the strengths and weaknesses of LLMs in the medical field have not been fully studied. In response, we present a novel dataset called CMExam to overcome these challenges and benchmark LLM performance. CMExam is sourced from authentic medical licensing exams. It contains more than 60K questions and utilizes the multiple-choice question format to allow standardized and objective evaluations. Questions in CMExam have corresponding solution explanations that can be used to test LLM's reasoning ability in an open-ended manner. 
To offer diverse perspectives for measuring LLM performance in the medical field, we created five additional question-wise annotation dimensions based on authenticated resources and objective metrics. To reduce the substantial time and labor costs associated with annotating large-scale datasets, we propose an innovative strategy called GPT-Assisted Annotation. This approach harnessed the power of GPT-4 to automate the initial annotation process. Subsequently, the annotated data underwent a meticulous review and manual verification conducted by two medical professionals. Figure <ref> shows an example question from CMExam and the annotation process. Furthermore, we benchmark the performance of general domain LLMs and medical domain LLMs on answer prediction (multiple-choice) and answer reasoning (open-ended) tasks of CMExam. This comprehensive assessment aims to highlight the strengths and weaknesses of various approaches in Chinese medical QA, with a focus on LLMs. The main findings of this benchmark are as follows: * GPT-4 <cit.> demonstrates impressive zero-shot performance on the answer prediction task compared to other models, though still significantly lagging behind human performance. * GPT-3.5 <cit.> and GPT-4 generated reasonable answers on the answer reasoning task despite low BLEU and ROUGE scores. This is because they tended to generate short answers with reasonable quality. * Existing medical domain LLMs, such as Huatuo <cit.> and DoctorGLM <cit.>, exhibit poor zero-shot performance on both tasks, indicating their limited coverage of medical knowledge and substantial room for improvement. * Lightweight LLMs (e.g., ChatGLM <cit.>) fine-tuned on CMExam with supervision achieve performance close to GPT-3.5 on the answer prediction task. They also significantly outperform GPT-3.5 and GPT-4 on the reasoning task while having only about 3% of the parameters of GPT-3.5. In summary, this study provides valuable insights into the performance of LLMs in medical contexts from multiple perspectives, benefiting both the artificial intelligence research community and the medical research community. Our findings contribute to a deeper understanding of the capabilities and limitations of LLMs in the medical domain. Additionally, the CMExam dataset and benchmark introduced in this study serve as valuable resources to inspire researchers in exploring more effective ways of integrating medical knowledge into LLMs, ultimately enhancing their performance in medical applications. § RELATED WORK Medical Question-Answering Datasets Table <ref> presents a summary of medical QA datasets published after 2017. In particular, we focus on categorizing the data source and question types of the different datasets. Most existing medical QA datasets adopt an open-ended format, primarily because they were constructed directly from consumer questions and answers from doctors. However, multiple-choice and fill-in-the-blank questions provide a more standardized and objective evaluation, and only a small portion of medical QA datasets have adopted these formats. Notable examples include CliCR <cit.>, MEDQA <cit.>, MMLU <cit.>, MLEC-QA <cit.>, and MedMCQA <cit.>. Note that the multiple-choice questions in MultiMedQA <cit.> come from MEDQA, MedMCQA, and MMLU. Data source types generally determine the reliability of a dataset. Consumer questions collected from web sources require human review to ensure the correctness of the answers. As datasets grow in size, quality control becomes increasingly challenging <cit.>. 
In contrast, datasets built from case reports (e.g., CliCR), research literature (e.g., BioASQ <cit.>), medical books, exams, and related practices (e.g., MMLU and MedMCQA) are often more reliable. From Table <ref>, we observe that there are few datasets based on multiple-choice questions from authoritative sources. In particular, the most related dataset is the MLEC-QA dataset, which is also derived from the Chinese National Medical Licensing Examination. Despite a shared data source, our CMExam dataset stands out due to more extensive and well-designed analysis dimensions, making it highly suitable for evaluating the medical capabilities of LLMs. We have curated a comprehensive range of disease groups, clinical departments, medical disciplines, areas of competency, and question difficulty levels, enabling a thorough understanding of LLM performance from various angles. Additionally, to facilitate objective evaluations, we provide benchmark results from state-of-the-art models on our dataset. By offering a broader scope and richer analysis dimensions, CMExam provides a valuable resource for assessing the medical abilities of LLMs. Other Benchmark Datasets of Large Language Models The assessment of LLMs has witnessed significant progress, with the introduction of diverse benchmarks that evaluate different dimensions across multiple languages and models. Many datasets focus on assessing natural language understanding and reasoning of LLMs. RACE <cit.> includes English exams for Chinese middle and high school students. TriviaQA <cit.> consists of question-answer pairs authored by trivia enthusiasts. DROP <cit.> evaluates reading comprehension with discrete reasoning and arithmetic components. GLUE <cit.> encompasses nine existing NLU tasks, while SuperGLUE <cit.> extends it with a more challenging benchmark of eight language understanding tasks. Other datasets, such as HellaSwag <cit.> and WinoGrande <cit.>, focus on commonsense reasoning. TruthfulQA <cit.> includes health, law, finance, and politics to assess LLMs' ability to mimic human falsehoods, while MMCU <cit.> covers medical, legal, psychology, and education topics to evaluate multitask Chinese understanding. In addition to language understanding and reasoning, several datasets focus on specific subjects and topics, such as Python coding tasks <cit.> and middle school mathematics questions <cit.>. § THE CMEXAM DATASET Data Collection and Pre-processing CMExam comprises authentic past licensed physician exams in the Chinese National Medical Licensing Examination (CNMLE) collected from the Internet. The CNMLE, also known as the Physician Qualification Examination, is a standardized exam that assesses applicants' medical knowledge and skills in China. It includes a written test with multiple-choice questions covering various medical subjects and a clinical skills assessment simulating patient diagnosis and treatment. We excluded questions that rely on non-textual information, including questions with external information such as images and tables, and questions with keywords "graph" and "table". Duplicate questions were removed from the dataset. In total, we collected 96,161 questions, 68,119 of which were retained after pre-processing. The dataset was then randomly split into training/development/test sets with a ratio of 8:1:1. Each question in the dataset is associated with an ID, five candidate answers, and a correct answer. 85.24% of the questions have brief solution explanations, and questions in the test set contain additional annotations. 
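As a rough illustration of the pre-processing pipeline described above (keyword-based exclusion of non-textual questions, de-duplication, and the 8:1:1 split), the following Python sketch shows one possible implementation. The field names and the English keywords are readability assumptions: the actual filter operates on the corresponding Chinese terms, and the real pipeline may differ in its details.

```python
import random

# Illustrative keywords; the paper filters on the (Chinese) terms for "graph" and "table".
NON_TEXTUAL_KEYWORDS = ("graph", "table")

def preprocess(questions):
    """questions: list of dicts with (assumed) keys 'id', 'question', 'options', 'answer', 'explanation'."""
    seen, kept = set(), []
    for q in questions:
        text = q["question"]
        if any(k in text for k in NON_TEXTUAL_KEYWORDS):
            continue                      # drop questions relying on figures/tables
        if text in seen:
            continue                      # drop duplicates
        seen.add(text)
        kept.append(q)
    return kept

def split_8_1_1(questions, seed=0):
    """Random 80/10/10 split into train/dev/test."""
    rng = random.Random(seed)
    shuffled = questions[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_dev = int(0.8 * n), int(0.1 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_dev],
            shuffled[n_train + n_dev:])
```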
Data Annotation CMExam provides a comprehensive analysis of LLM performance through five additional annotation dimensions. The first dimension involves disease groups based on the 11th revision of the International Classification of Diseases (ICD-11) <cit.>. ICD-11 is a globally recognized standard classification system for documenting and categorizing health conditions, consisting of 27 major disease groups. The second dimension comprises 36 clinical departments derived from the Directory of Medical Institution Diagnostic and Therapeutic Categories (DMIDTC) [ <http://www.nhc.gov.cn/fzs/s3576/201808/345269bd570b47e7aef9a60f5d17db97.shtml>], published by the National Health Commission of China. DMIDTC is an authoritative guide used for categorizing and naming diagnostic and therapeutic subjects within healthcare institutes. In cases where the question cannot be successfully classified by ICD-11 or DMIDTC, the annotation is marked as "N/A". The third dimension refers to medical disciplines, which are categorized based on the List of Graduate Education Disciplinary Majors (2022) published by the Ministry of Education of the People's Republic of China[ <http://www.moe.gov.cn/srcsite/A22/moe_833/202209/t20220914_660828.html>]. This dimension encompasses seven categories representing study majors used in universities. The fourth dimension was created by two medical professionals within the team to assess the primary medical competency tested by each associated question. It consists of four categories. The fifth dimension represents the difficulty level of each question, determined by analyzing the correctness rate observed in human performance data collected alongside the questions. It includes five categories: easy, manageable, moderate, difficult, and extra difficult. For detailed information on these additional annotations, please refer to supplementary materials. Dataset Characteristics The CMExam dataset has several advantages over previous medical QA datasets regarding: 1)Reliability and Authenticity: CMExam is sourced exclusively from the CNMLE that undergoes rigorous review and validation processes, ensuring its accuracy and adherence to established medical standards. 2) Standardization and Comprehensiveness: CMExam includes both multiple-choice questions that ensure fair and objective evaluations of models' performance and question-wise open-ended reasoning that allows in-depth analysis and assessment of model reasoning abilities and comprehension. CMExam reflects the comprehensive coverage of medical knowledge and reasoning required in clinical practice, as it is sourced from carefully designed national medical exams. The inclusion of five additional annotation dimensions enhances the dataset's rigor and offers valuable insights for in-depth evaluation and analysis. 3) Scale: CMExam consists of over 60K high-quality questions, providing a large and reliable dataset. Data Statistics The dataset has a total of 68,119 questions, with 65,950 answers being single-choice and 2,169 being multiple-choice, with a maximum of five answer choices. Among all questions, 85.24% have associated solution explanations. Questions in CMExam have a median length of 17 (Q1: 12, Q3: 32). Regarding solution explanations, the median length is 146 tokens (Q1: 69, Q3: 247). Table <ref> shows more basic statistics of CMExam, and Figure <ref> shows additional statistics visualization. Within the test set, 4,493 questions (65.97%) have corresponding disease group annotations. 
The most prevalent disease group is Traditional Medicine Disease Patterns (TMDP), followed by Digestive System Diseases, Certain Infectious (Digest) and Parasitic Diseases (InfDis), Endocrine, Nutritional, or Metabolic Diseases (Endo), and Circulatory System Diseases (Circ). For the associated clinical department annotations, 4,965 questions (72.90%) have been assigned values. The two most frequently represented clinical departments are Internal Medicine (IM) and Traditional Chinese Medicine (TCM), with Dentistry (Dent) and Surgery (Surg) following closely. Every question in the test set has been labeled with a discipline, where Clinical Medicine (ClinMed) comprises the largest proportion. Additionally, each question has been categorized into a competency area, with Medical Fundamentals (MedFund) being the predominant category. The difficulty levels of the questions align with common exam patterns, with a greater number of easy questions and a smaller number of hard questions. § BENCHMARKS §.§ Baselines, Settings, and Metrics Model selection The LLMs we benchmarked on the CMExam can be divided into two groups based on domains: 1) General Domain LLMs: This group comprises GPT3.5/4 <cit.>, ChatGLM <cit.>, LLaMA <cit.>, Alpaca <cit.>, and Vicuna <cit.>. These models are general-purpose language models trained on a massive amount of general-purpose corpora; 2) Medical Domain LLMs: This group can be further divided into two subgroups. The first subgroup consists of representative LLMs specifically designed for the medical domain, including DoctorGLM <cit.> and Huatuo <cit.>. DoctorGLM is a healthcare-specific language model initialized with ChatGLM-6B parameters and further fine-tuned on Chinese medical dialogues extracted from ChatGPT. Huatuo, on the other hand, is a knowledge-enhanced model, which builds upon the LLaMA architecture and is additionally supervised-fine-tuned with knowledge-based instruction data harvested from the Chinese medical knowledge graph (CMeKG). The second subgroup comprises medical LLMs that were constructed through supervised fine-tuning of LLMs using the CMExam training set. This subgroup includes models fine-tuned on BERT <cit.>, RoBERTa <cit.>, Huatuo, ChatGLM, LLaMA, Alpaca, and Vicuna. Human Performance To effectively gauge the medical proficiency of LLMs, incorporating a measure of human performance into the benchmarking process is of paramount importance. Therefore, during data collection, we preserved the accuracy of human responses for each question. Human performance is estimated by computing a weighted average of response accuracy within each dimension, with weights determined by the number of respondents. This design ensures a robust comparison of LLMs' performance relative to human capabilities, particularly when larger respondent samples contribute to a question's accuracy. Experimental Setting For GPT models, we leveraged OPENAI's API to access the GPT-3.5-turbo and GPT-4-0314 models, given that their open-source variants are currently unavailable. The LLaMA, Alpaca, and Vicuna models were used in their respective 7B versions, while ChatGLM was evaluated using its publicly accessible 6B version. Additionally, we performed fine-tuning on open-source models using the CMExam dataset. We used P-tuning V2 <cit.> for ChatGLM-6B, with the length of prefix tokens set to 128, and the learning rate set to 2e-2, LoRA <cit.> for LLaMA, Alpaca, Vicuna, and Huatuo models, with the rank set to 8, alpha set to 16, and dropout at 0.05. 
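For concreteness, the LoRA setting quoted above (rank 8, alpha 16, dropout 0.05) corresponds to a configuration along the following lines with the Hugging Face PEFT library; this is a sketch, not the authors' training script, and the checkpoint name and the choice of target projection modules are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Hyperparameters quoted in the text: rank 8, alpha 16, dropout 0.05.
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed projection names for LLaMA-style models
)

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # any 7B causal LM checkpoint
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()          # only the low-rank adapters are updated during fine-tuning
```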
For BERT models, we followed the fine-tuning methods outlined in <cit.>, with batch size set to 16, learning rate set to 2e-4, hidden dropout probability set to 0.4, and maximum input length set to 192. The fine-tuning processes for all models except BERT involved a batch size of 64 and a maximum input and target length of 256. All fine-tuning was performed using NVIDIA V100 GPUs for 10 epochs. Metrics We assess model performance on multiple-choice questions using accuracy and weighted F1 score. These metrics are commonly employed in information retrieval and question-answering tasks to evaluate model performance. For the open-ended solution explanation part of CMExam, BLEU <cit.> and ROUGE <cit.> were used to evaluate the discrepancy between model-generated explanations and ground truth. §.§ Results and Analysis Overall Comparison We first assessed the performance of general domain LLMs and medical domain LLMs for answer prediction and reasoning tasks. The results are displayed in Table <ref>. For the answer prediction task, GPT-4 significantly outperforms other methods, demonstrating a zero-shot performance with an accuracy of 61.6% and an F1 score of 0.617. While a performance gap still exists when compared to human performance (which stands at 71.6% accuracy), it is noteworthy that this gap has been greatly reduced from what was observed with GPT-3.5. Among lightweight, general domain LLMs, ChatGLM outperforms LLaMA, Alpaca, and Vicuna, likely attributable to their limited coverage of the Chinese corpus. This restriction seemingly hampers their ability to provide accurate responses to CMExam queries. Furthermore, a noticeable deficiency in zero-shot performance is evident in lightweight medical domain LLMs such as Huatuo, owing to their restricted medical corpus diversity, which hampers the acquisition of broad medical knowledge and accurate interpretation of CMExam questions. Our findings suggest that finetuning models with CMExam enhances their performance. For instance, with an accuracy of 45.3%, ChatGLM-CMExam is comparable to GPT-3.5's performance, despite utilizing only about 3% of the parameters employed by GPT-3.5. It is noteworthy that encoder-only LLMs, such as BERT and RoBERTa, remain a robust baseline for answer prediction tasks. Their performance can be on par with, or even exceed, that of certain decoder-only LLMs, such as LLaMA-CMExam and Alpaca-CMExam, despite having fewer parameters. For the solution explanation task, we observe that GPT models performed poorly on the BLEU metric, likely due to their tendency to generate short explanations. However, they exhibited an advantage on the ROUGE metric. As DoctorGLM is unable to return answer options according to the prompt, we only report its performance in the solution explanation task. Through finetuning, LLMs were able to generate more reasonable explanations. For instance, ChatGLM-CMExam achieved scores of 31.10 and 18.94 on BLEU-1 and BLEU-4, respectively, and scores of 43.94, 31.48, and 29.39 on the ROUGE metrics. Results by Disease Groups Drawing upon ICD-11 annotations (26 categories), we conducted an analysis of the performance of several LLMs across various categories. To mitigate the potential impact of random variability resulting from the number of questions, we limited our analysis to categories containing more than 100 questions. According to Table <ref>, LLMs have uneven performance and significant gaps in knowledge. 
GPT-4's accuracy ranges from 74.4% for Neo to 44.3% for TCMDP; GPT-3.5's accuracy ranges from 63.9% for Neo to 31.0% for TCMDP; and ChatGLM-CMExam's accuracy ranges from 54.7% for Psy to 42.9% for RESP. Results by Clinical Departments To compare model performance regarding the clinical department dimension (36 categories), we only analyzed categories with more than 50 questions to ensure result representativeness. Results presented in Table <ref> highlight that the models show relatively high accuracy on questions associated with commonly encountered departments, such as Emergency Medicine (EM), Internal Medicine (IM), and Surgery (Surg), whereas their accuracy is lower on questions associated with rarer departments, such as Traditional Chinese Medicine (TCM). There is a marked discrepancy in the average accuracy among different departments, with the highest being 50.9% and the lowest being only 13.9%. This observation suggests there are notable variations in medical knowledge and reasoning approaches among different departments. Consequently, it may be necessary to examine specific optimization strategies for different departments. Results by Medical Disciplines Then, we evaluated LLM performance across seven medical disciplines. As depicted in Table <ref>, the performance of LLMs across disciplines such as Traditional Chinese Medicine (TCM), Traditional Chinese Pharmacy (TCPharm), and Pharmacy (Pharm) was notably subpar, with all accuracy rates falling below 42%. This pattern suggests a potential deficiency in the exposure of these models to data within these categories. Conversely, disciplines such as ClinMed and Ph&PM demonstrated higher accuracy rates, likely due to the abundance of relevant data. The observed variability in performance across different disciplines underscores the distinctiveness of data characteristics and complexities inherent to each field, thereby advocating for discipline-specific model optimizations and enhancements. Results by Competencies Evaluations based on medical competency areas aim at a higher-level understanding of model capability in solving medical problems. As indicated in Table <ref>, the lowest average accuracy across LLMs was observed within the domain of mastering Medical Fundamentals (MedFund), with a meager average score of 42.1%. This result demonstrates that these models, predominantly trained on general textual data, have inadequate exposure to medical-specific data. While fine-tuning did provide some improvement, these models could benefit from additional medical scenario data to further augment their performance. It is worth highlighting that the average accuracy in the domain of Public Health Laws and Ethics (PHL) was reasonably high, notably achieving an average of 47.6%. In addition, the LLMs showcased their proficiency in accurate disease diagnosis.
Table: Results by question difficulty (accuracy, %).
Category | GPT-4 | GPT-3.5 | ChatGLM | ChatGLM-CMExam | Average
Easy | 74.6±0.1 | 58.5±0.6 | 31.4±0.2 | 61.5±0.3 | 56.5±0.4
Manageable | 63.9±0.2 | 47.4±0.7 | 25.9±0.5 | 46.1±0.3 | 45.8±0.6
Moderate | 51.3±0.6 | 36.8±0.8 | 23.0±0.4 | 34.5±0.6 | 36.4±0.7
Difficult | 36.4±0.9 | 26.2±0.7 | 18.9±0.5 | 24.3±0.9 | 26.5±0.6
Extremely difficult | 27.2±1.0 | 21.4±2.2 | 15.8±1.0 | 12.2±1.1 | 19.1±1.1
Results by Question Difficulty To evaluate model performance in tackling questions of varying levels of difficulty, we conducted experiments regarding the question difficulty dimension, which was calculated based on human exam-taker performance. 
As shown in Table <ref>, there is an evident trend where model accuracies decrease as question complexity rises. This pattern suggests that more sophisticated questions demand an extensive knowledge base and complex reasoning, which are challenging for the LLMs, thus reflecting patterns observed in human performance. [Figure: Results stratified by question length.] Results by Question Length Finally, to investigate if model performance is associated with input lengths, we compared their performance regarding question lengths. Figure <ref> illustrates that Large Language Models (LLMs) generally show higher accuracy with problem lengths between 60 and 90. However, their performance seems to falter with problems that are either too short or overly long. Additionally, we noticed that the effect of question length varies across different LLMs. For instance, GPT models tend to incrementally improve as the problem length expands, performing optimally within the 50 to 90 range. Conversely, ChatGLM-CMExam's performance fluctuates noticeably with varying lengths, and it tends to fall short compared to GPT models when addressing longer problems. § CONCLUSION AND DISCUSSIONS In this work, we developed CMExam, a dataset sourced from the stringent Chinese National Medical Licensing Examination, featuring 60,000+ multiple-choice questions with detailed explanations. CMExam ensures reliability, validity, and adherence to medical standards. This work also demonstrates the practicality of employing GPT-4 to automate the annotation process, striking a balance between efficiency and cost-effectiveness while maintaining the desired level of annotation accuracy and reliability. Utilizing this large and reliable corpus, we tested several LLMs for answer selection and reasoning tasks. A performance gap was observed between LLMs and human experts, signaling the need for additional LLM research. CMExam's standardization and comprehensiveness also ensure objective evaluations of models while enabling in-depth analysis of their reasoning capabilities. The questions cover a wide spectrum of medical knowledge, augmented with five additional annotation dimensions for rigorous evaluation. This study aims to spur further exploration of LLMs in medicine by providing a comprehensive benchmark for their evaluation. We anticipate that CMExam will contribute significantly to future advancements of LLMs, particularly in handling medical question-answering tasks. Limitations Firstly, while CMExam is derived from meticulously designed medical examinations, our process of excluding questions requiring non-textual information may inadvertently affect the balance of the remaining questions, potentially introducing unexpected biases. It is critical to acknowledge this aspect while interpreting any findings or analyses conducted using this dataset. Furthermore, the current BLEU and ROUGE metrics primarily evaluate the explanation task, but these measures are insufficient for assessing the reasonableness of the answer. In future work, we will incorporate human evaluation to provide a more comprehensive assessment of the models. Ethics CMExam is a dataset derived from the Chinese National Medical Licensing Examination, which aligns with numerous datasets containing similar National Medical Licensing Examinations <cit.>. We have ensured adherence to applicable legal and ethical guidelines during data collection and use. 
The authenticity and accuracy of the exam questions have been thoroughly verified, providing a reliable basis for evaluating LLMs. Please note that the CMExam dataset is intended for academic and research purposes only. Any commercial use or other misuse that deviates from this purpose is expressly prohibited. We urge all users to respect this stipulation in the interest of maintaining the integrity and ethical use of this valuable resource. Societal Impacts While CMExam aims to enhance LLM evaluations in the medical field, it should not be misused for assessing individual medical competence or for patient diagnosis. Conclusions drawn from models trained on this dataset should acknowledge its limitations, especially given its single source and the specific context of the CNMLE. The use of this dataset should strictly be limited to research purposes to avoid potential misuse. § CHECKLIST * For all authors... * Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? * Did you describe the limitations of your work? See Section <ref> * Did you discuss any potential negative societal impacts of your work? See Section <ref> * Have you read the ethics review guidelines and ensured that your paper conforms to them? * If you are including theoretical results... * Did you state the full set of assumptions of all theoretical results? * Did you include complete proofs of all theoretical results? * If you ran experiments (e.g. for benchmarks)... * Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? All the datasets, benchmarks and code are available at <https://github.com/williamliujl/CMExam> * Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? Those details were listed in Section <ref> and Section <ref>. * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? * Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? We report the type of resources in Section <ref>. * If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... * If your work uses existing assets, did you cite the creators? We used code from several models in our benchmarks, all the sources were properly cited in this paper. * Did you mention the license of the assets? 
The code we used is all openly available; it was used to evaluate model performance on our new dataset. We do not claim any copyright over the code. * Did you include any new assets either in the supplemental material or as a URL? * Did you discuss whether and how consent was obtained from people whose data you're using/curating? This work was conducted on publicly available data, so this study is exempt from participant consent requirements. * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? This work did not contain any personally identifiable information or offensive content. * If you used crowdsourcing or conducted research with human subjects... * Did you include the full text of instructions given to participants and screenshots, if applicable? This work was conducted on publicly available data and did not involve participants. * Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? This dataset is based on authentic past licensed physician exams in the Chinese National Medical Licensing Examination (CNMLE) collected from the Internet; it does not pose potential participant risks. * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? This dataset was voluntarily annotated by the authors and two medical professionals. § APPENDIX §.§ Abbreviations, Full Names, and Translations of Additional Annotations This section presents four tables of additional annotations that contain translations. It showcases abbreviations, full English names, and Chinese names for each group in each annotation dimension. Table <ref> showcases all disease groups included in the 11th revision of the International Classification of Diseases (ICD-11). We present the disease groups in the same order found on the official website. Table <ref> offers a classification of 36 clinical departments derived from the Directory of Medical Institution Diagnostic and Therapeutic Categories. Table <ref> presents a breakdown of medical disciplines based on the List of Graduate Education Disciplinary Majors published by the Ministry of Education of the People's Republic of China. This categorization comprises seven study majors used in universities. Table <ref> provides all groups of areas of medical competency assessed in Chinese medical licensing exams. §.§ Instructions for pre-annotation In this section, we present the instructions used to pre-annotate CMExam test set data using GPT-4. As shown in Figure <ref>, we first constrained the output from GPT-4 to return only specific categories. We then annotated each of the five additional annotation dimensions relevant to this study with all the category information for each dimension. Next, we provided specific prompt information, and finally we filtered the GPT-4 output to improve the effectiveness of pre-annotation. During the actual annotation process, specific categories and prompt information should be filled in the grey background areas.
http://arxiv.org/abs/2306.10800v1
20230619093928
Multilevel Surrogate-based Control Variates
[ "Mohamed Reda El Amri", "Paul Mycek", "Sophie Ricci", "Matthias De Lozzo" ]
math.ST
[ "math.ST", "stat.TH" ]
Monte Carlo (MC) sampling is a popular method for estimating the statistics (e.g. expectation and variance) of a random variable. Its slow convergence has led to the emergence of advanced techniques to reduce the variance of the MC estimator for the outputs of computationally expensive solvers. The control variates (CV) method corrects the MC estimator with a term derived from auxiliary random variables that are highly correlated with the original random variable. These auxiliary variables may come from surrogate models. Such a surrogate-based CV strategy is extended here to the multilevel Monte Carlo (MLMC) framework, which relies on a sequence of levels corresponding to numerical simulators with increasing accuracy and computational cost. MLMC combines output samples obtained across levels into a telescopic sum of differences between MC estimators for successive fidelities. In this paper, we introduce three multilevel variance reduction strategies that rely on surrogate-based CV and MLMC. MLCV is presented as an extension of CV where the correction terms devised from surrogate models for simulators of different levels add up. MLMC-CV improves the MLMC estimator by using a CV based on a surrogate of the correction term at each level. Further variance reduction is achieved by using the surrogate-based CVs of all the levels in the MLMC-MLCV strategy. Alternative solutions that reduce the subset of surrogates used for the multilevel estimation are also introduced. The proposed methods are tested on a test case from the literature consisting of a spectral discretization of an uncertain 1D heat equation, where the statistic of interest is the expected value of the integrated temperature along the domain at a given time. The results are assessed in terms of the accuracy and computational cost of the multilevel estimators, depending on whether the construction of the surrogates, and its associated computational cost, precedes the evaluation of the estimator. The results show that when the lower-fidelity outputs are strongly correlated with the high-fidelity outputs, a significant variance reduction is obtained when using surrogate models for the coarser levels only. They also show that taking advantage of pre-existing surrogate models proves to be an even more efficient strategy. Keywords: Multifidelity, multilevel Monte Carlo, control variates, surrogate models, polynomial chaos, variance reduction. § INTRODUCTION In recent years, the propagation of uncertainties in numerical simulators has become an essential step in the study of physical phenomena. Therefore, uncertainty quantification (UQ) has emerged as an important element in scientific computing <cit.>. For complex nonlinear systems, the task of quantifying the effect of uncertainties on the simulator behaviour can pose major challenges, as closed-form solutions often do not exist. Sampling-based algorithms are considered the default approach when it comes to UQ for complex nonlinear simulators. The Monte Carlo (MC) sampling method is the most popular and flexible method in UQ. Here, statistical information is extracted from an n-sample of simulator responses. Due to its non-intrusive nature, MC is straightforward to implement. 
On the one hand, the convergence of an MC statistic is independent of the dimension, but on the other hand, it is very slow, namely at a rate of n^-0.5. For computationally expensive simulators, the cost of MC is often considered impractically high. In some cases, slight improvements can be obtained through the use of importance sampling <cit.>, latin hypercube sampling <cit.> or quasi-Monte Carlo (QMC) <cit.>. Another well-known approach in UQ consists in replacing the simulator by a surrogate model. Fast-to-evaluate surrogates can be used to approximate the high-fidelity model response and therefore reduce drastically the computational cost of estimating statistics. Polynomial chaos (PC) <cit.>, Gaussian process models <cit.>, radial basis functions <cit.> and (deep) neural networks <cit.> are commonly used surrogate models. The downside of the surrogate-based approach is that it introduces approximation error, which causes biases in the statistics estimators. Besides, this approximation error tends to increase in high dimension. One recent MC sampling framework based on control variates (CV) <cit.> has been extensively developed and used. In a sampling-based CV strategy, one seeks to reduce the variance of the MC estimator of a random variable, arising from the high-fidelity model, by exploiting its correlation with an auxiliary random variable that arises from low-fidelity models approximating the same input-output relationship. In classic CV theory, the mean of the auxiliary random variable is assumed to be known. Unfortunately, in many cases such an assumption is not valid. This creates the need to use another estimator for the auxiliary random variable <cit.>, which involves an additional computational cost. Recently, the adoption of surrogate models as such auxiliary random variables, in particular PC models, has been explored in <cit.>. The benefits are twofold: (1) the prediction of the surrogate models may be highly correlated with the output high-fidelity model, leading to a reduced variance of the CV estimator as compared to the MC estimation; (2) certain surrogate models provide exact statistics that are needed by the CV approach. Another variance reduction technique, called the multilevel MC (MLMC) method <cit.>, uses a hierarchy of models. Originally devised for the estimation of expected values, MLMC has since been extended to the estimation of other statistics, see, e.g., <cit.> for the estimation of variance and higher order central moments and <cit.> for the computation of Sobol' indices. Typically, multilevel methods are based on a sequence of levels which correspond to a hierarchy of simulators with increasing accuracy and cost. From a practical standpoint, the different levels often correspond to simulators with increasing mesh resolutions. This translates into a lower accuracy of the so-called coarse levels, whereas the finer levels correspond to accurate simulators. By construction, MLMC results in an unbiased estimator. It relies on a telescopic sum of terms based on the differences between the successive simulators. It debiases the MC estimator associated with the lowest-level simulator. Many other unbiased multilevel estimators have been devised in recent years. The multi-index MC estimator <cit.> is an extension of the MLMC estimator such that the telescoping sum idea is used in multiple directions. In <cit.>, the authors developed a variant of MLMC which uses QMC samples instead of independent MC samples for each level. 
Multi-fidelity MC (MFMC) estimators <cit.> are another extension that is based on the CV approach. Recently, the authors in <cit.> took a different approach by formulating a multilevel estimator as the result of a linear regression problem. In this work, we combine multilevel sampling with surrogate-based CV to define new estimators with the advantages of both. These novel multifidelity variance reduction strategies allow us to quantify efficiently the output uncertainties of simulators with limited computational budgets. A first strategy, named multilevel control variates (MLCV), uses CVs based on surrogate models of simulators corresponding to different levels. Although this strategy does not build on the MLMC approach specifically, it still exploits multilevel information through the surrogates constructed at different levels. The second strategy, named multilevel Monte Carlo with control variates (MLMC-MLCV), utilizes CV in an information fusion framework to exploit synergies between the flexible MLMC sampling and the correlation shared between the high- and low-fidelity components. The numerical results show that the unbiased MLMC-MLCV estimators can converge faster than the existing estimators. The paper is organized as follows. We introduce notations and the necessary mathematical background in <ref>. In <ref>, we briefly discuss the MLMC estimator and then define our proposed MLMC-MLCV estimator that combines multilevel sampling and surrogate-based CV. In <ref>, we conduct numerical experiments to support the theoretical results. <Ref> proposes concluding remarks. § BACKGROUND In this section, we summarize the important results of statistical estimation using control variates and we show how surrogate models can be leveraged in this setting. §.§ Notation We first introduce a few notations. Throughout the rest of this paper, the high-fidelity numerical model is abstractly represented by the deterministic mapping f: Ξ ⟶ℝ ⟼ f(), where := [ x_1 ⋯ x_d ]^⊺ is a vector of d uncertain input parameters evolving in a measurable space denoted by Ξ = Ξ_1 ×⋯×Ξ_d, with Ξ_i ⊂ℝ. Following a probabilistic approach, the d inputs are assumed to be continuous random variables X_1,…,X_d, defined on a probability space (Ω, ℱ, ℙ), with known probability density functions (PDFs) p_X_1, …,p_X_d. They are assumed to be independent, so that the PDF of the random vector := [ X_1 ⋯ X_d ]^⊺Ω→Ξ is p_ = ∏_i=1^d p_X_i. In this work, we seek an accurate estimator of some statistic θ of the output random variable Y:=f() Ω→ℝ, e.g., its expected value (θ = [Y]) or variance (θ = [Y]), at a reasonable computational cost 𝒞. §.§ Crude Monte Carlo In practice, the MC estimator θ̂ of the statistic θ is based on n observations Y^(1),…,Y^(n) defined as Y^(i)=f(^(i)) where 𝒳 = {^(1),…,^(n)} is a n-sample of , that is, a collection of independent and identically distributed random variables with the same distribution as . For instance, the sample mean and (unbiased) sample variance estimators, Ê[Y] = n^-1∑_i=1^n Y^(i) and V̂[Y] = (n-1)^-1∑_i=1^n (Y^(i) - Ê[Y])^2 are MC estimators of the expectation and variance of Y, respectively. It is well known that these estimators are unbiased, that is, [θ̂] = θ, so that the mean square error (MSE) of θ̂ reduces to the variance of the estimator: MSE(θ̂,θ) := [(θ̂ - θ)^2] = [θ̂] + ([θ̂] - θ)^2 = [θ̂]. The convergence of these MC estimators is known to be slow, so that variance reduction techniques are needed when dealing with computationally expensive simulators. 
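To fix ideas, the crude MC estimators above can be sketched in a few lines of Python. This sketch is purely illustrative and not part of the original study; the toy simulator and input sampler below are placeholders for f and p_X.

import numpy as np

def crude_mc_estimates(f, sample_inputs, n, rng):
    # Crude MC estimators of E[Y] and V[Y] for Y = f(X).
    # f             : callable mapping an input vector in Xi to a scalar output
    # sample_inputs : callable (n, rng) -> array of shape (n, d), i.i.d. draws from p_X
    X = sample_inputs(n, rng)
    Y = np.array([f(x) for x in X])      # outputs Y^(1), ..., Y^(n)
    mean_hat = Y.mean()                  # sample mean, unbiased for E[Y]
    var_hat = Y.var(ddof=1)              # unbiased sample variance of Y
    return mean_hat, var_hat

# Hypothetical usage with a toy 2-dimensional simulator:
rng = np.random.default_rng(0)
toy_f = lambda x: np.sin(x[0]) + 0.1 * x[1] ** 2
toy_sampler = lambda n, rng: rng.uniform(-1.0, 1.0, size=(n, 2))
print(crude_mc_estimates(toy_f, toy_sampler, 1000, rng))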
§.§ Control variates In this section, we present a well-known variance reduction technique using auxiliary random variables Z_1,…,Z_M as control variates. We denote by τ_1,…,τ_M the statistics of the control variates that correspond to the statistic θ that we seek to estimate. These control statistics τ_1,…,τ_M are assumed to be known exactly. Then, the CV estimator is defined as θ̂^CV(α) = θ̂ - α^⊺ (τ̂ - τ), where θ̂ and τ̂ = [ τ̂_1 ⋯ τ̂_M ]^⊺ are unbiased MC estimators of θ and τ =[ τ_1 ⋯ τ_M ]^⊺, respectively, based on a common input n-sample 𝒳, and where α∈ℝ^M is the control parameter. We note that the CV estimator is unbiased by construction, regardless of the value of the parameter α, and that its variance reads [θ̂^CV(α)] = [θ̂] + α^⊺Σα - 2 α^⊺𝐜. with 𝐜:=[τ̂,θ̂] ∈ℝ^M and Σ:=[τ̂] ∈ℝ^M × M. To fully take advantage of the control variates, the parameter α is selected so as to minimize the variance (<ref>) of the CV estimator. Assuming that the covariance matrix Σ is non-singular (and thus symmetric positive definite, SPD), the first- and second-order optimality conditions of the minimization problem imply that there exists a unique optimal solution, α^* = Σ^-1𝐜. The optimal CV estimator is thus θ̂^CV(α^*), and its variance is given by [θ̂^CV(α^*)] = (1 - R^2) [θ̂], where R^2 = [θ̂]^-1𝐜^⊺Σ^-1𝐜 is nonnegative, as Σ^-1 is SPD. Denoting by 𝐃 = (Σ) ∈ℝ^M × M the diagonal matrix consisting of the diagonal of Σ, it can be shown further that R^2 = 𝐫^⊺𝐑^-1𝐫, where 𝐫 = ([θ̂]𝐃)^-1/2𝐜 is the vector of the Pearson correlation coefficients between θ̂ and τ̂, and 𝐑 = 𝐃^-1/2Σ𝐃^-1/2 is the correlation matrix of τ̂. Thus, R^2 ∈ [0,1] corresponds to the squared coefficient of multiple correlation between θ̂ and the elements of τ̂ <cit.>. Consequently, 0 ≤[θ̂^CV(α^*)] ≤[θ̂], and we refer to R^2 as the variance reduction factor of the CV estimator. This shows that the variance of the optimal CV estimator θ̂^CV(α^*) is always reduced (or, rigorously speaking, not increased) compared to the MC estimator θ̂. Furthermore, the higher R^2, the greater the reduction in variance. The requirement that Σ be non-singular is a reasonable one. Indeed, let us suppose that Σ is singular. Then, because Σ is positive semi-definite by construction, this implies that there exists a nonzero vector η∈^M∖{0⃗} such that η^⊺Ση = [η^⊺τ̂] = 0, indicating that any one element of τ̂ can be expressed as an affine function of the others. As such, it does not bring any additional information to the CV estimator, so that at least one of the control variates can simply be discarded. A desirable property of the CV estimator is that increasing the number of control variates improves the CV estimator. Specifically, under mild assumptions, <ref> states that the variance of the CV estimator is reduced (or, rigorously speaking, not increased) when adding a new control variate. Let τ̂_+ := [ τ̂^⊺ τ̂_M+1 ]^⊺, τ_+ := [ τ^⊺ τ_M+1 ]^⊺, and define 𝐜_+ = [τ̂_+, θ̂], and Σ_+ = [τ̂_+]. We further assume that Σ and Σ_+ are non-singular and that [τ̂_M+1] > 0. Let θ̂^CV(α^*_+) := θ̂ - α^*_+^⊺ (τ̂_+ - τ_+), with α^*_+ = Σ_+^-1𝐜_+, be the optimal CV estimator based on M+1 control variates, and let R^2_+ denote its variance reduction factor. Then R^2_+ ≥ R^2. We have R^2_+ = 𝐫_+^⊺𝐑_+^-1𝐫_+, where 𝐫_+ = [ 𝐫^⊺ γ ]^⊺ and 𝐑_+ = [ 𝐑 𝐮; 𝐮^⊺ 1 ], and where 𝐮 = ([τ̂_M+1] 𝐃)^-1/2[τ̂, τ̂_M+1] and γ = ([θ̂][τ̂_M+1])^-1/2[θ̂, τ̂_M+1]. Note that 𝐑 and 𝐑_+ are both non-singular, because so are Σ and Σ_+. 
The augmented matrix 𝐑_+ may be inverted by block using the Schur complement s = 1 - 𝐮^⊺𝐑^-1𝐮 0 of the (1,1)-block 𝐑. It follows that R^2_+ = R^2 + s^-1(γ - 𝐮^⊺𝐑^-1𝐫)^2. Now, because 𝐑_+ is SPD (it is positive semidefinite by construction and non-singular by assumption), ^⊺𝐑_+ > 0 for any choice of 0. The particular choice of = [ - 𝐮^⊺𝐑^-1 1 ]^⊺ implies that s > 0, which in turn implies that R^2_+ ≥ R^2. Under the assumptions of <ref>, the equality R^2_+ = R^2 holds if and only if γ = 𝐮^⊺𝐑^-1𝐫. Straightforward from R^2_+ = R^2 + s^-1(γ - 𝐮^⊺𝐑^-1𝐫)^2 with s > 0. For the CV estimation of specific statistics, the expressions of c⃗ and Σ may be further reduced to involve statistics on Y and := [ Z_1 ⋯ Z_M ]^⊺ directly. These expressions, as well as the consequences on <ref>, are given in <ref> for the CV estimators of [Y] and [Y]. It should be noted at this point that, in practice, the optimal parameter α^* needs to be approximated. Specifically, the statistics involved in c⃗ and Σ need to be estimated. This can be done either using the same sample 𝒳 as for the CV estimator itself, or using an independent, pilot sample 𝒳'. The former strategy introduces a bias in the resulting CV estimator, while the latter guarantees unbiasedness but requires additional high-fidelity simulator evaluations, so that the former strategy is generally preferred. Besides, statistical remedies such as jackknifing, splitting or bootstrapping have been proposed to reduce or eliminate the bias introduced by the former strategy <cit.>. In both strategies, however, neither the theoretical variance reduction given by <ref> nor <ref> hold anymore. In some instances, because it only involves the control variates, Σ may be known exactly (see <ref> for an example in a multilevel setting involving PC-based control variates) or estimated accurately using many independent samples at negligible cost. Unfortunately, this is not the case for c⃗, which involves the output Y=f() of the high-fidelity simulator f. §.§ Surrogate-based control variates The efficiency of the CV approach relies on the strong correlation between Y=f() and the control variates . In a multifidelity framework, the control variates correspond to the output of low-fidelity versions of f, typically in the form of simulators with simplified physics and/or coarser discretization. In many applications, the exact statistical measures τ of such control variates 𝐙 may not be available. One way to circumvent this limitation is to use additional samples to also estimate τ. This type of estimators led to an approximate CV class of methods <cit.>, where different model management and sample allocation strategies may be used to find a suitable trade-off between the additional cost of sampling and the resulting variance reduction. Recently, an optimal strategy was proposed <cit.> leading to the so-called multilevel best linear unbiased estimator (MLBLUE), as well as a unifying framework for a large class of multilevel and multifidelity estimators. In this paper, we focus on the case where the low-fidelity models correspond to surrogate models of the high-fidelity simulator f. The main advantage is that the statistics τ may, in some instances, be directly available, or at least may be estimated arbitrarily accurately at negligible cost, so that the original CV approach described in <ref> can be used. 
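Before turning to specific surrogate models, the single-level CV estimator of the expectation described above can be sketched as follows. This is our own illustrative sketch, not code from the paper: it assumes the exact control means mu_Z are available, and it estimates the optimal parameter alpha* = Sigma^{-1} c from the same sample, which, as discussed above, introduces a (usually small) bias.

import numpy as np

def cv_mean_estimate(Y, Z, mu_Z):
    # CV estimator of E[Y] with M control variates of known means.
    # Y    : array (n,)   outputs of the high-fidelity simulator
    # Z    : array (n, M) control-variate outputs on the same inputs
    # mu_Z : array (M,)   exact means of the control variates
    theta_hat = Y.mean()                     # plain MC estimator of E[Y]
    tau_hat = Z.mean(axis=0)                 # MC estimators of the control means
    # For sample means built on the same n-sample, the 1/n factors in
    # Cov[tau_hat, theta_hat] and Var[tau_hat] cancel, so the sample
    # (co)variances of (Z, Y) can be used directly to estimate alpha*.
    Sigma = np.atleast_2d(np.cov(Z, rowvar=False))
    c = np.array([np.cov(Z[:, m], Y)[0, 1] for m in range(Z.shape[1])])
    alpha = np.linalg.solve(Sigma, c)
    return theta_hat - alpha @ (tau_hat - mu_Z)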
Using surrogate models in a CV strategy has been explored previously in <cit.>, where the available computational budget is allocated both to the construction of the surrogates and to the actual CV estimations. In <cit.>, the authors introduce an approach that optimally balances the computational effort needed to select the optimal degree of the polynomial chaos (PC) expansion used in a stochastic Galerkin CV approach. In our work, we focus on the situation in which the surrogate models are available, so that we do not consider any optimization strategy for the construction of the surrogates. Nevertheless, for the fairness of comparison, we still report the pre-processing cost of contructing the surrogate models. We now briefly describe three different surrogate models commonly used in a UQ framework, and discuss the availability of statistics of their output. §.§.§ Gaussian process modelling We assume that the simulator f is a realization of a Gaussian process (GP) F indexed by and defined by its mean function m_F and covariance kernel k_F, [F()] = m_F(), [F(),F(')] = k_F(,'), ∀,' ∈Ξ. In practice, one can parametrize the forms of m_F and k_F. For example, in the widely used ordinary GP method, a stationary GP is assumed. In this case, m_F is set as a constant m_F() = m. More importantly, it is assumed that k_F(,') = k̅_F( - '), and k_F(,) = k̅_F(0) = σ^2 is a constant. Popular forms of kernels include polynomial, exponential, Gaussian, and Matérn functions. For example, the Gaussian kernel can be written as k_F(,') = σ^2 exp(-1/2 -' ^2_𝐡), where the weighted norm is defined as -' _𝐡 = (∑_i=1^d(x_i - x_i')^2/h_i^2)^1/2 where h_1,…,h_d are correlation lengths. The hyperparameters σ and h_i can be obtained by maximum likelihood. Then, given n observations ℱ = [ f(^(1)) ⋯ f(^(n)) ]^⊺ of F at 𝒳 = [ ^(1) ⋯ ^(n) ]^⊺, the posterior F̃ of F can be defined as F̃ = [F | F(𝒳) = ℱ], whose expectation and covariance are given by m_F̃() =[F̃()] = m_F() + k_F(,𝒳 )^⊺ k_F(𝒳,𝒳)^-1(ℱ - m_F(𝒳)), k_F̃(,') = [F̃(),F̃(')] = k_F(,')- k_F(,𝒳)^⊺ k_F(𝒳,𝒳)^-1k_F(𝒳,'). Thereafter, the high-fidelity model f at will be approximated by the conditional expectation, g^GP() = m_F̃(). Thus, the expectation and variance of g^GP() are defined as [g^GP()] = ∫_Ξ m_F̃() p_() d, [g^GP()] = ∫_Ξ (m_F̃() - [g^GP()])^2 p_() d. These two statistics can be approximated empirically by taking a large sample of , as the GP model is inexpensive. Analytical formulas exist for some pairs of distributions and covariance functions (see <cit.> and <cit.>). §.§.§ Taylor polynomials We assume that the numerical simulator f is infinitely differentiable and that the moments of are finite. Under this assumption, it is possible to expand the original simulator f around the input's expected value μ_ := [] as the infinite polynomial series according to Taylor's theorem, f() = g^T_∞() = ∑_|β| ≤ p( - μ_)^β/β ! D^β f(μ_), where β∈ℕ^d, |β| = ∑_i=1^dβ_i, β! = ∏_i=1^dβ_i !, x^β = ∏_i=1^d x_i^β_i and D^β f = ∂^|β|f∂^β_1 x_1 ⋯∂^β_d x_d. Thus, the function f may be approximated by the first- and second-order Taylor polynomials, g^T_1() = f(μ_) + 𝐉_f(μ_) ( - μ_), g^T_2() = f(μ_) + 𝐉_f(μ_) ( - μ_) + 1/2 ( - μ_)^⊺𝐇_f(μ_)( - μ_), where 𝐉_f(μ_) = ∇_f(μ_)^⊺∈ℝ^1 × d and 𝐇_f(μ_) ∈ℝ^d× d are the Jacobian and Hessian matrices of f at μ_, respectively. In practical computations, the derivatives may be approximated by numerical differentiation if they are not provided. 
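When gradients of f are not provided, the first-order Taylor surrogate can be assembled with central finite differences, as in the short sketch below. This is an illustration under the assumption that f is smooth near mu_X; the step size h is a tuning choice of ours, not a value from the paper.

import numpy as np

def taylor1_surrogate(f, mu_X, h=1e-5):
    # Build g_T1(x) = f(mu_X) + J_f(mu_X) (x - mu_X), with the Jacobian
    # approximated by central finite differences of step h.
    mu_X = np.asarray(mu_X, dtype=float)
    d = mu_X.size
    f_mu = f(mu_X)
    J = np.empty(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        J[i] = (f(mu_X + e) - f(mu_X - e)) / (2.0 * h)
    g_T1 = lambda x: f_mu + J @ (np.asarray(x, dtype=float) - mu_X)
    return g_T1, f_mu, J

For independent inputs with known variances, the exact mean and variance of this linear surrogate then follow from the first-order formulas recalled next, namely f(mu_X) and the J-weighted sum of the input variances.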
The expectation and variance of g^T_1() and g^T_2() are then defined as [g^T_1()] = f(μ_), [g^T_1()] = 𝐉_f(μ_)^⊙ 2σ^2_, [g^T_2()] = f(μ_) + 1/2Tr(𝐇_f(μ_) Σ^2_), [g^T_2()] = Tr(𝐉_f(μ_)^⊺𝐉_f(μ_) Σ^2_ ) + 1/2Tr( 𝐇_f(μ_)^⊙ 2σ^2_ (σ^2_)^⊺), where for any matrix 𝐀, 𝐀^⊙ 2 denotes the element-wise square of 𝐀, Tr(𝐀) is the trace of 𝐀, σ^2_ = [ σ^2_X_1 ⋯ σ^2_X_d ]^⊺∈ℝ^d is the element-wise variance of , and Σ^2_ = (σ^2_) ∈ℝ^d × d. §.§.§ Polynomial chaos expansion We consider the truncated polynomial chaos (PC) expansion of f() <cit.>, f() ≃ g^PC_P() = ∑_k=0^P𝔤_k Ψ_k(), where the coefficients 𝔤_k are real scalars and Ψ_k are orthonormal multivariate polynomials: ∀ i,j ≥ 0, ⟨Ψ_i, Ψ_j ⟩_p_ := ∫_ΞΨ_i() Ψ_j() p_() d = [Ψ_i()Ψ_j()] = δ_ij, where δ_ij denotes the Kronecker delta. The coefficients 𝔤_k may be approximated using non-intrusive techniques such as non-intrusive (pseudo)spectral projection <cit.>, regression <cit.>, interpolation <cit.>, or from intrusive approaches such as the stochastic Galerkin method <cit.>. Assuming Ψ_0 ≡ 1, the expectation and variance of g^PC_P() are then given by [g^PC_P()] = 𝔤_0 and [g^PC_P()] = ∑_k=1^P 𝔤_k^2. § MULTILEVEL ESTIMATORS In this section, we present so-called multilevel statistical estimation techniques based on a sequence of simulators (f_ℓ)_ℓ = 0^L, with increasing accuracy and computational cost, where f_L≡ f denotes the high-fidelity numerical simulator. The levels are ordered from the coarsest (ℓ=0) to the finest (ℓ=L). We denote by Y_ℓ the random variable Y_ℓ=f_ℓ() and θ_ℓ the statistic of f_ℓ() increasingly close to θ_L≡θ. §.§ Multilevel Monte Carlo The statistic θ_L can be expressed as the telescoping sum θ_L = ∑_ℓ = 0^L T_ℓ, where T_ℓ = θ_ℓ - θ_ℓ-1, and, by convention, θ_-1 := 0. The MLMC estimator θ̂_L^MLMC of θ_L is then defined as <cit.> θ̂^MLMC = ∑_ℓ = 0^LT̂_ℓ^(ℓ), where, at each level ℓ, T̂_ℓ^(ℓ) is an unbiased MC estimator of T_ℓ, based on an input sample 𝒳^(ℓ) = {^(ℓ, i)}_i=1^n_ℓ such that the members of 𝒳^(ℓ) and 𝒳^(ℓ') are mutually independent for ℓℓ'. In many instances, T̂_ℓ^(ℓ) is actually defined as T̂_ℓ^(ℓ) = θ̂_ℓ^(ℓ) - θ̂_ℓ-1^(ℓ), where θ̂_k^(ℓ) denotes the unbiased MC estimator of θ_k based on the simulator f_k using the n_ℓ-sample 𝒳^(ℓ). For example, the MLMC estimator Ê^MLMC[Y] of the expectation [Y] is given by Ê^MLMC[Y] = Ê^(0) [Y_0] + ∑_ℓ = 1^LÊ^(ℓ) [Y_ℓ]- Ê^(ℓ)[Y_ℓ-1], where Ê^(ℓ) [Y_k] = n_ℓ^-1∑_i = 1^n_ℓ f_k(^(ℓ,i)), with k ∈{ℓ, ℓ-1}. We stress that the correction terms at each level ℓ are computed from the same input sample 𝒳^(ℓ), but using two successive simulators, f_ℓ and f_ℓ-1. Similarly, the MLMC estimator V̂^MLMC[Y] of the variance [Y] is defined as <cit.> V̂^MLMC[Y] = V̂^(0) [Y_0] + ∑_ℓ = 1^LV̂^(ℓ) [Y_ℓ] - V̂^(ℓ)[Y_ℓ-1], where V̂^(ℓ) [Y_k] = n_ℓ/n_ℓ - 1 ( Ê^(ℓ) [Y_k^2] - Ê^(ℓ) [Y_k]^2) is the single-level unbiased MC variance estimator. Owing to the independence of the estimators T̂_ℓ^(ℓ), the variance of the MLMC estimator is [θ̂^MLMC] = ∑_ℓ = 0^L[T̂_ℓ^(ℓ)]. In practice, the MLMC method relies on the allocation of the total (expected) computational cost of the MLMC estimator across the different levels, with (θ̂^MLMC) = ∑_ℓ=0^L n_ℓ (𝒞_ℓ+𝒞_ℓ-1), where 𝒞_ℓ is the (expected) cost of one evaluation of the simulator f_ℓ, with 𝒞_-1 := 0 by convention. Thus, a key aspect is played by the choice of the number of samples n_ℓ allocated to each level ℓ. The goal is to find the sample sizes n_0,…,n_L that minimize the variance of the estimator [θ̂^MLMC] given a computational budget 𝒞. 
Thus, the sample allocation problem reads *minimize_n_0,…,n_L∈ℕ^* [θ̂^MLMC] subject to (θ̂^MLMC) = 𝒞. This minimization problem has a unique solution which can be computed analytically (see, e.g., <cit.>). In practice, [θ̂^MLMC] is not known, and we instead rely on the assumption that [T̂_ℓ^(ℓ)] ≲ n_ℓ^-1𝒱_ℓ, with 𝒱_ℓ independent of n_ℓ <cit.>. This is a reasonable assumption that holds for the MLMC estimation of the expectation, variance and covariance <cit.>. Note that, for the estimation of the expectation, we have [T̂_ℓ^(ℓ)] = n_ℓ^-1𝒱_ℓ, with 𝒱_ℓ = [Y_ℓ - Y_ℓ-1] and Y_-1≡ 0. The sample allocation problem <ref> is then replaced with *minimize_n_0,…,n_L∈ℕ^* ∑_ℓ=0^L n_ℓ^-1𝒱_ℓ subject to (θ̂^MLMC) = 𝒞, which is equivalent for the expectation, and an approximation for other statistics. §.§ Multilevel surrogate-based control variate strategies In this section, we introduce various surrogate-based control variate strategies in a multilevel framework where a hierarchy of simulators (f_ℓ)_ℓ = 0^L is available. These strategies rely on using the random variables (g_ℓ())_ℓ = 0^L as control variates, where g_ℓ is a surrogate model of the simulator f_ℓ. §.§.§ Multilevel control variates (MLCV) The first strategy, referred to as multilevel control variates and hereafter abbreviated MLCV, consists of using the surrogate models at all levels to build the control variates in <ref>. Thus, τ_ℓ corresponds to the statistical measure of the random variable Z_ℓ = g_ℓ(), and τ̂_ℓ to its unbiased MC estimator. For instance, the MLCV estimator of the expectation based on an n_L-sample 𝒳^(L) = {^(1),…,^(n_L)} reads Ê^MLCV[Y](α) = Ê^(L)[Y_L] - ∑_ℓ=0^L α_ℓ( Ê^(L)[Z_ℓ] - μ_Z_ℓ), where Ê^(L)[Y_L] = n_L^-1∑_i=1^n_L f_L(^(i)), Ê^(L)[Z_ℓ] = n_L^-1∑_i=1^n_L g_ℓ(^(i)), and μ_Z_ℓ = [g_ℓ()]. Note that this approach does not build on the MLMC methodology described in <ref>, but still exploits multilevel information through the surrogates constructed at different levels. §.§.§ Multilevel Monte Carlo with control variates (MLMC-CV) The second strategy consists of improving the MLMC estimator <ref> using the surrogate-based control variates Z_0,…,Z_L. Specifically, the MLMC-CV estimator improves the MC estimation of each of the correction terms of the MLMC estimator by using a surrogate model of the corresponding correction term as a control variate. Note that the MLMF approach proposed in <cit.> is based on a similar strategy, using arbitrary low-fidelity models at level in an approximate CV setting. The MLMC-CV estimator reads θ̂^MLMC-CV(α_1, …, α_L) = ∑_ℓ=0^L T̂_ℓ^CV(α_ℓ), where T̂_ℓ^CV(α_ℓ) is the CV estimator of the multilevel correction term T_ℓ (cf. <ref>), T̂_ℓ^CV(α_ℓ) = T̂_ℓ^(ℓ) - α_ℓ(Û^(ℓ)_ℓ - U_ℓ), with Û^(ℓ)_k an unbiased MC estimator of the control variate statistic U_k = τ_k - τ_k-1, based on the same input sample 𝒳^(ℓ) as T̂_ℓ^(ℓ), again with members of 𝒳^(ℓ) and 𝒳^(ℓ') being independent for ℓℓ'. Note that, in <ref>, because a single control variate is used at each level, the definition of Û^(ℓ)_k is used in the specific case where k=ℓ. The more general definition when k and ℓ are not necessarily equal will be useful later in <ref>, where we consider multiple control variates per level. In practice, Û^(ℓ)_k may be defined as Û^(ℓ)_k = τ̂^(ℓ)_k - τ̂^(ℓ)_k-1, where τ̂^(ℓ)_k is an unbiased estimator of τ_k using the n_ℓ-sample 𝒳^(ℓ). 
The optimal value α_ℓ^* for α_ℓ is obtained individually for each ℓ=0,…,L as the optimal (single) CV parameter for T̂_ℓ^CV(α_ℓ), α_ℓ^* = [T̂_ℓ^(ℓ), Û^(ℓ)_ℓ][Û^(ℓ)_ℓ], and the resulting variance of the control variate estimator of the correction is [T̂_ℓ^CV(α_ℓ^*)] = (1 - ρ_ℓ^2) [T̂_ℓ^(ℓ)], ρ_ℓ = [T̂_ℓ^(ℓ), Û^(ℓ)_ℓ][T̂^(ℓ)_ℓ]^1/2[Û^(ℓ)_ℓ]^1/2∈ [-1, 1]. The correction estimators (T̂_ℓ^CV)_ℓ=0^L being mutually independent, the variance of the optimal MLMC-CV estimator is [θ̂^MLMC-CV(α_0^*,…,α_L^*)] = ∑_ℓ=0^L [T̂_ℓ^CV(α_ℓ^*)] = ∑_ℓ=0^L (1 - ρ_ℓ^2) [T̂_ℓ^(ℓ)] ≤∑_ℓ=0^L [T̂_ℓ^(ℓ)], indicating that the variance of the MLMC-CV estimator is smaller (as long as ρ_ℓ^2 >0) than that of the MLMC estimator; see <ref>. We remark that the variance reduction depends on the (squared) correlation between T̂_ℓ^(ℓ) and Û^(ℓ)_ℓ, which, in turn, typically relates to some measure of similarity between high-fidelity corrections Y_ℓ - Y_ℓ-1 and the corresponding CV corrections (see <ref> for the expectation and variance estimators in a single-level setting). In our surrogate-based approach, we may define control variates as Z_ℓ = g_ℓ(), where g_ℓ is a surrogate of f_ℓ, so that τ_ℓ and τ_ℓ-1 could be estimated using samples of Z_ℓ and Z_ℓ-1, respectively. It is then crucial to construct the surrogates such that their successive differences g_ℓ - g_ℓ - 1 are good approximations of f_ℓ - f_ℓ - 1, to ensure a high similarity between Y_ℓ - Y_ℓ-1 and Z_ℓ - Z_ℓ-1. However, constructing g_ℓ as a surrogate of f_ℓ does not give any guarantee on the quality of g_ℓ - g_ℓ-1 as a surrogate of f_ℓ - f_ℓ - 1. Instead, in addition to the surrogates models g_ℓ of f_ℓ, we construct surrogate models h_ℓ of the differences f_ℓ - f_ℓ-1, and we define auxiliary surrogate models g̃_ℓ-1 = g_ℓ - h_ℓ, for ℓ=1,…, L. On each level ℓ, we then use samples of Z_ℓ = g_ℓ() and Z̃_ℓ-1 = g̃_ℓ-1() for the estimation of τ_ℓ and τ_ℓ-1, respectively. As a result, the variance reduction now depends on the similarity between Y_ℓ - Y_ℓ-1 and W_ℓ := Z_ℓ - Z̃_ℓ-1 = g_ℓ() - g̃_ℓ-1() = h_ℓ(), where h_ℓ has been constructed to ensure such a similarity. Specifically, for the expectation, the MLMC-CV estimator reads Ê^MLMC-CV[Y](α_0,…,α_L) = ∑_ℓ=0^L ( Ê^(ℓ)[Y_ℓ] - Ê^(ℓ)[Y_ℓ-1] ) - α_ℓ( Ê^(ℓ)[Z_ℓ] - Ê^(ℓ)[Z̃_ℓ-1] - (μ_Z_ℓ - μ_Z̃_ℓ-1) ), with optimal values of α_ℓ given by (see <ref>, with M=1) α_ℓ^* = [Y_ℓ - Y_ℓ-1, W_ℓ][Z_ℓ - Z_ℓ-1], ∀ℓ=0,…,L, resulting in level-dependent reduction factors 1-ρ_ℓ^2, where ρ_ℓ^2 = [Y_ℓ - Y_ℓ-1, W_ℓ]^2[Y_ℓ - Y_ℓ-1][W_ℓ] is the squared correlation coefficient between Y_ℓ - Y_ℓ-1 and W_ℓ = h_ℓ(). It should be noted that, because of the linearity of the expectation operator and its MC estimator, the use of g̃_ℓ-1 is superfluous. Indeed, we may directly define the MLMC-CV estimator of the expectation as Ê^MLMC-CV[Y](α_0, …, α_L) = ∑_ℓ=0^L ( Ê^(ℓ)[Y_ℓ] - Ê^(ℓ)[Y_ℓ-1] ) - α_ℓ ( Ê^(ℓ)[W_ℓ] - μ_W_ℓ ), with μ_W_ℓ = [W_ℓ]. For the variance, the MLMC-CV estimator reads V̂^MLMC-CV[Y](α_0,…,α_L) = ∑_ℓ=0^L ( V̂^(ℓ)[Y_ℓ] - V̂^(ℓ)[Y_ℓ-1] ) - α_ℓ( V̂^(ℓ)[Z_ℓ] - V̂^(ℓ)[Z̃_ℓ-1] - (σ^2_Z_ℓ - σ^2_Z̃_ℓ-1) ), with σ^2_Z_ℓ = [Z_ℓ] and σ^2_Z̃_ℓ-1 = [Z̃_ℓ-1]. The resulting level-dependent reduction factors are then related to the correlation between (Y_ℓ - Y_ℓ-1 - [Y_ℓ - Y_ℓ-1])^2 and (h_ℓ() - [h_ℓ()])^2 (see <ref>, with M=1). Further details on the construction of h_ℓ will be given in <ref>. 
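To make the MLMC-CV construction concrete, here is a minimal sketch of the expectation estimator just described. It is illustrative only: the simulators, the surrogates h_l of the level differences (with g_0 playing that role on the coarsest level), their exact means, and the per-level sample sizes are all assumed to be provided, and the optimal alpha_l is estimated from the level sample. Setting alpha_l = 0 recovers the plain MLMC estimator.

import numpy as np

def mlmc_cv_mean(sims, controls, mu_C, sample_inputs, n_per_level, rng):
    # sims        : [f_0, ..., f_L], simulators of increasing fidelity
    # controls    : [g_0, h_1, ..., h_L], surrogate-based CVs of the level corrections
    # mu_C        : exact means [E[g_0(X)], E[h_1(X)], ..., E[h_L(X)]]
    # n_per_level : [n_0, ..., n_L], independent sample sizes per level
    L = len(sims) - 1
    estimate = 0.0
    for l in range(L + 1):
        X = sample_inputs(n_per_level[l], rng)                 # fresh sample for level l
        Y_l = np.array([sims[l](x) for x in X])
        if l == 0:
            D = Y_l                                            # correction term on level 0
        else:
            D = Y_l - np.array([sims[l - 1](x) for x in X])    # Y_l - Y_{l-1}
        W = np.array([controls[l](x) for x in X])              # CV for this correction
        alpha = np.cov(D, W)[0, 1] / np.var(W, ddof=1)         # estimated optimal alpha_l
        estimate += D.mean() - alpha * (W.mean() - mu_C[l])
    return estimate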
§.§.§ Multilevel Monte Carlo with multilevel control variates (MLMC-MLCV) We now propose to further improve the MLMC-CV estimator <ref> by combining MLMC with the MLCV approach described in <ref>, resulting in the MLMC-MLCV method. The approach consists in using, at each level ℓ, the surrogate-based control variates of all the levels ℓ'=0,…,L, rather than only using those of level ℓ, as was previously done with the MLMC-CV approach. The MLMC-MLCV estimator then reads θ̂^MLMC-MLCV (α_0, …, α_L) = ∑_ℓ = 0^LT̂_ℓ^MLCV(α_ℓ), where α_ℓ denotes the CV parameter at level ℓ, and T̂_ℓ^MLCV(α_ℓ) is the MLCV estimator of T_ℓ, T̂_ℓ^MLCV(α_ℓ) = T̂_ℓ^(ℓ) - α_ℓ^⊺ (𝐔̂_ℓ^(ℓ) - 𝐔_ℓ), with 𝐔_0 = (τ_k)_k=0^L 𝐔̂_0^(0) = (τ̂_k^(0))_k=0^L 𝐔_ℓ = (U_k)_k=1^L, for ℓ>0, 𝐔̂_ℓ^(ℓ) = (Û^(ℓ)_k)_k=1^L, for ℓ>0, and with U_k and Û^(ℓ)_k defined as in <ref>. Because each term <ref> is an unbiased (multiple) CV estimator of T_ℓ, the resulting estimator <ref> is also unbiased, and the optimal (variance minimizing) value α_ℓ^* of α_ℓ is given individually for each ℓ=0,…,L as the optimal (multiple) CV parameter for T̂_ℓ^(ℓ)(α_ℓ), α_ℓ^* = [𝐔̂_ℓ^(ℓ)]^-1[𝐔̂_ℓ^(ℓ), T̂_ℓ^(ℓ)]. The resulting variance is given by [θ̂^MLMC-MLCV(α_0^*,…,α_L^*)] = ∑_ℓ = 0^L[T̂_ℓ^MLCV(α_ℓ^*)] = ∑_ℓ = 0^L (1 - R_ℓ^2) [T̂_ℓ^(ℓ)] with R_ℓ^2 = [T̂_ℓ^(ℓ)]^-1[𝐔̂^(ℓ), T̂_ℓ^(ℓ)]^⊺α_ℓ^* ∈ [0,1]. Again, owing to the fact that R_ℓ^2 ≤ 1, the variance of the MLMC-MLCV estimator is always less than or equal to the variance of the MLMC estimator given by [θ̂_L^MLMC] = ∑_ℓ = 0^L[T̂_ℓ^(ℓ)] (see <ref>). In our surrogate-based approach, L+1 surrogate-based control variates can be used for the coarsest level ℓ=0, namely g_0, …, g_L, so that α_0∈^L+1. At correction levels ℓ >0, we can use L control variates based on g_1, …, g_L and g̃_0, …, g̃_L-1, as described in <ref>, so that α_ℓ∈^L, for ℓ>0. The MLMC-MLCV estimator of the expectation [f()] is defined by <ref>, with T̂_0^MLCV(α_0) = Ê^(0)[Y_0] - α_0^⊺ (Ê^(0)[] - μ_), T̂_ℓ^MLCV(α_ℓ) = Ê^(ℓ)[Y_ℓ] - Ê^(ℓ)[Y_ℓ-1] - α_ℓ^⊺ (Ê^(ℓ)[W⃗] - μ_W⃗), ℓ >0, where = (Z_0, …, Z_L) = (g_0(), …, g_L()), μ_ = [], W⃗ = (W_1, …, W_L) = (h_1(), … h_L()), μ_W⃗ = [W⃗]. The optimal values α_ℓ^* of the CV parameters are given by α_0^* = Σ_0^-1c⃗_0, Σ_0 = [], c⃗_0 = [, Y_0], α_ℓ^* = Σ_ℓ^-1c⃗_ℓ, Σ_ℓ = [W⃗], c⃗_ℓ = [W⃗, Y_ℓ - Y_ℓ-1], ℓ >0, resulting in R^2_ℓ = [Y_ℓ - Y_ℓ-1]^-1c⃗_ℓ^⊺Σ_ℓc⃗_ℓ, for ℓ = 0, …, L. Note that Σ_ℓ = [W⃗] is the same for all ℓ > 0. The optimal MLMC-MLCV variance estimator is derived in <ref>. In practice, Σ and c⃗_ℓ may be estimated using either a pilot sample or the same sample as for the estimation of 𝐔̂_ℓ^(ℓ) and T̂_ℓ^(ℓ). Alternatively, in the specific context of PC-based control variates, for the estimation of the expectation, a closed-form expression for Σ can be obtained. Letting Z_ℓ = g_ℓ() = ∑_k=0^P_g^ℓ𝔤_ℓ, kΨ_k() and W_ℓ = h_ℓ() = ∑_k=0^P_h^ℓ𝔥_ℓ, kΨ_k(), we have ∀ m, m' = 0,…,L, [Σ_0]_m, m' = [Z_m, Z_m'] = ∑_k=1^min(P_g^m, P_g^m')𝔤_m,k𝔤_m',k, ∀ m, m' = 1,…,L, [Σ_ℓ]_m, m' = [W_m, W_m'] = ∑_k=1^min(P_h^m, P_h^m')𝔥_m,k𝔥_m',k, ℓ >0. §.§ Practical details A summary of the methods is presented in <ref>. We remark that all the MLMC-like estimators (including the MLMC estimator), hereafter abbreviated MLMC-*, have variance ∑_ℓ = 0^L (1 - R_ℓ^2) [T̂_ℓ^(ℓ)], with * R^2_ℓ = 0 for plain MLMC; * R^2_ℓ = ρ^2_ℓ as defined in <ref> for MLMC-CV; and * R^2_ℓ defined by <ref> for MLMC-MLCV. 
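Returning to the closed-form expressions above for PC-based control variates, the exact covariance matrix Sigma reduces to inner products of PC coefficient vectors, with the constant term excluded. A small sketch, assuming each surrogate is stored as its coefficient vector on a common orthonormal basis (our notation, not the authors' code):

import numpy as np

def pc_cv_covariance(coeffs):
    # coeffs[m][k] is the coefficient of Psi_k in the m-th PC surrogate,
    # with Psi_0 = 1 and an orthonormal basis shared by all surrogates.
    M = len(coeffs)
    Sigma = np.zeros((M, M))
    for m in range(M):
        for mp in range(M):
            P = min(len(coeffs[m]), len(coeffs[mp]))
            # k = 0 carries the mean and does not contribute to (co)variances
            Sigma[m, mp] = float(np.dot(coeffs[m][1:P], coeffs[mp][1:P]))
    return Sigma

The exact means required by the CV estimators are simply the zeroth coefficients, coeffs[m][0].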
For all these methods, we will further assume that [T̂_ℓ^(ℓ)] ≲ n_ℓ^-1𝒱_ℓ (see <ref>), which implies that [T̂_ℓ^MLMC-*(α_ℓ^*)] = (1 - R_ℓ^2) [T̂_ℓ^(ℓ)] ≲ n_ℓ^-1𝒱^CV_ℓ, with 𝒱^CV_ℓ := (1 - R_ℓ^2)𝒱_ℓ. [!htbp] Summary of the methods. The first three are state-of-the art methods, the next three are the novel multilevel methods proposed in this paper, and the last two are variants of the proposed methods. Method Form of the estimator Eq. Monte Carlo (MC). θ̂ Control Variates (CV) <cit.>. θ̂ - ∑_m=1^M α_m (τ̂_m - τ_m). <ref> Multilevel Monte Carlo (MLMC) <cit.>. θ̂_0^(0) + ∑_ℓ=1^LT̂_ℓ^(ℓ). <ref> Multilevel Control Variates (MLCV). CV method where the CVs are based on surrogate models of simulators of different levels of fidelity. θ̂ - ∑_ℓ=0^L α_ℓ (τ̂_ℓ - τ_ℓ). <ref> MLMC-CV. MLMC with one CV at each correction level based on surrogate models of the simulators on the corresponding level. Corresponds to a surrogate-based MLMF <cit.> where the exact statistics of the CVs are known. θ̂_0^(0) + α_0 (τ̂_0^(0) - τ_0) + ∑_ℓ=1^L( T̂_ℓ^(ℓ) + α_ℓ (Û_ℓ^(ℓ) - U_ℓ) ). <ref>, <ref> MLMC-MLCV. MLMC-CV with the CVs based on the surrogate models g_0,…,g_L on level 0 and on h_1,…,h_L on levels ℓ>0. θ̂_0^(0) + α_0^⊺[ τ̂_0^(0) - τ_0; ⋮; τ̂_L^(0) - τ_L ] + ∑_ℓ=1^L( T̂_ℓ^(ℓ) + α_ℓ^⊺[ Û_1^(ℓ) - U_1; ⋮; Û_L^(ℓ) - U_L ]). <ref>, <ref> MLMC-CV[0]. MLMC-CV using only one CV based on the surrogate g_0 on level 0 and no CVs on levels ℓ>0 θ̂_0^(0) + α_0(τ̂_0^(0) - τ_0) + ∑_ℓ=1^LT̂_ℓ^(ℓ). MLMC-MLCV[0]. MLMC-MLCV using only CVs based on the surrogates g_0 and g_1 on level 0, and h_1 on levels ℓ>0. θ̂_0^(0) + α_0^⊺[ τ̂_0^(0) - τ_0; τ̂_1^(0) - τ_1 ] + ∑_ℓ=1^L( T̂_ℓ^(ℓ) + α_ℓ(Û_1^(ℓ) - U_1) ). In the surrogate-based variants of MLMC, the cost of evaluating the surrogate models is assumed to be negligible compared to the costs of evaluating the simulators f_0, …, f_L. Therefore, the total computational cost of the MLMC-* estimator reduces to the cost of the MLMC estimator, (θ̂^MLMC-MLCV) = (θ̂^MLMC), given by <ref>. Similarly to the case of the MLMC estimator (see, e.g., <cit.>), the optimal sample sizes (n_ℓ^*)_ℓ=0^L such that ∑_ℓ=0^L 𝒱^CV_ℓ / n_ℓ^* is minimal under a constrained computational budget of 𝒞 are given by n_ℓ^* = 𝒞𝒮_L√(𝒱^CV_ℓ𝒞_ℓ + 𝒞_ℓ-1), 𝒮_ℓ := ∑_ℓ'=0^ℓ√((𝒞_ℓ' + 𝒞_ℓ'-1) 𝒱^CV_ℓ'), so that ∑_ℓ=0^L 𝒱^CV_ℓ / n_ℓ^* = S_L^2 / 𝒞. As a consequence, [θ̂_L^MLMC-MLCV(α_ℓ^* )] ≲S_L^2𝒞, with an equality between the left- and right-hand sides for the expectation estimators. In practice, 𝒱^CV_ℓ is not known and must be estimated for each level. In this work, we consider the sequential algorithm proposed in <cit.> for the MLMC. The algorithm starts from an initial, small number of samples n_ℓ^init on each level. Then, it selects the optimal level on which to increase the sample size by an inflation factor r_ℓ > 1, i.e. the level ℓ^* on which the reduction in total variance relative to the additional computational effort achieved by inflating the sample size by r_ℓ^* is maximal. § NUMERICAL EXPERIMENTS We demonstrate the value of our MLMC-MLCV method on the uncertain heat equation problem proposed in <cit.> and summarized in <ref>. The surrogate models used for the control variates are described in <ref>, and the results from numerical experiments are presented and discussed in <ref>. 
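As a brief aside before the test case, the optimal allocation rule n_l^* given above lends itself to a direct implementation. The sketch below is illustrative: it takes the per-level variances V_l^CV and unit costs C_l as inputs (which in practice must be estimated, e.g. by the sequential algorithm just described) and returns real-valued sample sizes to be rounded.

import numpy as np

def optimal_allocation(V_cv, C, budget):
    # Minimize sum_l V_cv[l] / n_l subject to sum_l n_l * (C_l + C_{l-1}) = budget,
    # with the convention C_{-1} = 0.
    V_cv = np.asarray(V_cv, dtype=float)
    C = np.asarray(C, dtype=float)
    level_cost = C + np.concatenate(([0.0], C[:-1]))   # C_l + C_{l-1}
    S_L = np.sum(np.sqrt(level_cost * V_cv))
    n_star = (budget / S_L) * np.sqrt(V_cv / level_cost)
    variance_bound = S_L ** 2 / budget                  # resulting sum_l V_cv[l] / n_l^*
    return n_star, variance_bound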
§.§ Problem description We consider the partial differential equation describing the time-evolution of the temperature u(x,t;) in a 1D rod of unit length over the time interval [0,T], with uncertain (random) initial data u_0 and thermal diffusivity ν, ∂ u(x,t;)∂ t = ν() ∂^2 u(x,t;)∂ x^2, x ∈𝒟 := (0,1), t ∈ [0,T], u(x,0;) = u_0(x;), x ∈𝒟, u(0,t; ) = u(1,t; ) = 0, t ∈ [0,T], where Ω→Ξ is a random vector modelling the uncertainty in the input parameters, and where ν() > 0 almost surely. The solution of <ref> may be expressed as u(x,t;) = ∑_k=1^∞ a_k() exp(-ν() k^2π^2 t) sin(kπ x) with a_k() = 2 ∫_𝒟 u_0(x; ) sin(kπ x) dx . The initial condition is chosen to have the same prescribed form as in <cit.>. Specifically, we consider u_0(x; ) = 𝒢() ℱ_1(x) + ℐ() ℱ_2(x) with ℱ_1(x) = sin(π x), ℱ_2(x) = sin(2 π x)+sin(3 π x)+50(sin(9 π x) + sin(21 π x)), ℐ() = 7/2[sin(X_1)+7sin(X_2)^2+0.1X_3^4sin(X_1) ], 𝒢() = 50 (4|X_5| - 1)(4|X_6| - 1)(4|X_7| - 1), which allows to control the spectral content of the solution u. Furthermore, as in <cit.>, the diffusion coefficient is modelled by ν() = X_4. The random output variable of interest is defined as the integral of the temperature along the rod at final time T, ℳ() = ∫_𝒟 u(x,T; ) dx = ∑_k=1^∞ a_k() ∫_𝒟exp(-ν() k^2π^2 T) sin(kπ x) dx = 𝒢() ℋ_1() + ℐ() [ ℋ_3() + 50 ℋ_9() + 50 ℋ_21() ], where ℋ_k() = 2/kπexp(-ν() k^2π^2 T). In this experiment, we seek to estimate the expectation [ℳ()], for a given uncertain setting. Consistently with <cit.>, we consider the random variables X_1,…,X_7 to be independent and distributed as X_1, X_2, X_3 ∼𝒰[-π,π], X_4 ∼𝒰[ν_min, ν_max], X_5, X_6, X_7 ∼𝒰[-1,1]. The expected value [ℳ()] is then given by [ℳ()] = 50 H_1 + 49/4(H_3 + 50 H_9 + 50 H_21), where H_k = [ℋ_k()] = 2/k^3 π^3 Texp(-ν_mink^2π^2 T) - exp(-ν_maxk^2π^2 T)/ν_max - ν_min. Finally, we set T=0.5, ν_min = 0.001 and ν_max = 0.009, resulting in [ℳ()] ≈ 41.98. Numerically, ℳ() is approximated by truncating the Fourier expansion in <ref> to K < ∞ modes and by approximating the integrals in <ref> by a trapezoidal quadrature rule with equispaced nodes in [0,1]. The multilevel hierarchy of simulators {f_ℓ}_ℓ=0^L is then defined according to the number of quadrature nodes N_ℓ used for the approximation at level ℓ. Specifically, ℳ() is approximated at level ℓ by Y_ℓ = f_ℓ() = ∑_k=1^K A_k^ℓ() B_k^ℓ(), with A_k^ℓ() = 2 ∑_i=1^N_ℓ w_i u_0(x_i; ) sin(kπ x_i), B_k^ℓ() = exp(-ν() k^2π^2 T) ∑_i=1^N_ℓ w_i sin(kπ x_i), where {(x_i, w_i)}_i=1^N_ℓ are the pairs of quadrature nodes and associated weights on level ℓ. It is then natural to assume that the computational cost 𝒞_ℓ of an evaluation of f_ℓ is 𝒪(KN_ℓ). The statistic of interest is thus θ = θ_L = [f_L()], whose MC estimator θ̂_L = n_L^-1∑_i=1^n_L f_L(^(L,i)) will represent the baseline estimator for our experiments. Besides, the quality of all the presented estimators will be assessed in terms of their root mean square error (RMSE) w.r.t. the exact statistic [ℳ()] given by <ref>. In the following experiments, we set the number of quadrature nodes K=21 and the number of levels to 4 (i.e. L=3). Furthermore, we set N_ℓ=120×2^L-ℓ so that evaluating f_ℓ is twice as expensive as evaluating f_ℓ-1. Table <ref> summarizes the number of quadrature nodes and the evaluation cost per level. Note that the costs are normalized so that 𝒞_3 = 1. §.§ Surrogate models We will mostly use PC models (see <ref>) for the surrogate-based CV estimators. 
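Before turning to the surrogate construction, the level-l simulator f_l just defined can be transcribed compactly as follows. This is a sketch of our own, following the expressions above for u_0, I, G and the truncated Fourier/trapezoidal approximation; the node and weight conventions may differ in minor ways from the authors' implementation.

import numpy as np

def make_level_simulator(N_l, K=21, T=0.5):
    # Level-l approximation of M(X): K Fourier modes, trapezoidal rule on N_l nodes in [0, 1].
    x = np.linspace(0.0, 1.0, N_l)
    w = np.full(N_l, x[1] - x[0])
    w[0] *= 0.5
    w[-1] *= 0.5                                          # trapezoidal weights
    k = np.arange(1, K + 1)
    S = np.sin(np.pi * np.outer(k, x))                    # sin(k pi x_i), shape (K, N_l)

    def u0(xx, X):
        F1 = np.sin(np.pi * xx)
        F2 = (np.sin(2 * np.pi * xx) + np.sin(3 * np.pi * xx)
              + 50 * (np.sin(9 * np.pi * xx) + np.sin(21 * np.pi * xx)))
        I = 3.5 * (np.sin(X[0]) + 7 * np.sin(X[1]) ** 2 + 0.1 * X[2] ** 4 * np.sin(X[0]))
        G = 50 * (4 * np.abs(X[4]) - 1) * (4 * np.abs(X[5]) - 1) * (4 * np.abs(X[6]) - 1)
        return G * F1 + I * F2

    def f_l(X):
        nu = X[3]                                         # thermal diffusivity nu(X) = X_4
        A = 2.0 * (S * (w * u0(x, X))).sum(axis=1)        # quadrature of u_0(x) sin(k pi x)
        B = np.exp(-nu * k ** 2 * np.pi ** 2 * T) * (S * w).sum(axis=1)
        return float(A @ B)

    return f_l

# Hierarchy used in the experiments, with doubling cost per level: N_l = 120 * 2**(L - l)
levels = [make_level_simulator(120 * 2 ** (3 - l)) for l in range(4)]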
Constructing high-quality surrogates in 7 dimensions can be hard, especially in the presence of non-linearities and with a limited sample size. To avoid overfitting, we resort to the least angle regression (LARS) procedure <cit.>, which is a model-selection regression method that promotes sparsity. More precisely, we employ the basis-adaptive hybrid LARS algorithm proposed by <cit.> for the selection of sparse PC bases. For a given design of experiment (DoE) for the PC surrogate construction, this algorithm applies the LARS procedure on candidate PC bases 𝒜_p of increasing total polynomial degree p = 1, …,p_max, resulting for each p in the selection of a limited number |𝒜̃_p| < |𝒜_p| of basis polynomial functions. For each p, a PC surrogate is constructed by classical least-squares regression on the corresponding reduced (or active) PC basis 𝒜̃_p, and its quality is estimated using a corrected leave-one-out cross-validation procedure <cit.>. The best surrogate according to this quality measure is eventually retained, and the associated reduced PC basis is denoted by 𝒜̃_p^*. In our experiments, we set p_max = 16 and the construction budget to 400 times the evaluation cost of f_3, i.e. 𝒞^DoE = 400. This budget is distributed equally among the different levels, so that the associated evaluation cost on each level corresponds to the cost of 100 f_3 evaluations, i.e. n_ℓ^DoE𝒞_ℓ = 100, as reported in <ref>. Once constructed, the quality of a surrogate g of f is assessed in terms of the Q^2 measure, Q^2(g, f)=1-[(g()-f())^2]/[f()], estimated using a test sample of size n_test=10000, which is more robust than the corrected leave-one-out measure used for the model selection. The higher the Q^2 value, the higher-quality the associated surrogate. For the MLCV estimator, we learn a PC model g_ℓ of f_ℓ from a training DoE of size n_ℓ^DoE generated by latin hypercube sampling (LHS) improved by simulated annealing. The training DoE samples for the surrogate construction are different for each level and independent. <Ref> summarizes the properties of the different PC models. Except for g_3, which is built with only 100 points, all the PC models have a good Q^2, greater than 0.8. We observe that the basis-adaptive LARS algorithm has selected a decreasing polynomial degree p^* with level ℓ, while retaining only a limited number |𝒜̃_p^*| of polynomials in these reduced bases. Thus, although g_1 and g_2 have distinct degrees, the sizes of the associated reduced bases are similar. For the MLMC-based estimators, we learn a PC model g_ℓ of f_ℓ for ℓ=0,…,L and a PC model h_ℓ of f_ℓ-f_ℓ-1 for ℓ = 1,…,L. In practice, training points used for the construction of g_0, …, g_L may be reused for the construction of h_1, …, h_L using nested DoEs. However, it should be noted that the generation of nested LHS DoEs is not as straightforward as for purely random DoEs. While using nested random DoEs is a perfectly valid strategy, we opt for an alternative choice based on LHS. Specifically, we first generate a DoE 𝒳^(0)_DoE of size n_0^DoE using LHS improved by simulated annealing. Then, for ℓ=1, …, L, we sequentially extract a DoE 𝒳^(ℓ)_DoE⊂𝒳^(ℓ-1)_DoE of size n_ℓ^DoE. The subset 𝒳^(ℓ)_DoE is selected such that it has minimal centered L^2 discrepancy among a pool of 1 million random candidate subsets. <Ref> summarizes the properties of these different PC models. The PC models again have a quality measure higher than 0.8, except for g_3 and h_3, which were constructed from only 100 evaluations. 
Note that the values of p^*, |𝒜̃_p^*| and Q^2 are of the same order of magnitude as those of the PC surrogate models reported in <ref>. Hereafter, the CV and MLCV estimators use the PC models of <ref>, while the MLMC-based estimators use the PC models presented in <ref>. §.§ Results First, in <ref>, we illustrate the use of one or several control variates to reduce the variance of a single-level MC estimator. Then, we compare the MLCV and MLMC-MLCV approaches with the MC and MLMC estimators in <ref>, and we discuss variants of the MLMC-CV and MLMC-MLCV approaches, considering only a limited subset of the surrogate models, in <ref>. We conclude the analysis by reporting the estimation budget allocation across levels resulting from the various MLMC-based methods in <ref>. In practice, all the MLMC-based estimators are built using <ref>. Unless stated otherwise, the parameters are set to n_ℓ^init=30 and r_ℓ = 1.1 for ℓ=0,…,3. The quality of the various estimators will be assessed in terms of their RMSE w.r.t. [ℳ()], estimated from 500 repetitions of the experiment. §.§.§ Single-level MC and control variates In this first part, we consider only the finest simulator f_3 and try to reduce the variance of the MC estimator of [f_3()] by means of surrogate-based CVs. For that purpose, we consider a first-order Taylor polynomial expansion g^T_1_3 around μ_ = 𝔼[] (see <ref>) and the PC model g^PC_3 described in <ref>. <Ref> shows that g^PC_3() is well correlated with Y_3=f_3(), with a Pearson coefficient of 0.8, while g^T_1_3 is less so, with a coefficient of 0.57, which may be explained by the strong non-linearity of f_3. According to <ref>, these correlation coefficients lead to a theoretical variance reduction factor R^2 of 64% when using g^PC_3() as a single CV, and of about 32% when using g^T_1_3() as a single CV. Using both CVs results in a reduction factor of 65%. This minor increase in R^2 can be explained by the modest correlation coefficient of 0.64 between g^T_1_3() and g^PC_3(). Note that a reduction factor of R^2 in the variance corresponds to a reduction factor of 1-√(1-R^2) in the standard deviation. These theoretical expectations are reflected in <ref> with an RMSE reduction of about 20% when using g^T_1_3 and 40% when using g^PC_3 alone or jointly with g^T_1_3. This figure confirms that g^PC_3 provides a better CV than g^T_1_3, reducing the RMSE of the MC estimator twice as much, regardless of the computational budget. However, the construction cost of the surrogate is not the same. While constructing g^T_1_3 requires only one evaluation of f_3 and its Jacobian matrix, namely at μ_, the construction of g^PC_3 involved 100 f_3 evaluations. In the case where the surrogate is built specifically for the estimation of the statistic, the real estimation cost is higher as it includes this construction cost. <Ref> illustrates this difference by including the surrogate construction cost in the total evaluation cost. As a result, a significant offset appears when using g^PC_3, since part of the computational budget (namely 100) is used for the surrogate construction and thus not for the estimation. Specifically, the RMSEs of the CV estimators using g^PC_3 get below that of the CV estimator using only g^T_1_3 for a budget of 300 f_3 evaluations. For budgets under 200, using only g^T_1_3 is preferable, even without the analytical gradient available, which would then incur a construction cost of 8 f_3 evaluations to approximate the gradient using finite differences. 
This figure also illustrates <ref>, that is, increasing the number of control variates improves (rigorously speaking, does not deteriorate) the variance of the CV estimator. §.§.§ MLCV and MLMC-MLCV <Ref> compares the MLCV and MLMC-MLCV estimators proposed in <ref> with the classical MC and MLMC estimators. This comparison is repeated for different budgets 𝒞, expressed in terms of the equivalent number of f_3 evaluations. Note that the MC and MLCV estimators only use evaluations of the finest simulator f_3, so that 𝒞 = n_3, while MLMC-based estimators use all the simulators, so that 𝒞 is given in terms of the MLMC cost <ref>, indeed corresponding to the equivalent number of f_3 evaluations, since the costs are normalized such that 𝒞_3 = 1. From here on, only PC-based surrogates will be used, so that we omit the superscript “PC” in the notations of the surrogate models. <Ref> shows that, for a given sampling budget 𝒞, the MC estimator is the least accurate. This can be explained by the fact that it only has access to the finest simulator, f_3, whose cost only allows a limited number of evaluations. On the contrary, the MLMC estimator spreads this sampling budget over the four simulators. Ideally, the optimal sample allocation of MLMC <ref> results in many coarse, cheap evaluations and few fine, expensive evaluations. This is typically the case when the outputs of the simulators are highly correlated and the associated computational cost grows exponentially. <Ref> shows that the first assumption holds. The second assumption also holds since 𝒞_ℓ = 𝒪(K N_ℓ) = 𝒪(N_ℓ), K being fixed, i.e., the evaluation cost grows linearly with the number of quadrature nodes. As a consequence, the MLMC estimator has a lower RMSE than the standard MC estimator. For this experiment, the MLCV estimator is more accurate than the MLMC estimator. Based on one control variate per level ℓ, based on the PC model g_ℓ of f_ℓ from <ref>, this estimator dedicates all the sampling budget to the finest simulator f_3, and uses these control variates based on g_0, g_1, g_2 and g_3 to reduce the variance of the MC estimator at no extra cost, as the evaluation cost of a PC model g_ℓ is negligible compared to the evaluation cost of f_3. This MLCV technique works particularly well in this case because the control variates are highly correlated to f_3. Indeed, <ref> shows that their Pearson coefficients are at least 0.8, which guarantees a theoretical reduction of at least 94% in the variance of the MC estimator (i.e. a reduction of at least about 76% in standard deviation), corresponding to the variance reduction when using a single control variate based on g_3. In fact, the variance reduction factor R^2 when using all the surrogates is only slightly higher, namely R^2 ≈ 95% corresponding to a standard reduction factor of around 78%, which is reflected in <ref>. Combining the MLMC and MLCV techniques allows the resulting MLMC-MLCV estimator to reduce the variance even more significantly. This can be explained by the very high correlation between Y_0, Y_1, Y_2 and Y_3 on the one hand, which ensures the good performance of the MLMC approach, and by the strong correlation between the control variates on the other hand, ensuring their good performance in combination with the MLMC technique. In particular, <ref> shows that g_0(), g_1() and g_2() are highly correlated with Y_3, with Pearson correlation coefficients greater than 0.9, while g_3() is poorly correlated with Y_3, with a correlation coefficient of 0.48. 
Besides, h_1(), h_2() and h_3() are well-correlated with Y_1-Y_0, Y_2-Y_1 and Y_3-Y_2, respectively, with correlation coefficients greater than 0.8. Further insights regarding the expected variance reduction of the MLMC-MLCV estimator can be drawn from <ref>, which reports the variance reduction factor R^2_ℓ w.r.t. pure MLMC on each level defined in <ref>, as well as the quantity 𝒮_ℓ^2 defined in <ref>. In particular, S_L^2/𝒞 corresponds to the variance of the MLMC-MLCV estimator of the expectation with optimal sample allocation (see <ref>), so that the ratio between the variance of MLMC-MLCV estimator and that of the MC estimator is 𝒮_L^2 / (𝒞_L [Y_L]) (here with 𝒞_L=𝒞_3=1). Furthermore, the variance of the different MLMC-based estimators per unit cost can be compared directly through S_L^2. We observe that the variance of the MLMC-MLCV estimator is significantly reduced compared to that of the MLMC estimator, by about 98.6%, resulting in a reduction in standard deviation of about 88%. Again, this is well reflected in <ref>. These first results from <ref> highlight the interest of multilevel control variates, be it with the MLMC-MLCV estimator or simply with the MLCV one. These results suppose that the PC models are not built specifically for the study, so that the budget does not include the number of f_3-equivalent simulations required for their construction. <Ref> illustrates the alternative case where the cost of the surrogate based CV estimators includes the cost of constructing the surrogates. As a result, an additional budget of 𝒞^DoE=400 (see <ref>), is allocated to the construction of the PC models. As was the case for the single-level PC-based CV estimators, this results in an offset of 400 in the total cost of the MLCV and MLMC-MLCV estimators. The effect is especially noticeable when the construction budget larger than the estimation budget, i.e. for 𝒞∈{100; 300}. For a sampling cost of 100, i.e. a total evaluation cost of 500, the MLCV estimator is still slightly more accurate than the MLMC estimator, while the MLMC-MLCV estimator still largely outperforms both. §.§.§ Variants of MLMC-based CV estimators The discussion about the surrogate construction budget prompts us to investigate variants of MLMC-MLCV using fewer surrogates, in order to reduce the total evaluation cost for limited budgets. The MLMC-CV estimator is described in <ref>, while the variants MLMC-CV[0] and MLMC-MLCV[0] of MLMC-CV and MLMC-MLCV are introduced in <ref>. The MLMC-CV[0] estimator only uses a control variate based on g_0, so that the construction cost drops to 𝒞^DoE=100, while the MLMC-MLCV[0] estimator uses control variates based on g_0, g_1 and h_1, so that the construction cost drops to 𝒞^DoE=200. The MLMC-CV estimator uses control variates based on surrogates at all levels, so that the construction cost remains 𝒞^DoE=400. <Ref> shows that MLMC-CV[0] has much higher RMSE than the other variants, resulting from the fact that it only reduces the variance associated with the coarsest level of the MLMC estimator. This behavior is consistent with the quantities of <ref>. In particular, the value of 𝒮_L^2 is about 10 times higher than for the other variants, accounting for its RMSE being about 3 times higher than for the other variants. The remaining variants have similar performances regardless of the estimation budget, which is consistent with the S_L^2 values given in <ref>. 
On the other hand, when considering the construction cost of the surrogates, <ref> shows that MLMC-MLCV[0] performs best, as it uses only surrogate models related to the two coarsest levels, namely g_0, g_1 and h_1, so that the construction cost is reduced. Furthermore, these surrogates have excellent Q^2, and they are such that g_0() is highly correlated with Y_0, g_1() is highly correlated with Y_1, and h_1() is highly correlated with Y_1-Y_0, Y_2-Y_1 and Y_3-Y_2. Namely, the associated Pearson correlation coefficients reported in <ref> are all at least 0.94. Therefore, should one have to build the surrogates specifically for the CV estimation of a statistic, it is more advantageous to adopt the MLMC-MLCV[0] variant over the others. In our case, the construction budget is divided by two compared to MLMC-MLCV and MLMC-CV, for a similar performance in terms of RMSE. §.§.§ Budget allocation Lastly, <ref> shows the number n_ℓ of evaluations for each of the L+1 correction levels of the MLMC-* telescopic sum. Precisely, n_0 is the number of evaluations of f_0, while n_ℓ is the number of evaluations of f_ℓ-f_ℓ-1, for ℓ >0. We observe a typical sample allocation for MLMC-like estimators in ideal cases, that is, many coarse evaluations, and fewer and fewer fine evaluations. The MLMC-CV[0] estimator slightly deviates from this pattern, with n_1 ≈ n_0, which can be explained by the fact that 𝒱_1 ≈ 590 is of the same order of magnitude as (1-R_0^2) 𝒱_0 ≈ 176. <Ref> depicts the share of overall sampling cost associated with the different correction levels. Specifically, n_0𝒞_0𝒞^-1 is the share of sampling budget dedicated to evaluating f_0, and n_ℓ(𝒞_ℓ + 𝒞_ℓ-1)𝒞^-1 is the share dedicated to evaluating f_ℓ - f_ℓ-1, for ℓ>0. We see that, except for MLMC-CV[0] for the reasons explained above, most of the sampling budget (around 70%) is allocated to the coarsest level, and most of the remaining budget is dedicated to correction level ℓ=1. We note that the CV-based MLMC estimators dedicate slightly more budget to levels 2 and 3 than for the standard MLMC estimator. The sampling budget shares of <ref> are consistent with the theoretical optimal shared reported in <ref>, including those of the MLMC-CV[0] estimator. This suggests that the sample allocation resulting from <ref> seems to converge to the theoretical optimal sample allocation <ref>. § CONCLUSIONS In this paper, we proposed multilevel variance reduction strategies relying on surrogate-based control variates. On the one hand, using specific surrogate models, such as polynomial chaos expansions or Taylor polynomial expansions, allows to directly access exact statistics (mean, variance) of the control variates. Even if these exactly statistics are not directly accessible (e.g., when using GPs), they can be estimated very accurately at negligible cost. This contrasts with typical control variates relying on lower-fidelity models based on models/simulators with degraded physics or coarser discretizations, for which approximate control variate strategies need to be devised, resulting in a lower variance reduction. On the other hand, when multiple levels of fidelities (e.g., based on the discretization) with a clear cost/accuracy hierarchy are available, the surrogate-based control variate approach can be efficiently combined with multilevel strategies. The first strategy, MLCV, simply consists in using multiple control variates based on surrogate models of the simulators corresponding to the different levels. 
The main advantage is that the surrogate models corresponding to coarse levels may be constructed using larger sample sizes than for the finest level, resulting in a more accurate surrogate model (i.e., with lower model error). This strategy thus leads to a greater variance reduction compared to only using one surrogate model based on the finest level. This is supported by the numerical experiments we conducted, as well as by the theoretical variance reduction provided by <ref>. The second strategy, MLMC-MLCV, allows to further improve the variance reduction by combining the surrogate-based control variates with an MLMC strategy. The most appropriate way to construct and utilize the surrogate models is, however, not straightforward, and was discussed in detail in <ref>. The additional variance reduction as compared to plain MLMC is demonstrated in our numerical experiments and supported by the theoretical variance reduction factor <ref> derived in <ref>. The construction cost of the surrogate models was discussed from two perspectives. When the surrogate models are constructed for the sole purpose of serving for the control variate estimation, then the cost of their construction must be taken into account for fair comparison with other approaches. In such a case, it may not be optimal, in terms of cost/accuracy tradeoff, to construct surrogate models on all levels, especially when only a limited budget is available. In particular, using a subset of surrogate models based on the coarser levels may already lead to considerable variance reduction, provided that the outputs of the coarse surrogate models are sufficiently correlated with that of the high-fidelity simulator. Then, considering additional surrogate models on finer levels might only result in marginal improvement, at the expense of a significant computational cost. On the contrary, if the surrogate models have already been constructed for other purposes, and are, in some sense, available “for free,” then their construction cost need not be considered, and the entire set of surrogate models may then be used. From the former perspective, that is when the cost of the surrogate construction is considered as part of the estimation cost, one may devise more involved strategies seeking to optimize the tradeoff between the construction cost and the model error, directly impacting the projected variance reduction. For instance, in the context of polynomial chaos surrogate models, the truncation strategy may be controlled to this end, as proposed in a stochastic Galerkin framework in <cit.>, where the total polynomial degree is optimized alongside the sample size and the CV parameter to minimize the PC-based CV estimator's variance under a cost constraint. Another avenue to improve the proposed approach would be to replace the MLMC part of the MLMC-MLCV strategy by a more efficient multilevel approach, such as the multilevel best linear unbiased estimator (MLBLUE) <cit.>. In particular, an MLBLUE-MLCV strategy should be more efficient when a collection of low-fidelity simulators (e.g., with degraded physics) with no clear cost/accuracy hierarchy is available. Finally, although the proposed approaches apply to the estimation of arbitrary statistics, they were only tested here on the estimation of expected values. The theoretical and algorithmic ingredients for the estimation of variances are, however, described in this paper and may be tested in follow-up investigations and numerical experiments. 
Specifically, the multifidelity estimation of variance-based sensitivity indices is of particular interest to our team. § ACKNOWLEDGMENT We wish to acknowledge the PIA framework (CGI, ANR) and the industrial members of the IRT Saint Exupéry project R-Evol: Airbus, Liebherr, Altran Technologies, Capgemini DEMS France, CENAERO and Cerfacs for their support, financial funding and own knowledge. § OPTIMAL PARAMETER FOR EXPECTATION AND VARIANCE CV ESTIMATORS We derive here the expression of the optimal CV parameter α^* for the CV estimators of the expectation and of the variance. We define Y=f(), Z_m=g_m() and =(Z_m)_m=1^M. Then, given an input n-sample {^(i)}_i=1^n, we define Y^(i)=f(^(i)), Z_m^(i)=g_m(^(i)) and ^(i)=(Z_m^(i))_m=1^M. §.§ CV estimator of the expectation For the expectation, we have Σ=[Ê[]] and 𝐜 = [Ê[Y], Ê[]], i.e. [Σ]_m,m' = [Ê[Z_m], Ê[Z_m']] = n^-2∑_i,j =1^n[Z_m^(i),Z_m'^(j)] = n^-1[Z_m,Z_m'], [𝐜]_m = [Ê[Y], Ê[Z_m]] = n^-2∑_i,j =1^n[Y^(i),Z_m^(j)] = n^-1[Y,Z_m], so that Σ = n^-1[], 𝐜 = n^-1[Y,], and, eventually, α^* = [𝐙]^-1[Y,𝐙]. Furthermore, we have [Ê[Y]] = n^-1[Y], so that R^2 = [Y, ]^⊺[]^-1[Y, ][Y] = r⃗_Y,^⊺R_^-1r⃗_Y,, where r⃗_Y, = ([Y]D_)^-1/2[Y,], R_ = D_^-1/2[] D_^-1/2, D_ = ([]). Thus, R^2 ∈ [0,1] corresponds to the squared coefficient of multiple correlation between Y and the control variates Z_1, …, Z_M. §.§ CV estimator of the variance Similarly, for the variance, we have Σ=[V̂[]] and 𝐜 = [V̂[Y], V̂[]]. We start by deriving useful identities. In what follows, for any random variable A, we denote the corresponding centered variable by A̅ := A - [A]. First, we remark that V̂[A] = V̂[A̅], so that, for any two random variables Y and Z, [V̂[Y], V̂[Z]] = [V̂[Y̅] V̂[Z̅]] - [Y][Z]. Furthermore, it can be shown that [V̂[Y̅] V̂[Z̅]] = (nn-1)^2 ( a_n(Y̅, Z̅) + b_n(Y̅, Z̅) - c_n(Y̅, Z̅) - c_n(Z̅, Y̅) ), with (see proof below) a_n(Y̅, Z̅) := [ Ê[Y̅^2] Ê[Z̅^2] ] = 1n[Y̅^2,Z̅^2] + [Y][Z] = a_n(Z̅, Y̅), b_n(Y̅, Z̅) := [ Ê[Y̅]^2 Ê[Z̅]^2 ] = a_n(Y̅, Z̅)n^2 + 2 n - 1n^3[Y, Z]^2 = b_n(Z̅, Y̅), c_n(Y̅, Z̅) := [ Ê[Y̅^2] Ê[Z̅]^2 ] = a_n(Y̅, Z̅)n = c_n(Z̅, Y̅). We thus have [V̂[Y], V̂[Z]] = 1n[Y̅^2, Z̅^2] + 2n(n-1)[Y, Z]^2, [V̂[Y]] = [V̂[Y], V̂[Y]] = 1n[Y̅^2] + 2n(n-1)[Y]^2, eventually leading to Σ = [V̂[]] = 1n( [^⊙ 2] + 2n-1[]^⊙ 2), 𝐜 = [V̂[Y], V̂[]] = 1n( [Y̅^2, ^⊙ 2] + 2n-1[Y, ]^⊙ 2), so that α^* = [ [^⊙ 2] + 2n-1[]^⊙ 2]^-1[ [Y̅^2, ^⊙ 2] + 2n-1[Y, ]^⊙ 2], R^2 = [ [Y̅^2, ^⊙ 2] + 2n-1[Y, ]^⊙ 2]^⊺α^* [Y̅^2] + 2n-1[Y]^2. As n →∞, we see that α^* →[^⊙ 2]^-1[Y̅^2, ^⊙ 2] R^2 →[Y̅^2]^-1[Y̅^2, ^⊙ 2]^⊺[^⊙ 2]^-1[Y̅^2, ^⊙ 2] = r⃗_Y,^⊺R_^-1r⃗_Y, =: R^2_lim, where r⃗_Y, = ([Y̅^2]D_)^-1/2[Y̅^2, ^⊙ 2], R_ = D_^-1/2[^⊙ 2] D_^-1/2, with D_ = ([^⊙ 2]). Thus, R^2_lim∈ [0,1] corresponds to the squared coefficient of multiple correlation between Y̅^2 and Z̅_1^2, …, Z̅_M^2. We now proceed to the proof of identities <ref>. First, for <ref>, by definition a_n(Y̅, Z̅) = [ ∑_i=1^n (Y̅^(i))^2 ∑_i=1^n (Z̅^(i))^2 ] = 1n^2∑_i,j=1^n[(Y̅^(i))^2(Z̅^(j))^2]. We distinguish two (disjoint) cases: * i = j: [(Y̅^(i))^2(Z̅^(j))^2] = [(Y̅^(i))^2(Z̅^(i))^2] = [Y̅^2Z̅^2] = [Y̅^2, Z̅^2] + [Y][Z]. There are n such terms in the sum. * i j: [(Y̅^(i))^2(Z̅^(j))^2] = [(Y̅^(i))^2][(Z̅^(j))^2] = [Y̅^2][Z̅^2] = [Y][Z]. There are n(n-1) such terms in the sum. Then <ref> follows. For <ref>, b_n(Y̅, Z̅) = [ ( ∑_i=1^nY̅^(i) )^2 ( ∑_i=1^nZ̅^(i) )^2] = 1n^4∑_i,j,k,ℓ=1^n[Y̅^(i)Y̅^(j)Z̅^(k)Z̅^(ℓ)]. We distinguish five (disjoint) cases: * i=j=k=ℓ: [Y̅^(i)Y̅^(j)Z̅^(k)Z̅^(ℓ)] = [(Y̅^(i))^2(Z̅^(i))^2] = [Y̅^2Z̅^2] = [Y̅^2, Z̅^2] + [Y][Z]. 
There are n such terms in the sum. * i=j k=ℓ: [Y̅^(i)Y̅^(j)Z̅^(k)Z̅^(ℓ)] = [(Y̅^(i))^2][(Z̅^(k))^2] = [Y̅^2][Z̅^2] = [Y][Z]. There are n(n-1) such terms in the sum. * i=k j=ℓ: [Y̅^(i)Y̅^(j)Z̅^(k)Z̅^(ℓ)] = [Y̅^(i)Z̅^(i)] [Y̅^(j)Z̅^(j)] = [Y̅Z̅]^2 = [Y, Z]^2. There are n(n-1) such terms in the sum. * i=ℓ j=k: [Y̅^(i)Y̅^(j)Z̅^(k)Z̅^(ℓ)] = [Y̅^(i)Z̅^(i)] [Y̅^(j)Z̅^(j)] = [Y̅Z̅]^2 = [Y, Z]^2. There are n(n-1) such terms in the sum. * All remaining cases (at least one of the indices i,j,k,ℓ is different from all the others): [Y̅^(i)Y̅^(j)Z̅^(k)Z̅^(ℓ)] = 0. Then <ref> follows. Finally, for <ref>, c_n(Y̅, Z̅) [ ( ∑_i=1^n (Y̅^(i))^2 ) ( ∑_i=1^nZ̅^(i) )^2 ] = 1n^3∑_i,j,k=1^n[(Y̅^(i))^2Z̅^(j)Z̅^(k)]. We distinguish three (disjoint) cases: * i=j=k: [(Y̅^(i))^2Z̅^(j)Z̅^(k)] = [(Y̅^(i))^2(Z̅^(i))^2] = [Y̅^2Z̅^2] = ^4[Y, Z]. There are n such terms in the sum. * i j = k: [(Y̅^(i))^2Z̅^(j)Z̅^(k)] = [(Y̅^(i))^2][(Z̅^(j))^2] = [Y̅^2][Z̅^2] = [Y][Z]. There are n(n-1) such terms in the sum. * All remaining cases (at least one of the indices j,k is different from all the others): [(Y̅^(i))^2Z̅^(j)Z̅^(k)] = 0. Then <ref> follows. When the expected value μ_ of is known, as is the case when defining from the prediction of certain surrogate models, such as PC exansion and Taylor polynomials (and, in some instances, GPs, see <ref>), it is possible to replace V̂[] with Ê[^⊙ 2]. The derivation of the optimal CV parameter is somewhat easier and leads to similar results as in the unknown expectation case. Specifically, [Σ]_m,m' = [Ê[Z̅_m^2], Ê[Z̅_m'^2]] = [n^-1∑_i=1^n (Z̅_m^(i))^2 ,n^-1∑_i=1^n (Z̅_m'^(i))^2] = n^-2∑_i,j=1^n[(Z̅_m^(i))^2, (Z̅_m'^(j))^2] = n^-2∑_i=1^n[(Z̅_m^(i))^2, (Z̅_m'^(i))^2] = n^-1[Z̅_m^2, Z̅_m'^2], i.e. Σ = n^-1[^⊙ 2]. Regarding the vector of covariances 𝐜, [c⃗]_m = [ V̂[Y̅],Ê[Z̅_m^2]] = [ V̂[Y̅] Ê[Z̅_m^2]] - [Y][Z_m] = nn-1 ([ Ê[Y̅^2] Ê[Z̅_m^2]] - [ Ê[Y̅]^2 Ê[Z̅_m^2]] ) - [Y][Z_m] = nn-1 ( a_n(Y̅, Z̅_m) - c_n(Z̅_m, Y̅) ) - [Y][Z_m] = a_n(Y̅, Z̅_m) - [Y][Z_m] = n^-1[Y̅^2, Z̅_m^2], i.e. c⃗ = n^-1[Y̅^2, ^⊙ 2], so that α^* = [^⊙ 2]^-1[Y̅^2, ^⊙ 2], R^2 = [Y̅^2, ^⊙ 2]^⊺[^⊙ 2]^-1[Y̅^2, ^⊙ 2] [Y̅^2] + 2n-1[Y]^2 R^2_lim, with the same definitions as in <ref>. § OPTIMAL MLMC-MLCV VARIANCE ESTIMATOR The MLMC-MLCV variance estimator is given by <ref>, with T̂_0^MLCV(α_0) = V̂^(0)[Y_0] - α_0^⊺ (V̂^(0)[] - σ^2_), T̂_ℓ^MLCV(α_ℓ) = V̂^(ℓ)[Y_ℓ] - V̂^(ℓ)[Y_ℓ-1] - α_ℓ^⊺ [ V̂^(ℓ)[_1:] - V̂^(ℓ)[] - (σ^2__1: - σ^2_) ], ℓ>1, where = (Z_0, …, Z_L) = (g_0(), …, g_L()), σ^2_ = [], _1: = (Z_1, …, Z_L) = (g_1(), …, g_L()), σ^2__1: = [_1:], = (Z̃_0, …, Z̃_L-1) = (g̃_0(), …, g̃_L-1()), σ^2_ = [], g̃_ℓ-1 = g_ℓ - h_ℓ, ℓ > 0. Furthermore, we assume that the expected values of the control variates are known, so that we use the following variance estimators: V̂^(0)[] = Ê^(0)[^2], V̂^(ℓ)[_1:] - V̂^(ℓ)[] = Ê^(ℓ)[_1:^⊙ 2] - Ê^(ℓ)[^⊙ 2] = Ê^(ℓ)[_1:^⊙ 2 - ^⊙ 2]. The optimal values α_ℓ^* of the CV parameters are given by α_0^* = Σ_0^-1c⃗_0, Σ_0 = [^⊙ 2], c⃗_0 = [^⊙ 2, Y̅_0^2], α_ℓ^* = Σ_ℓ^-1c⃗_ℓ, Σ_ℓ = [_1:^⊙ 2 - ^⊙ 2], c⃗_ℓ = [_1:^⊙ 2 - ^⊙ 2, Y̅_ℓ^2 - Y̅_ℓ-1^2], ℓ >0. Note that Σ_ℓ is the same for all ℓ > 0. For PC-based control variates, g_ℓ() = ∑_k=0^P_g^ℓ𝔤_ℓ, kΨ_k() and h_ℓ() = ∑_k=0^P_h^ℓ𝔥_ℓ, kΨ_k(), letting P_g̃^ℓ := max(P_g^ℓ, P_h^ℓ), we have Z̃_ℓ-1 = ∑_k=0^P_g̃^ℓ𝔤̃_ℓ-1, kΨ_k(), 𝔤̃_ℓ-1, k := 𝔤_ℓ,k - 𝔥_ℓ,k, ℓ = 1,…,L, where 𝔤_ℓ,k := 0 for k>P_g^ℓ and 𝔥_ℓ,k := 0 for k>P_h^ℓ. Consequently, [Z_ℓ] = 𝔤_ℓ,0, [Z̃_ℓ-1] = 𝔤̃_ℓ-1,0, [Z_ℓ] = ∑_k=1^P_g^ℓ𝔤_ℓ,k^2, [Z̃_ℓ-1] = ∑_k=1^P_g̃^ℓ𝔤̃_ℓ-1,k^2. 
Furthermore, [Σ_0]_m,m' = ∑_i,j=1^P_g^m∑_q,r=1^P_g^m'𝔤_m,i𝔤_m,j𝔤_m',q𝔤_m',r (Φ_ijqr - δ_ijδ_qr), m,m' = 0,…, L, where Φ_ijqr := [Ψ_i()Ψ_j()Ψ_q()Ψ_r()] are the entries of the fourth-order Galerkin product tensor, which is a well-known object in stochastic Galerkin methods <cit.>, and δ_ij denotes the Kronecker delta. Besides, noticing that _1:^⊙ 2 - ^⊙ 2 = (_1: - ) ⊙ (_1: + ) = W̅⃗̅⊙ (_1: + ), with W̅⃗̅ = W⃗ - μ_W⃗ and the definitions in <ref>, we have [Σ_ℓ]_m,m' = A_m,m' + B_m,m' + C_m,m' + C_m',m, ℓ,m,m' = 1,…, L, where A_m,m' = [W̅_m Z̅_m, W̅_m'Z̅_m'] = ∑_i=1^P_h^m∑_j=1^P_g^m∑_q=1^P_h^m'∑_r=1^P_g^m'𝔥_m,i𝔤_m,j𝔥_m',q𝔤_m',r (Φ_ijqr - δ_ijδ_qr), B_m,m' = [W̅_m Z̅̃̅_m-1, W̅_m'Z̅̃̅_m'-1] = ∑_i=1^P_h^m∑_j=1^P_g̃^m∑_q=1^P_h^m'∑_r=1^P_g̃^m'𝔥_m,i𝔤̃_m-1,j𝔥_m',q𝔤̃_m'-1,r (Φ_ijqr - δ_ijδ_qr), C_m,m' = [W̅_m Z̅̃̅_m-1, W̅_m'Z̅_m'] = ∑_i=1^P_h^m∑_j=1^P_g̃^m∑_q=1^P_h^m'∑_r=1^P_g^m'𝔥_m,i𝔤̃_m-1,j𝔥_m',q𝔤_m',r (Φ_ijqr - δ_ijδ_qr). § TAYLOR SURROGATE FOR THE NUMERICAL TEST CASE In our example, f_ℓ is only differentiable in [-π,π]^3 × [ν_min, ν_max] × ([-1, 0) ∪ (0, 1])^3. Consequently, the Taylor polynomial surrogate cannot be used directly as defined in (<ref>) and (<ref>), because the Jacobian and Hessian matrices are not defined at μ_. Instead, we define the first-order Taylor surrogate as f_ℓ() ≃ g_ℓ^T_1() = f_ℓ(μ_) + ∑_i=1^7 g_ℓ, i^T_1(μ_; ), where, for i=1,…,4, g_ℓ, i^T_1(μ_; ) = (X_i - μ_X_i) ∂ f_ℓ∂ X_i(μ_), and, for i=5,…,7, g_ℓ, i^T_1(μ_; ) = (X_i - μ_X_i) ×∂ f_ℓ∂ X_i(μ_), μ_X_i 0, lim_μ_' →μ_^i, 0^-∂ f_ℓ∂ X_i(μ_'), μ_X_i = 0, X_i <0, lim_μ_' →μ_^i, 0^+∂ f_ℓ∂ X_i(μ_'), μ_X_i = 0, X_i >0, 0 μ_X_i = X_i = 0, where μ_^i, 0^± = (μ_X_1, …, μ_X_i-1, 0^±, μ_X_i+1, …, μ_X_7), which is now well-defined. With our choice of distributions for given in (<ref>), we have μ_ = (0,0,0,0.005,0,0,0), and the first-order Taylor surrogate is defined by (<ref>), with g_ℓ, 1^T_1(μ_; ) = 7 X_1 ∑_k=1^K B_k^ℓ(T; μ_) ∑_j=1^N_ℓ w_j sin(kπ x_j) ℱ_2(x_j), g_ℓ, 2^T_1(μ_; ) = 0, g_ℓ, 3^T_1(μ_; ) = 0, g_ℓ, 4^T_1(μ_; ) = - (X_4 - 0.005) π^2 T ∑_k=1^K k^2 exp(-0.005 k^2π^2 T) A_k^ℓ(μ_) ∑_j=1^N_ℓ w_j sin(kπ x_j), g_ℓ, 5^T_1(μ_; ) = 400 |X_5| ∑_k=1^K B_k^ℓ(T; μ_) ∑_j=1^N_ℓ w_j sin(kπ x_j) ℱ_1(x_j), g_ℓ, 6^T_1(μ_; ) = 400 |X_6| ∑_k=1^K B_k^ℓ(T; μ_) ∑_j=1^N_ℓ w_j sin(kπ x_j) ℱ_1(x_j), g_ℓ, 7^T_1(μ_; ) = 400 |X_7| ∑_k=1^K B_k^ℓ(T; μ_) ∑_j=1^N_ℓ w_j sin(kπ x_j) ℱ_1(x_j). Because of the piecewise definition of g_ℓ^T_1, the identity [g_ℓ^T_1()] = f_ℓ(μ_) for a regular first-order Taylor surrogate no longer holds. Instead, we have [g_ℓ^T_1()] = f_ℓ(μ_) + 600 ∑_k=1^K B_k^ℓ(T; μ_) ∑_j=1^N_ℓ w_j sin(kπ x_j) ℱ_1(x_j).
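As a generic illustration of how a Taylor polynomial surrogate serves as a control variate with an exactly known mean (the construction above adapts it to the non-differentiability of f_ℓ, which the sketch below ignores): for a differentiable f and an input vector with known mean μ, the first-order surrogate g(x) = f(μ) + ∇f(μ)·(x − μ) is affine, so E[g(X)] = f(μ) exactly. The finite-difference step and function names are illustrative only.

```python
import numpy as np

def taylor1_surrogate(f, mu, h=1e-5):
    """First-order Taylor surrogate of f around mu, gradient by central differences."""
    mu = np.asarray(mu, dtype=float)
    f_mu = f(mu)
    grad = np.empty_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu)
        e[i] = h
        grad[i] = (f(mu + e) - f(mu - e)) / (2.0 * h)

    def g(x):
        return f_mu + (np.asarray(x, dtype=float) - mu) @ grad

    # g is affine, so E[g(X)] = f(mu) + grad . E[X - mu] = f(mu) whenever E[X] = mu.
    return g, f_mu

def cv_mean_with_taylor(f, X, mu):
    """CV estimate of E[f(X)] with the first-order Taylor surrogate as control variate."""
    g, g_mean = taylor1_surrogate(f, mu)
    y = np.array([f(x) for x in X])
    z = np.array([g(x) for x in X])
    alpha = np.cov(y, z)[0, 1] / z.var(ddof=1)
    return y.mean() - alpha * (z.mean() - g_mean)
```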
http://arxiv.org/abs/2306.09636v1
20230616054720
Artin Presentations of the Trivial Group and Hyperbolic Closed Pure $3$-Braids
[ "Lorena Armas-Sanabria", "Jesús Rodríguez Viorato", "E. Fanny Jasso-Hernández" ]
math.GT
[ "math.GT", "57M25" ]
We consider a special class of framed links that arise from the hexatangle. Such links are introduced in <cit.>, where it was also analyzed when the 3-manifold obtained after performing integral Dehn surgery on closed pure 3-braids is S^3. In the present paper, we analyze the symmetries of the hexatangle and give a list of Artin n-presentations for the trivial group. These presentations correspond to the double-branched covers of the hexatangle that produce S^3 after Dehn surgery. Also, using a result of Birman and Menasco <cit.>, we determine which closed pure 3-braids are hyperbolic. Induced Gravitational Waves via Warm Natural Inflation Grant J. Mathews Received: date / Accepted: date ======================================================= Dedicated to Professor González-Acuña in his 80th anniversary § INTRODUCTION In <cit.> closed pure 3-braids of the form β̂= σ_1^2e_1σ_2^2f_1(σ_2σ_1σ_2)^2e were considered; these are shown in Figure <ref>. It was determined exactly when an integral surgery in such links produces the 3-sphere; to do it, we used the fact that such links are strongly invertible and that their exteriors are double-branched covers of certain fillings of what was called the hexatangle. Then, it was determined when the trivial knot is obtained by filling the hexatangle. These configurations giving the trivial knot are used in the present paper to obtain Artin 3-presentations of the trivial group. An Artin n-presentation is a presentation of a group with generators x_1,⋯ , x_n and relations r_1, r_2,⋯ , r_n such that ∏ _i=1 ^n r_ix_ir_i^-1 = ∏_i=1^n x_i is satisfied in the free group F(x_1, x_2 ⋯ x_n). Using Artin presentations González Acuña <cit.> characterized 3-manifold groups. In fact, he proved that a group G is the fundamental group of a closed, orientable 3-manifold if and only if G admits an Artin n-presentation for some n. It is interesting to consider Artin n-presentations of the trivial group since there are two conjectures (now theorems) relating them with the Poincaré conjecture (proved by Perelman). These conjectures were given by González-Acuña <cit.>, <cit.>. We now state such conjectures, but first, we give a preliminary introduction taken from <cit.>, <cit.>. Define S_n as follows, S_n = |x_1,y_1, ⋯,x_n,y_n: ∏ _i=1^n x_i = ∏ _i=1^n y_i^-1x_iy_i | and let N⊂ S_n be the normal closure of y_1,⋯ ,y_n in S_n. If A is an Artin n-presentation define the automorphism ϕ _A: S_n → S_n by ϕ _A (x_i) = r_i x_i r_i^-1, ϕ _A (y_i) = r_iy_i 5pt i=1,⋯ , n. Two Artin n-presentations A, A' are equivalent if there exist automorphisms E_1, E_2 of S_n such that E_1 ∘ϕ _A = ϕ _A'∘ E_2 and E_i(N) = N, 5pt i=1,2. One can see that A∼ A' implies M_A is homeomorphic to M_A', which implies that |A| ≅ |A'|, where M_A, M_A' are the 3-manifolds whose fundamental groups are given by A, A' respectively. |A| and |A'|, denotes the presentations of the groups. Then the Poincaré conjecture is equivalent to If A is an Artin n-presentation such that |A| = 1 then A∼ T where T = ( x_1, ⋯ ,x_n : x_1, ⋯ , x_n ) The knot groups in S^3 can be characterized. It follows from Artin's work that a group is the group of a knot if and only if it has a presentation: <y_1,…,y_n:s_1y_1s_1^-1y_2^-1,…,s_n-1y_n-1s_n-1^-1y_n^-1,s_ny_ns_n^-1y_1^-1> such that ∏_i=1 ^n s_iy_is_i^-1=∏_i=1^n y_i   in F(y_1,…, y_n) If G=<x_1,x_2…,x_n:r_1,r_2,…,r_n> is an Artin n-presentation of the trivial group, then the group given by the presentation G=<x_1,x_2…,x_n:r_1,r_2,…,r_n-1> is called a rat-group. 
It is to say, a rat-group ( Reduced Artin Trivial ) is a group with a presentation of deficiency 1, obtained as follows: let ( x_1, x_2,...x_n; r_1, r_2, ...,r_n) be an Artin n-presentation of the trivial group. Then (x_1,x_2,...,x_n ;r_1, r_2,...,r_n-1) is a rat-group. The rat-groups are the knot groups in homotopic spheres. A rat-group is a knot group, i.e., a rat-group has a presentation satisfying formula <ref> This conjecture is equivalent to the Poincaré conjecture. For its formulation, González-Acuña used several characterizations of S^3 <cit.>. Now, we recall some well-known facts about 3-manifolds M. A torus T in M is incompressible if it does not have compression disks, i.e., embedded disks in M such that the boundary is in T and is nontrivial in T. And T is essential if it is incompressible and not boundary parallel into ∂ M. Let A be an annulus properly embedded in M. A is essential if it is incompressible and not boundary parallel into ∂ M. Thurston proved the following: Let M be a compact, orientable, connected 3-manifold with boundary. If M does not have either essential disks, spheres, annuli or tori then it is hyperbolic. And by Perelman, we know that if M is closed with no essential spheres or tori, and not Seifert fibered, then M is hyperbolic. A link contained in S^3 is hyperbolic if its exterior is hyperbolic. A Dehn surgery on a hyperbolic link is exceptional if the 3-manifold obtained is not hyperbolic. By Thurston Hyperbolic Dehn Surgery Theorem, it follows that most of the surgeries on a hyperbolic link produce hyperbolic manifolds. We just need to exclude a finite number of slopes in each component. To determine which closed pure 3-braids in S^3 are hyperbolic, we use a result of Birman and Menasco <cit.>. The first author has shown examples of hyperbolic closed pure 3-braids which do have a nontrivial surgery producing S^3 (<cit.>), some of these are small closed pure 3-braids. The content of this paper is organized as follows: In Section <ref> we present for the convenience of the reader a short introduction to the hexatangle. In Section <ref> some examples taken from the tables of the Artin 3-presentations of the trivial group, obtained by considering the symmetries of the hexatangle are presented, and in Section <ref> we determine all the hyperbolic, closed pure 3-braids. § THE HEXATANGLE We provide now the basic definitions, notation, and conventions for this paper, all of which are compatible with the ones in <cit.>. In fact, this is a short introduction obtained from <cit.>. A tangle is a pair (B,A), where B is S^3 with the interior of a finite number of disjoint 3-balls removed, and A is the disjoint union of properly embedded arcs in B, with ∂ A ≠∅ such that A intersects each component of ∂ B in four points. A marking of a tangle is an identification, of the points of intersection of A in each component of ∂ B, with exactly one of { NE, NW, SW, SE}. A tangle (B,A) with a marking is called a marked tangle. We say that two marked tangles (B,A_1) and (B, A_2) are equivalent if there is a homeomorphism of B, that fixes ∂ B, and that takes A_1 to A_2 while preserving the marking. A trivial n-tangle is a tangle that is homeomorphic to the pair (D^2, {x_1, … x_n }) × I, where x_i are points in the interior of D^2. A rational tangle is a marked 2-tangle that is homeomorphic (as an unmarked tangle) to the trivial 2-tangle. Rational tangles can be parametrized by ∪{ 1/0 }. 
We denote the rational tangle corresponding to p/q∈ Q∪{1/0} by R(p/q), the convention we are using is that the representation of p/q as continued fraction: p/q = a_n+ 1/ a_n-1+ 1/... + 1/a_1 corresponds to one of the tangles in Figure <ref>(a) or (b), according to the parity of n (see e.g. <cit.>). The hexatangle is a marked tangle that produces a family of links obtained by filling the six 3-balls with integral tangles as in Figure <ref>. Strictly speaking, a version of the hexatangle was first visualized in <cit.> where it was called basic polyhedra 6^* and 6^**, this was presented with a different marking. Although, Conway was more interested in these tangles to classify knots and links and to determine some relationships between the corresponding Conway polynomials. In <cit.> the hexatangle is introduced as such, with the aim of giving information about integral Dehn surgery on the closure of certain pure 3-braids β. Consider now the family of closed pure 3-braids of the form β̂= σ_1^2e_1σ_2^2f_1(σ_2σ_1σ_2)^2e, (see Figure <ref>). The link β̂ can be obtained by Dehn surgery over the link ℒ of Figure <ref>, which is strongly invertible. An involution axis is shown in the same figure. The quotient of the exterior of β̂ under this involution will be a punctured S^3, together with arcs, which arise as the image of the involution axis. The hexatangle is a class of framed links obtained by rearranging such punctured S^3 and making conventions of marked tangles. In <cit.> the hexatangle is introduced, and it is determined precisely when an integral surgery in such links produces the 3-sphere. The quotient of β̂ under the involution is a tangle (see Figure <ref>), where its boundary components come from the tori boundary components of the exterior of β̂, and the arcs are the image of the involution axis. We choose the marking given by the image of a framing on the components of β̂ as shown in Figure <ref>. This is indicated in Figure <ref> by a rectangular box, where the short sides of the rectangle represent the axis NW-SW and NE-SE, and the long sides represent the axis NW-NE and SW-SE. In all of our pictures, the shape of the rectangle will always be clear. We call this marked tangle the Hexatangle, and denote it by ℋ, or ℋ(*,*,*,*,*,*). The capital letters A, B, C, D, E, F denote boundary components in the hexatangle, and α,β,γ,δ,ϵ,η denote fillings of the hexatangle with rational tangles. We refer to the sphere boundary components of ℋ, filled or unfilled, as boxes. We say that two boxes are adjacent if there is an arc of ℋ connecting them; otherwise, we will call them opposite boxes. In the hexatangle, each box is opposite to exactly one box and adjacent to four boxes. We consider α,β,γ,δ,ϵ,η as rational parameters that we use to filling the corresponding box with a rational tangle. Note that when we fill the boxes with integral tangles, we just replace each box with a sequence of horizontal crossings. Note that filling one of the components A, B, E with a rational tangle ℛ(p/q) will correspond in the double branched cover to do (-p/q)-Dehn surgery on the corresponding component. On the other hand, filling with ℛ(p/q) in one of the components C, D, F corresponds to doing q/p-Dehn surgery in the corresponding component in the double branched cover (see <cit.>), this because of our rational tangles convention (see <cit.>). So, we can consider integral fillings in all boundary components of the hexatangle and forget the correspondence with the components of β̂. 
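As a small computational companion to the continued-fraction convention for rational tangles recalled above, the coefficients a_1, …, a_n of R(p/q) can be produced by the Euclidean algorithm. The sketch below (illustrative code, not part of the original text) returns them with a_n first and checks that they evaluate back to p/q; which of the two drawings in the referenced figure is used then depends on the parity of n, as stated above.

```python
from fractions import Fraction

def continued_fraction(p, q):
    """Return [a_n, ..., a_1] with p/q = a_n + 1/(a_{n-1} + 1/(... + 1/a_1))."""
    coeffs = []
    while q != 0:
        a, r = divmod(p, q)
        coeffs.append(a)
        p, q = q, r
    return coeffs  # outermost term a_n first, innermost term a_1 last

def evaluate(coeffs):
    """Evaluate a_n + 1/(a_{n-1} + 1/(...)) back to a fraction, as a consistency check."""
    value = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a + 1 / value  # build up from the innermost term a_1 outwards
    return value

# example: R(7/3) corresponds to the coefficients [2, 3], since 7/3 = 2 + 1/3
assert evaluate(continued_fraction(7, 3)) == Fraction(7, 3)
```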
The hexatangle has many symmetries. Note that the hexatangle can be embedded in a tetrahedron so that each box is in correspondence with an edge of the tetrahedron; thus, each symmetry of the tetrahedron will give a symmetry of the hexatangle preserving framings. In <cit.>, it is given a list of fillings on the hexatangle that produces the trivial knot up to symmetries, the list is complete up to the symmetries given by the tetrahedron and mirror images. In the present paper we are listing each symmetry of the hexatangle and some representative Artin 3-presentations of the trivial group that arise after each Dehn filling (we are not listing all the Artin 3-presentations obtained, because of lack of space). In <cit.>, the main theorem (<ref>) states the following: Suppose an integral filling of the hexatangle produces the trivial knot, then the parameters are exactly as shown in Tables <ref>, <ref> and <ref>, up to symmetries. The double branched cover of ℋ(α,β,γ,δ,ϵ,η) is obtained by (-α-δ-η,-β-δ-γ-η,-ϵ-γ-η)-Dehn surgery on the closed pure 3-braid (σ_1)^-2δ(σ_2)^-2γ(σ_2σ_1σ_2)^-2η. By observing the hexatangle (see Figure <ref>), we have the following correspondences: α→ -m, 6pt β→ -n, 6pt γ→ f_1, 6pt η→ e, 6ptδ→ e_1, 6pt ϵ→ -p This means that the double branched cover of ℋ=(α,β,γ,δ,ϵ,η) is obtained by Dehn surgery on ℒ(-m,-n,1/γ,1/δ,-p,1/η). After performing surgery (1/γ,1/η,1/δ) on ℒ, we get the closed pure 3-braid (σ_1)^-2δ(σ_2)^-2γ(σ_2σ_1σ_2)^-2η, and the surgery coefficients change to (-α-δ-η,-β-δ-γ-η,-ϵ-γ-η). § ARTIN N-PRESENTATIONS OF THE TRIVIAL GROUP When we consider a group G = <x_1 x_2,⋯ , x_n : r_1, r_2, ⋯ , r_n>, we say that this is an F-Artinian n-presentation if the following equation is satisfied in the free group F(x_1, x_2, ⋯ , x_n): ∏ r_ix_i r_i^-1 = ∏ x_i F-Artinian n-presentations were defined by González-Acuña in the 70's, see <cit.>. We say that a W-Artinian n-presentation is taken if the following equation is satisfied in the free group F(x_1, x_2, ⋯ , x_n): ∏ r_i^-1x_i r_i = ∏ x_i In this paper, we are considering W-Artinian n-presentations. These kinds of presentations appear when we consider the fundamental group of a closed, connected, and orientable 3-manifold, given by an open-book decomposition with planar pages. From this open-book decomposition of the 3-manifold, its fundamental group can be read. Just read each relation described by certain disjoint simple curves r_i for i=1,..., n. As an example, we give the figure <ref>. The relations are the following: r_1 = x_1x_2x_3x_1x_2x_1 r_2= x_1x_2x_3x_1x_2^2 and r_3= x_1x_2x_3^2 Since all closed, connected, and orientable 3-manifold can be described in this way ( see <cit.>), there is no loss of generality in considering these kinds of presentations when we study fundamental groups of such 3-manifolds. Now, we consider the symmetries of the hexatangle. Since the hexatangle can be seen as a tetrahedron, there are 24 symmetries. The symmetries are as shown in Table <ref>, where for example, symmetry 3 is saying that α→γ, β→α, γ→β, δ→η, η→ϵ and ϵ→δ. It is meaning that α is taking the value of γ, β is taking the value of α, and so on in Table <ref>. In <cit.>, the following theorem is proved: Let β̂∈ S^3 be the closed pure 3-braid of β= Δ^2e(σ_1 ^2)^e_1(σ_2 ^2)^f_1 where Δ = σ_1σ_2σ_1 = σ_2σ_1σ_2 with e, e_1, f_1 ∈ℤ. 
Then the 3-manifold obtained by Dehn surgery on β̂ with an integral framing (m, n, p) has the Artin 3-presentation given by the generators x_1, x_2, x_3 and relations: r_1 = x_1^m-e-e_1 (x_1(x_2x_3)^-f_1 x_2 (x_2 x_3)^f_1)^e_1 (x_1 x_2 x_3) ^e r_2 = x_2^n-e-e_1-f_1 (x_2 x_3)^f_1(x_1 (x_2 x_3)^-f_1 x_2(x_2 x_3)^f_1)^e_1(x_1 x_2 x_3)^ e r_3 = x_3^p-e-f_1 (x_2 x_3)^f_1(x_1 x_2 x_3)^e In <cit.> there is a mistake in the relations r_1 and r_2 in the part of the relations (x_1(x_2x_3)^f_1x_2(x_2x_3)^-f_1 where the signs of the exponent f_1 were interchanged. Here this mistake is corrected. From this theorem and from Lemma 2.2 it follows The Artin 3- presentation associated to the double branched cover of ℋ(α,β,γ,δ,ϵ,η) is r_1 = x_1^-α (x_1(x_2x_3)^γ x_2 (x_2 x_3)^-γ)^-δ (x_1 x_2 x_3) ^-η r_2 = x_2^-β (x_2 x_3)^-γ(x_1 (x_2 x_3)^γ x_2(x_2 x_3)^-γ)^-δ(x_1 x_2 x_3) ^-η r_3 = x_3^-ϵ (x_2 x_3)^-γ(x_1 x_2 x_3) ^-η The W-Artinian 3-presentations for some examples of the trivial group see Tables <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>. § HYPERBOLIC CLOSED PURE 3-BRAIDS In this Section, we prove which closed pure 3-braids are hyperbolic. To do this, we use a result of Birman and Menasco. In <cit.>, we proved that the group of 3-pure braids is the direct product of the groups Z and the free group in two generators. This does permit us to give a general diagram to represent a 3-pure braid, which is given by β =∏ _i=1^n σ_1^2e_iσ_2^2f_i(σ_1σ_2σ_1)^2e where e_i,f_i and e are integers. We call to β = σ _1 ^2e_1σ _2^2f_1(σ_1σ_2σ_1)^2e the small case. Also, in <cit.> some hyperbolic closed pure 3-braids are described, and after performing integral Dehn surgery S^3 is obtained. So, these are exceptional Dehn fillings coming from the hexatangle. Here, we said which closed pure 3-braids are hyperbolic, not only in the small case, but in the general case. The result of Birman and Menasco is the following corollary. Here is a small introduction taken from <cit.> to understand it. Let L be a closed 3-braid with axis A. A is unknotted, so S^3 - A is fibred by open disks H = {H_θ : θ∈ [0,2π] }. Suppose T is not a peripheral torus. The torus T of type 0 is described as follows (see Figure <ref>). The torus T is the boundary of a (possible knotted) solid torus V in S^3 whose core is a closed braid with axis A. The link L is also a closed braid with respect to A, part of L inside V and part (possibly empty) outside. The torus T is transverse to all fiber H_θ in the fibration of S^3 - A, and intersects each fiber in a meridian disk of V. Let L be a closed 3-braid or a prime link of braid index 3, and let T be an essential torus in S^3 - L. Then T has an embedding of type 0 and L is conjugated to β = (σ _2)^k(σ _1σ_2^2σ_1)^q where |k|≥2, |q|≥1. In this work, we determine which closed pure 3-braids are equivalent to (σ _2)^k(σ _1σ_2^2σ_1)^q. Observe that a condition for β to be pure is that k is an even number. We observe that although the result of Birman and Menasco is incomplete in the case of closed n-braids, in the case of closed 3-braids is complete, we use it to determine which closed pure 3-braids are hyperbolic. In <cit.> Corollary 1, it is shown that if a closed 3-braid β̂ admits an essential torus, then β is conjugate to (σ _2)^k( σ _1σ _2^2 σ _1)^q where |k|≥ 2, |q|≥ 1. Here, we determine which small closed pure 3-braids are equivalent to β̂. In our case, k is an even number. 
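The relations above can also be checked mechanically. The sketch below (illustrative code, not part of the original paper) builds r_1, r_2, r_3 as words in the free group F(x_1, x_2, x_3) for given integer parameters and verifies the product condition ∏ r_i^{-1} x_i r_i = x_1 x_2 x_3 by free reduction, which is the W-Artinian form used in this paper; the parameter values in the final loop are arbitrary, and the theorem asserts the identity for every integer choice.

```python
def reduce(word):
    """Freely reduce a word given as a list of (generator, ±1) pairs."""
    out = []
    for g, s in word:
        if out and out[-1] == (g, -s):
            out.pop()
        else:
            out.append((g, s))
    return out

def inv(word):
    return [(g, -s) for g, s in reversed(word)]

def power(word, k):
    w = word if k >= 0 else inv(word)
    return reduce(w * abs(k))

def artin_condition(m, n, p, e, e1, f1):
    x1, x2, x3 = [(1, 1)], [(2, 1)], [(3, 1)]
    x2x3 = x2 + x3
    # inner word x1 (x2 x3)^{-f1} x2 (x2 x3)^{f1} appearing in r_1 and r_2
    A = reduce(x1 + power(x2x3, -f1) + x2 + power(x2x3, f1))
    D = x1 + x2 + x3                                        # x1 x2 x3
    r1 = reduce(power(x1, m - e - e1) + power(A, e1) + power(D, e))
    r2 = reduce(power(x2, n - e - e1 - f1) + power(x2x3, f1) + power(A, e1) + power(D, e))
    r3 = reduce(power(x3, p - e - f1) + power(x2x3, f1) + power(D, e))
    prod = []
    for r, x in zip((r1, r2, r3), (x1, x2, x3)):
        prod = reduce(prod + inv(r) + x + r)                # r_i^{-1} x_i r_i
    return prod == reduce(D)                                # should reduce to x1 x2 x3

for e1 in (-1, 0, 1):
    for f1 in (-1, 0, 1):
        for e in (-1, 0, 1):
            assert artin_condition(2, -1, 3, e, e1, f1)
```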
Let ρ : B_3 →ℤ_2 * ℤ_3 be such that B_3 = < σ_1, σ_2 | σ_1σ_2σ _1 =σ_2σ_1σ_2> and ℤ_2 * ℤ_3 = <Δ, y | Δ^2 = 1, y^3 = 1> where σ_1 ↦ y^2Δ , σ_2 ↦Δ y^2 Then it is known that ρ is an epimorphism, it comes from the fact that B_3 is homeomorphic to the group of the trefoil knot, and the latter has a quotient to ℤ_2 * ℤ_3. Observe that we are abusing the notation, Δ represents here a generator of ℤ_2 * ℤ_3, but also represents the Δ = σ_1σ_2 σ_1 = σ_2σ_1 σ_2 in the 3-braid group B_3. This abuse is justified by the relation ρ(Δ) = Δ. Notice also that ρ(σ_1 σ_2) = y. We will use ρ to determine when the words w_1 = σ_1^2e_1σ_2 ^2f_1Δ ^2e, w_2 = σ_2^2q(σ_1σ_2^2σ_1)^2k are conjugated. It follows that ρ (w_1) = (y^2Δ)^2e_1(Δ y^2)^2f_1 and since (yΔ)^-1 = Δ y^2 we obtain that: ρ(w_1) = (y^2Δ)^2(e_1-f_1) Now, ρ (w_2) = (Δ y^2)^2q(y^2 ΔΔ y^2Δ y^2y^2Δ)^2k = (Δ y^2)^2q(yΔ yΔ)^2k = Δ y^2 Δ y^2⋯Δ y^2 yΔ yΔ⋯ yΔ = (Δ y^2 )^2q-4k = (yΔ)^4k-2q. Now, we proceed by cases, using that ρ (w_1) is conjugated to ρ(w_2) if and only if their minimal cyclically reduced words are equals (see <cit.>). Case 1. e_1 ≥ 1, f_1≥ 1: (y^2 Δ)^2e_1 (Δ y^2)^2f_1 = = y^2 Δ (y^2Δ (y^2 Δ)^2e_1-2 y^2 ΔΔ (yΔ^2 y^2)^2f_1 - 2Δ y^2 = = y^2 Δ (y^2 Δ )^2e_1 - 2y(Δ y^2)^2f_1 - 2Δ y^2 = = y Δ (y^2Δ)^2e_1 -2 y (Δ y^2)^2f_1 - 2Δ but ρ (w_2) = (yΔ)^2(k-q), so in this case e_1 = 1 and f_1 = 1 and k-q =1. Case 2. e_1 < 1, f_1 ≥ 0: ( y^2 Δ )^2e_1( Δ y^2)^2f_1 = (Δ y)^-2e_1(Δ y^2)^2f_1 and since the minimal cyclically reduced word of ρ (w_2) only has y^2 or only y, it follows that e_1 = 0 or f_1 = 0. If e_1 = 0, we have (Δ y^2)^2f_1 which is a power of (Δ y^2), and if f_1 =0, (Δ y)^-2e_1 is a power of Δ y^2 if and only if is a negative power and (Δ y)^-2e_1 is equivalent to yΔ yΔ⋯ yΔ, then e_1 = 0 or f_1 = 0. Case 3. e_1≥ 0, f_1< 1: Remind that yΔ y^2 is equivalent to Δ so, (y^2 Δ)^2e_1(Δ y^2)^2f_1 = (y^2 Δ)^2e_1(yΔ)^-2f_1 is equivalent to a power of Δ y^2 if e_1 = 0 or f_1 = 0. Case 4. e_1 ≤ -1, f_1≤ -1: In this case, (y^2 Δ )^2e_1 (Δ y^2)^2f_1 = (Δ y)^-2e_1(yΔ )^-2f_1 = Δ y (Δ y)^-2e_1 - 2Δ y y Δ (y Δ )^-2f_1 - 2 y Δ = = y^2 (Δ y )^-2e_1 - 2Δ y^2 Δ (y Δ )^-2f_1 - 2 Then e_1 = -1 and f_1 = -1. So, we have Let β = σ_1^2e_1σ_2^2f_1Δ ^2e be a pure 3-braid.Then the closed pure 3-braid β̂ has an essential torus if: i)e_1 = 0, or f_1 = 0 and e any integer. Observe that if e = 0 then β̂ is splittable. ii) e_1 = f_1 = ± 1 5pt and e any integer. iii) e_1, f_1 ∈ℤ - {0} and e = 0. Observe that in this case we have a connected sum. We give some examples, see Figure <ref>. To prove the next theorem we have the following. 10pt Let w = ∏ _i=1^n (y^2Δ)^2e_i(Δ y^2)^2f_i be a word in ℤ_2⟨Δ⟩* ℤ_3⟨ y ⟩ with e_i,f_i≠ 0 for all i. The reduced form of w decomposes as w = αβγ where length(β)≥ 1 and α∈{Δ y, y^2Δ}, γ∈{ yΔ, Δ y^2}. We proceed by induction. For n= 1 is clearly true. Suppose that it is valid for n=m, we will prove it for n= m+1. w = ∏_i=1^m+1 (y^2Δ)^2e_i(Δ y^2)^2f_i = w' (y^2Δ)^2e_m+1(Δ y^2)^2f_m+1 by induction hypothesis we know that the reduced form of w' is w' =αβγ where γ = yΔ or γΔ y^2. We also know that w” = (y^2Δ )^2e_m+1(Δ y^2)^2f_m+1 has the form α' β ' γ '. So, w = (αβγ)(α ' β ' γ '). 10pt But, by analyzing all the four possibilities for γ and α ' we can see that γα ' at most reduces to y^2, but even on this case, β y^2β ' does not reduce any further because the y^2 reduction occurs in the case where γ = yΔ and α ' =Δ y, so β must ends with Δ and β ' starts with Δ. So β y^2β' is in its reduced form. So, w = αβ y^2 β' γ' decomposes as we say it would. 
This concludes the proof. Let w= ∏_i=1^n (y^2Δ)^2e_i(Δ y^2)^2f_i be a word with e_i,f_i ≠ 0 for all i. Then w is cyclically reduced equal to (y^2Δ)^2k or (Δ y)^2k if and only if e_i = f_i = ± 1 for all i. We already proved this for the case n=1. We will show it when n >1. So, w= (y^2Δ)^2e_1(Δ y^2)^2f_1w'. As we proved, w' = αβγ where α = (y^2Δ)^± 1 and γ = (Δ y^2)^±1 is in reduced form. Observe that η = (y^2Δ)^2e_i(Δ y^2)^2f_i reduces in four different forms depending on the signs of e_i, f_i. By looking at the options in <ref>, we notice that the reduced form of η has y and y^2 on always. So, if we want it to be cyclically reduced equal to (Δ y)^2k or (y^2Δ)^2k we need only to have one, so we must reduce η cyclically, w' has the form w' = (y^2Δ)^± 1β (Δ y^2)^± 1 . Analyzing all the possibilities, we observe that when e_i and f_i have opposite signs, y, and y^2 will survive after a cyclical reduction of w. Because at most one y or y^2 is canceled (or transformed) on each side of η (left and right) as observed in Table <ref>. But in <ref>, we can observe that η has two y's and two y^2's when e_if_i< 0 and when e_if_i > 0 it has one y in the middle surrounded by y^2 on both sides, as we can only cancel at most two y^2 the only way that this will work is when e_i = f_i =1. Analogously, if e_i f_i < 0 can be reduced cyclically eliminating the y's at the sides of η and leaving only y^2's if e_i = f_i = -1. Observe that, when e_i = f_i = 1 the reduced word has only y's and when e_i = f_i = -1 the reduced word of η has only y^2's. So, we apply this argument for (y^2Δ)^2e_i(Δ y^2)^2f_i to obtain that e_i = f_i = ± 1. As we want only y's or y^2's we conclude that either e_i = f_i = 1 for all i or e_i = f_i = -1 for all i. Let β be the following pure 3-braid β = ∏ _i=1^nσ_1^2e_iσ_2^2f_i(σ_1σ_2σ_1)^2e where e_i, f_i, e any integers. Then β̂ is hyperbolic except if we have: i) Some of the braids in Theorem 4.2 ii) e_i = 0 or f_i = 0 for all i and e = 0 in which case β̂ is splittable. iii) e_i = f_i = 1 for all i iv) e_i = f_i = -1 for all i We show that β̂ does not have an essential annulus or sphere. For the annulus, it is enough to analyze when its boundary is in different or in the same component of the link. It is enough to take a neighborhood of the annulus and the link, and then we get an essential torus. If there is an essential sphere then the link is splittable. Acknowledgement. The first author is supported by a fellowship Investigadoras por México from CONAHCyT. Research partially supported by the grant PAPIIT-UNAM 116720. alpha
http://arxiv.org/abs/2306.02806v1
20230605115807
A Data-driven Region Generation Framework for Spatiotemporal Transportation Service Management
[ "Liyue Chen", "Jiangyi Fang", "Zhe Yu", "Yongxin Tong", "Shaosheng Cao", "Leye Wang" ]
cs.LG
[ "cs.LG", "cs.DB" ]
Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education & School of Computer Science, Peking University Beijing China [email protected] School of Artificial Intelligence and Automation, Huazhong University of Science and Technology Wuhan China [email protected] DiDi Chuxing Hangzhou China [email protected] State Key Laboratory of Software Development Environment, Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, Beihang University Beijing China [email protected] DiDi Chuxing Hangzhou China [email protected] [1] Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education & School of Computer Science, Peking University Beijing China [email protected] Corresponding authors. MAUP (modifiable areal unit problem) is a fundamental problem for spatial data management and analysis. As an instantiation of MAUP in online transportation platforms, region generation (i.e., specifying the areal unit for service operations) is the first and vital step for supporting spatiotemporal transportation services such as ride-sharing and freight transport. Most existing region generation methods are manually specified (e.g., fixed-size grids), suffering from poor spatial semantic meaning and inflexibility to meet service operation requirements. In this paper, we propose RegionGen, a data-driven region generation framework that can specify regions with key characteristics (e.g., good spatial semantic meaning and predictability) by modeling region generation as a multi-objective optimization problem. First, to obtain good spatial semantic meaning, RegionGen segments the whole city into atomic spatial elements based on road networks and obstacles (e.g., rivers). Then, it clusters the atomic spatial elements into regions by maximizing various operation characteristics, which is formulated as a multi-objective optimization problem. For this optimization problem, we propose a multi-objective co-optimization algorithm. Extensive experiments verify that RegionGen can generate more suitable regions than traditional methods for spatiotemporal service management. <ccs2012> <concept> <concept_id>10002951.10003227.10003236</concept_id> <concept_desc>Information systems Spatial-temporal systems</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010405.10010481.10010485</concept_id> <concept_desc>Applied computing Transportation</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Information systems Spatial-temporal systems [500]Applied computing Transportation A Data-driven Region Generation Framework for Spatiotemporal Transportation Service Management Leye Wang ============================================================================================== § INTRODUCTION The global transportation services market is expected to grow to over 7,200 billion and 10 trillion dollars in 2022 and 2026 over a compound annual growth rate of 9% according to the market report <cit.>. For online transportation platforms (e.g., Uber), region is the fundamental operation areal unit for spatiotemporal data management. For example, aiming to shorten passengers' waiting time, online transportation platforms first divide the whole city into several regions (e.g., 1km× 1km grids <cit.>) and dispatch idle drivers to hot areas based on the estimated demand for each region. 
Only with a well-managed set of regions, online transportation platforms can dispatch more orders to drivers and respond to passengers' ride-sharing requests more quickly. In general, how to determine the region (i.e., areal unit) for spatial data management is one of the most important problems in geo-science, widely known as the the modifiable areal unit problem (MAUP) <cit.>, since different region specifications may lead to diverged analysis outcomes <cit.>. Despite the importance of the region specification, current industry practice is mostly ad-hoc and manually defined, e.g., fixed shapes like grids or hexagons <cit.>, and street/administrative blocks <cit.>, without comprehensive assessment standards. This could incur uncontrollable and unanticipated effects on the operations of spatiotemporal services. As later shown in our experiments, compared to ad-hoc grid regions, an optimized region specification can reduce the spatiotemporal service demand prediction error by around 10%; this improvement is particularly noteworthy considering that a recent benchmark study <cit.> has shown that state-of-the-art deep models can improve prediction performance also by about 10% compared to classical methods (e.g., gradient boosted regression trees <cit.>). Hence, region specification is a crucial but largely under-investigated issue for spatiotemporal data management practice. Next, we briefly illustrate existing region specification methods and their pitfalls. Typically, fixed-shape methods divide the whole city into numerous regions with the same shape, such as grids and hexagons, in the absence of geographic semantic meaning.[Regions with good semantic meaning are bounded by roads <cit.>.] As demonstrated in Fig. <ref>(a), the Shanghai Expo Park is an entire geographic entity, but it is separated into several grids, violating the semantics of urban spaces. In addition, crossing a river usually takes longer for both automobiles and pedestrians than crossing a road. However, the fact that the two sides of the river are divided into the same grid makes it inconvenient to operate online transportation services. For instance, when a driver is guided to such a grid to wait for ride-sharing requests, she/he would be doubtful whether she/he needs to go across the river or not. Street/administrative-block methods use pre-defined spatial boundaries (i.e., roads and administrative boundary) for region generation. While these methods can keep good geographic semantic meaning compared to fixed-shape methods, they generally lack the flexibility to well fit various operation requirements of online transportation services. For instance, administrative blocks may be too coarse to enable fine-grained operations (especially for downtown areas with a large number of demands); meanwhile, road-segmented street blocks may be too small and thus most blocks hold nearly zero demands. A practically key spatiotemporal data management question then emerges: can we generate a good set of regions for online transportation platforms by meeting two goals: (i) with good geographic semantic meaning, and (ii) flexibly fitting various operation requirements of spatiotemporal services? To address the above problem, we propose and implement a unified region generation framework, called RegionGen, which enables the adaptive formation of regions for various spatiotemporal transportation services, such as taxi dispatching, freight transportation, and designated driver services. The general process of RegionGen follows two steps. 
In the first step, we adapt road segmentation techniques <cit.> to extract semantically-meaningful atomic spatial units. While the atomic spatial units may be too small for operation, the second step merges nearby units into larger regions, which can be seen as a spatial clustering process. In particular, our spatial clustering process needs to consider various realistic operation characteristics for spatiotemporal services, including generated regions' granularity, specificity, etc. <cit.>. More specifically, we argue that the temporal predictability plays a key role in transportation service operations. If the regions are generated with higher predictability, future events of interests (e.g., the number of ride-sharing requests in next one hour) are easier to forecast and thus service operation decisions can be made more accurately (e.g., how many drivers need to be dispatched to a certain region). Fig. <ref>(b) and (c) illustrate a toy example to show how spatial clustering can help generate a high-predictability region for freight services. A, B, and C are three atomic spatial units generated by road segmentation; nevertheless, their temporal patterns on freight are unstable, as each spatial unit contains fewer data and are susceptible to random disturbance. Then, performing operation management directly based on these three spatial units will be difficult. Meanwhile, we find that these three spatial units all contain university campuses; then, by clustering them together for operation, we obtain a highly-predictable merged region whose temporal regularity is significantly increased. In summary, our main contributions include: * To the best of our knowledge, this is one of the pioneering efforts toward generating regions considering key operation characteristics (e.g., good spatial semantic meaning and predictability) in spatiotemporal service. This is also one of the first data-driven solutions to MAUP for spatiotemporal transportation service operations. * RegionGen includes two main steps. First, to keep good spatial semantic meanings, RegionGen segments the whole city into atomic spatial elements based on road networks and obstacles (e.g., rivers). Second, it clusters the atomic spatial elements by maximizing the key operating characteristics with a multi-objective optimization process. * We conduct extensive experiments on three spatiotemporal service datasets. Results verify that RegionGen can generate more suitable regions than traditional methods for spatiotemporal transportation service. § OPERATION CHARACTERISTICS ANALYSIS AND PROBLEM FORMULATION There are two main challenges in designing such a region generation framework. The first is to guarantee the generated regions with good spatial semantic meaning. The second is to make the generated regions meet the operation requirements of spatiotemporal services. Our basic idea is to aggregate segmented fine-grained regions into large ones to acquire better operating characteristics (e.g., predictability). It is worth noting that the aggregated regions are still bounded by roads and not cross obstacle entities, and thus still guarantee good spatial semantics. Accordingly, we disassemble the region generation process in two steps. The first step is to generate many fine-grained regions with good spatial semantics by segmentation (in the rest of the paper, for clarity, the fine-grained regions are called atomic spatial elements). 
Then, the second step is to aggregate atomic spatial elements to obtain better operating characteristics. Especially, we first quantify several spatiotemporal services operating characteristics (Sec. <ref>). While there exist a variety of spatiotemporal services in practice, we have summarized three types of general operating characteristics (i.e., predictability, granularity, and specificity) that may benefit most services. §.§ Quantifying Operation Characteristics In most urban prediction applications (e.g., ride-sharing or dockless bike-sharing demand prediction), we usually expect regions with pleasant predictability, fine-grained spatial granularity, and high service specificity. Predictability refers to whether future events of interest for the service (e.g., ride-sharing demands) can be easily predicted. At the same time, many spatiotemporal services (e.g., traffic monitoring) hope that the spatial granularity can be fine-grained (e.g., the region size is not too large) to carry out precise management. Besides, high specificity indicates that the generated region closely matches the actual service area, i.e., the generated region has few redundant parts where (almost) no service is required (e.g., ride-sharing service operation regions may not need to cover mountain areas where cars cannot reach). We here quantify these general characteristics for the region generation process. Predictability. As there exist various prediction models that can be adopted in practice <cit.>, it is generally non-trivial to efficiently and precisely quantify the predictability of the spatiotemporal time-series data within regions. An intuitive way could be firstly fixing a prediction model and then measuring the prediction errors on test records (e.g., mean absolute error, Kaboudan metric <cit.>). Such measurements are highly model-dependent. That is, the error measurements obtained by one model cannot usually represent another model. As state-of-the-art spatiotemporal prediction models are mostly complex deep models <cit.>, training these models to measure predictability for every possible region generation candidate would be too time-consuming and thus practically intractable. We then propose to use model-agnostic measures to efficiently estimate regions' predictability without the need to repeatedly training deep models. Specifically, we select the auto-correlation function (ACF) as a fast proxy measurement for predictability. Suppose that {s_1,i,s_2,i,...,s_T,i} is the time series of region i, the ACF of region i after k slots delay is computed by: ρ^k_i=T·∑_t=k+1^T(s_t,i-s̅_̅i̅)(s_t-k,i-s̅_̅i̅)/(T-k)·∑_t=1^T(s_t,i-s̅_̅i̅)^2 where s̅_̅i̅ is the mean value of the time series in region i. ACF is often used to measure the periodicity of time series data. While high periodicity (e.g., daily periodicity) is one of the dominant indicators for accurate prediction <cit.>, we deem that high periodicity may be highly correlated with low prediction errors of deep models. To investigate whether such correlations exist, we explore the relationship between the ACF after 24 hours delay (called ACF_daily) and prediction errors regarding a state-of-the-art deep forecasting model (i.e., STMGCN <cit.>) in Fig. <ref> (Appendix <ref>). The results verify that ACF_daily and prediction errors are highly correlated, especially when ACF_daily is large; that is, maximizing the ACF_daily would practically decrease the prediction errors significantly. 
Therefore, in this work, we adopt the ACF_daily indicator for efficiently measuring the predictability of regions. Granularity. In real-world applications, to keep good spatial semantics, it is difficult to enforce the generated region with specific shapes and we typically measure the region size by the region area. More importantly, keep in mind that the greater the area, the more spatiotemporal data it contains, and the less random noise in the time series, the more regular the time series are, and the more predictable they are. Therefore, unlike predictability, we cannot require that the generated regions be as small as possible. As a result, we need to make a trade-off between good predictability and fine-grained spatial granularity. In practice, we typically meet the need for fine-grained spatial granularity by imposing the maximum region area constraint. Suppose that P_i is the geographic shape of region i, stored as a set of vector coordinates, the granularity of region i is quantified by its area: ts_i = Area(P_i) where Area(·) is the area of the 2D polygon calculation function, having been widely integrated into tools, e.g., ArcGIS.[https://www.arcgis.com/] Specificity. For most spatiotemporal services, the regions for operation management do not always need to cover the entire target area (e.g., a city), as many sub-areas may not have the service requirements (e.g., ride-sharing services may not be required for mountain sub-areas where vehicles cannot access). To this end, the generated regions suitable for operation management would prefer to cover only those service-required sub-areas. We thus propose the specificity to measure the ratio of service-required sub-areas within the generated regions. Then, low-specificity regions mean that there exist a lot of redundant sub-areas that do not have service requirements and may negatively impact the operation efficiency. For example, when providing online ride-sharing services, if the service specificity of a region is low, the ride-sharing demands are mainly concentrated in only a few hotspots. When dispatching idle drivers to this region, a large number of drivers may influx to the same place, causing traffic congestion. Particularly, we define the ratio of the service-required area (vs) to the total area of the region (ts) as the specificity, where the service-required area is counted according to historical service records. While spatiotemporal data records usually store spatial information in the format of points (e.g., latitude and longitude), we first convert these data points to Geohash <cit.> units for further area size calculation. Especially, we use 8-bit geohash (Fig. <ref> in Appendix <ref> displays the geohash in the service-required area and the whole area) as the calculation unit to get an efficient and effective approximation to the area size. The service specificity of the region i is calculated as: c_i = vs_i/ts_i≈# of historically-serviced geohash in region i/# of total geohash in region i §.§ Problem Formulation §.§.§ Atomic Spatial Element Segmentation Problem Given road networks and geographic obstacles (e.g., rivers), the segmentation problem aims to generate road segments bounded by roads and not overlapping with obstacles. 
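The three measures defined above can be computed directly from raw service data; the sketch below is illustrative only. The autocorrelation follows the definition of ρ_i^k applied to an hourly series with a one-day lag, the area uses the shoelace formula on the region polygon, and the specificity approximates the 8-bit geohash units by snapping coordinates to a fixed-size grid (in practice a geohash library would be used instead).

```python
import numpy as np

def acf_daily(series, lag=24):
    """Autocorrelation of an hourly series at a one-day lag (the rho_i^k definition above)."""
    s = np.asarray(series, dtype=float)
    sbar = s.mean()
    num = len(s) * np.sum((s[lag:] - sbar) * (s[:-lag] - sbar))
    den = (len(s) - lag) * np.sum((s - sbar) ** 2)
    return num / den

def polygon_area(coords):
    """Shoelace area of a region polygon given as a sequence of (x, y) vertices."""
    x, y = np.asarray(coords, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def specificity(region_cells, serviced_points, cell_size=0.0002):
    """Share of a region's cells containing at least one historical service record.

    region_cells    : set of grid cells covering the region, built with the same snapping
    serviced_points : iterable of (lon, lat) service-record locations
    """
    snap = lambda lon, lat: (int(lon // cell_size), int(lat // cell_size))
    serviced_cells = {snap(lon, lat) for lon, lat in serviced_points} & set(region_cells)
    return len(serviced_cells) / len(region_cells)
```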
§.§.§ Atomic Spatial Element Clustering Problem Given a graph G of N nodes (i.e., the atomic spatial elements 𝒫), the adjacency matrix A∈ℝ^N× N (details in <ref>), spatiotemporal raster data D∈ℝ^T× N which is extracted from 𝒫 and historical service records 𝒟={(l_i,t_i)} (l_i and t_i are the locations and the timestamp that the service takes place), the number of time intervals T, maximum area constraint L, and the service-required area and total area vs, ts ∈ℝ^N× 1 of atomic spatial elements, we aim to cluster N atomic spatial elements to M clusters (2 ≤ M < N) by maximizing the average predictability and service specificity of clusters. Maximize f_1(X)= 1/M∑_j=1^Mρ_j^daily Maximize f_2(X)= 1/M∑_j=1^M∑_i=1^Nx_i,j· vs_i/∑_i=1^Nx_i,j· ts_i s.t. ∑_j=1^Mx_i,j =1 ∀ i ∈ [N] ∑_i=1^Nx_i,j≥ 1 ∀ j ∈ [M] ∑_i=1^Nx_i,j· ts_i ≤ L ∀ j ∈ [M] x_u,i+x_v,i - ∑_z∈𝒮x_z,i≤ 1 ∀ A_u,v=0, 𝒮∈Γ(u,v) where X∈{0,1}^N× M is the binary clustering results. x_i,j=1 denotes i^th atomic spatial element belong to j^th cluster. S=D × X ∈ℝ^T × M is the aggregated ST raster data. ρ_j^daily is the ACF of the time series in the j^th cluster after one day delay. Eq. <ref> impose each atomic spatial element can only be assigned to at most one cluster. Ineq. <ref> ensures that each cluster contains at least one atomic spatial element. The overall area of each aggregated cluster must be less than the specified maximum area (Ineq. <ref>). Ineq. <ref> guarantees that every cluster induces a connected subgraph, where u,v are two non-adjacent nodes in G. A set 𝒮⊆ V\{u,v} is a (u,v)-separator if u and v belong to different components of G-S. We denote by Γ(u,v) the collection of all minimal (u,v)-separators in G <cit.>. Remark. Eq. <ref> - Eq. <ref> present a bi-objective optimization problem to maximize both the specificity and predictability of clustered regions. The optimization problem is NP-hard as it could be reduced to the well-known balanced graph partition problem, considering a special case that the optimized clustering result may be a balanced partition based on the data volume.[In practice, it is possible as more data often leads to better predictability (Fig. <ref> in Appendix) and balanced partitioned clusters may have high predictability on average.] Hence, approximation algorithms or heuristics are needed to be designed to find a good solution in a reasonable amount of time. § METHOD §.§ Framework Overview The proposed region generation framework RegionGen consists of two core components, including the atomic spatial element segmentation and clustering (Fig. <ref>). The atomic spatial elements segmentation component first extracts atomic spatial elements with good spatial semantic meaning by using the proposed obstacle-aware road map segmentation techniques. Then, the spatiotemporal data filtering block removes the atomic spatial elements with less service data to reduce the scale of the subsequent clustering problem. The atomic spatial elements clustering component first represents the atomic element clustering problem in graph formats, where the nodes are the atomic elements. By combining domain knowledge, we establish the edge sets, which indicate that related nodes can be aggregated into the same cluster. The cluster scale estimation component provides a minimal cluster number that enables the aggregated clusters to meet the operation requirements (i.e., granularity in Eq. 
<ref>), allowing for adapting the well-researched graph partition approaches to address the clustering problem that requires the fixed partition number (i.e., clustering number) as inputs. Finally, the predictability-specificity co-optimization component generates regions by solving the atomic spatial element clustering problem. §.§ Atomic Spatial Element Segmentation §.§.§ Obstacle-aware Road Map Segmentation Road segments provide us with more natural and semantic meaning than fixed-shape methods. In the real world, geographic entities (e.g., parks and residential areas) are bounded by roads and people live in these roads-segmented regions and POIs (Points of Interests) fall in these regions instead of the main roads <cit.>. Previous research <cit.> adopt image-based road segmentation techniques that mainly consist of three morphological operators, namely dilation, thinning, and connected component labeling (CCL). Dilation is designed to remove some redundant road details for segmentation avoiding the small connected areas induced by these unnecessary details (e.g., the lanes of roads and the overpasses). The thinning operator aims to extract the skeleton of the dilated road segments while keeping the topology structure of the original image. The CCL operator finds the connected pixels with the same label in the image and eventually generates road segmentation. Although these road segmentation techniques can generate regions bounded with roads and keep good spatial semantic meaning. We argue that the generated regions may still be inconvenient for operating spatiotemporal services since the regions may cross obstacle entities (e.g., rivers). To overcome this shortage, we proposed the obstacle-aware road map segmentation method that incorporates obstacle entities into the road map segmentation process, enabling the road segmentation will not to stretch over obstacle entitles and thus be more suitable for operating services. Fig. <ref> shows the procedure of the obstacle-aware road map segmentation method and the main intermediate results of an example. The main improvement lies in the obstacle entities. Geographic obstacle data usually stores in vector form in spatial databases (e.g., OpenStreetMap) and we convert the vector-based obstacle data into binary images. Each pixel represents a grid cell (`1' denotes the obstacle segments, and `0' stands for the background). Then, we fuse the obstacle raster with thinned road raster (after dilation and thinning), so that the obstacles entities will also be the boundaries for segmentation. The generated regions are called atomic spatiotemporal elements 𝒫^'. Besides, we should use fine-grained level road data since we expect fine-grained spatial operation and the atomic spatial elements need to be as small as possible. §.§.§ Spatiotemporal Data Filtering The obstacle-aware road map segmentation method outputs 𝒫^', containing N^' atomic spatial elements. Due to the heterogeneous spatial data density, many spatial atomic elements hardly contain spatiotemporal service data and are unnecessary to store. Filtering them helps reduce the scale of subsequent clustering problems and obtain better spatial granularity for supporting spatiotemporal services. We impose restrictions on the atomic elements based on the service data, filter out atomic elements whose data amount (e.g., daily average demand) is smaller than α, and then get N main atomic spatial elements. 
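A minimal sketch of this segmentation-and-filtering pipeline, written with scikit-image morphological operators, is given below; the function names and choice of library are ours, and the 5x5 dilation footprint mirrors the setting reported later in the experimental configuration.

import numpy as np
from skimage.morphology import binary_dilation, thin, square
from skimage.measure import label

def obstacle_aware_segmentation(road_raster, obstacle_raster):
    # road_raster / obstacle_raster: binary images, 1 = road / obstacle pixel, 0 = background.
    # 1) Dilation: remove redundant road details (lanes, overpasses) that create tiny components.
    dilated = binary_dilation(road_raster.astype(bool), square(5))
    # 2) Thinning: extract the road skeleton while preserving topology.
    skeleton = thin(dilated)
    # 3) Fuse obstacles so that they also act as segmentation boundaries.
    boundaries = skeleton | obstacle_raster.astype(bool)
    # 4) Connected-component labeling on the non-boundary pixels -> atomic spatial elements.
    elements = label(~boundaries, connectivity=1)
    return elements  # integer label image; each label is one atomic spatial element

def filter_elements(daily_demand_per_label, alpha):
    # Keep only atomic elements whose average daily demand reaches the threshold alpha.
    return [lbl for lbl, demand in daily_demand_per_label.items() if demand >= alpha]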
§.§ Atomic Spatial Element Clustering §.§.§ Graph Generation To solve the clustering problem for region generation (Sec. <ref>), we transform the problem into graph format so that the connected constraint (i.e. Eq. <ref>) is easily satisfied by adopting many well-studied techniques (e.g., connected graph partition). Every atomic spatial element can be regarded as a node in the graph and their edges denote their `aggregatable' property. That is, the edges between two nodes represent that the predefined constraints (e.g., maximum area and geographic adjacent constraints) are still satisfied after merging these two connected nodes. In this section, we elaborate on building the edge sets of atomic spatial elements based on domain knowledge and geographic constraints. Domain Knowledge. The atomic spatial elements may have the following properties, which indicate that the atomic elements cannot or unnecessarily aggregate with other atomic elements: * Oversize: The road network may be scarce in some places (e.g., suburbs), and obstacle-aware road map segmentation may produce huge atomic elements whose area may exceed the maximum area specified by the service operator. At this time, aggregating the large atomic elements will generate an oversized region, which makes the operation difficult. * Already Predictable: In urban hot spots, such as commercial areas, small atomic elements may contain a lot of spatiotemporal data, and they may be predictable even if not aggregated with other atomic elements. In this situation, it is not necessary to aggregate, and small regions can retain fine-grained spatial granularity. Geographic Constraints. The `aggregatable' nodes must meet the following two geographic constraints (denote as 𝒞), otherwise, the clustered regions are useless for spatiotemporal services: * Adjacent Constraint: `Aggregable' atomic elements should be geographically adjacent. In practice, we usually set a small threshold τ (e.g., 50 m), and when the shortest distance between the two atomic elements is less than τ, two atomic regions are defined as geographically adjacent. * Obstacle Constraint: Regions with good spatial semantics do not cross obstacle entities (e.g., rivers). To ensure that the aggregated regions still keep good spatial semantics, if there is an obstacle between the two atomic elements, there will be no edge between these two atomic spatial elements. Fig. <ref> shows an example of the graph generation process of 7 atomic spatial elements. First, based on the ACF and area attributes, Node 1 and Node 3 will be separate nodes due to oversize area and high predictability (i.e., ACF). We then calculate the geographic distance for the remaining 5 nodes (Node 2, 4, 5, 6, 7). Among these 5 nodes, there is a river between Node 2 and the remaining nodes, so there are no edges between them (marked with red dotted lines). Next, the distances between Node 2 and Node 4, Node 4 and Node 6, etc., exceed the given threshold (τ= 50m) and therefore do not have edges between them (marked with blue dotted lines). Node 4 and Node 5, Node 5 and Node 6, etc., are closely adjacent and do not cross the river, and thus, can be aggregated (marked with black lines). With the above process, the graph generation module eventually produces a graph G with atomic elements as nodes (V) and their adjacency matrix A (equivalent to the edge set E of G). §.§.§ Cluster Scale Estimation One challenge in solving the clustering problem is ensuring that the solutions meet the maximum area constraints (Ineq. <ref>). 
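Before turning to cluster scale estimation, the graph-generation rules just described (isolated nodes for oversize or already-predictable elements, the adjacency threshold τ, and the obstacle constraint) can be sketched as follows; the straight-segment test between centroids is only our approximation of the "no obstacle in between" check, and the default thresholds correspond to the settings reported in the experiments.

import networkx as nx
from shapely.geometry import LineString

def build_aggregatable_graph(elements, obstacles, tau=50.0, max_area=5.0e6, acf_threshold=0.5):
    # elements: dict id -> {"poly": shapely Polygon, "acf": ACF_daily of the element}
    # obstacles: list of shapely geometries (e.g., rivers); areas in m^2, distances in m.
    G = nx.Graph()
    G.add_nodes_from(elements)
    # Oversize or already-predictable elements remain isolated nodes.
    candidates = [i for i, e in elements.items()
                  if e["poly"].area <= max_area and e["acf"] <= acf_threshold]
    for a in range(len(candidates)):
        for b in range(a + 1, len(candidates)):
            i, j = candidates[a], candidates[b]
            pi, pj = elements[i]["poly"], elements[j]["poly"]
            # Adjacent constraint: shortest distance below the threshold tau (e.g., 50 m).
            if pi.distance(pj) > tau:
                continue
            # Obstacle constraint (approximate): no obstacle crossing the segment between centroids.
            link = LineString([(pi.centroid.x, pi.centroid.y), (pj.centroid.x, pj.centroid.y)])
            if any(link.intersects(obs) for obs in obstacles):
                continue
            G.add_edge(i, j)
    return G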
The maximum area constraint depends on the number of clusters (M) required to operate services in a city of a given size. For instance, if a 100 km^2 city has a maximum area of 5 km^2 for each region, 20 regions may be necessary; however, if this constraint changes to 10 km^2, only 10 regions are needed. To estimate the clustering scale and satisfy these constraints, we gradually increase M and check whether they are met. We define an ideal minimum cluster number as M^* which satisfies ⌈∑_i=1^Nts_i/L⌉ < M^* < N. Here, M^* represents the minimum cluster scale below which feasible aggregation results cannot be obtained. To test whether maximum area constraints are satisfied at different trial cluster scales, we require a fast solver that can obtain feasible aggregated solutions quickly. Existing balanced graph partition methods <cit.> have been shown to be efficient enough for our purposes (a sparse graph with less than 10k nodes). We use Metis software package as our fast solver and assign node weights based on spatiotemporal data amounts while minimizing edge-cut tolerating up to 5% unbalance. This method is called D-Balance. §.§.§ Predictability-Specificity Co-optimization The above fast-solver can generate feasible but insufficiently superior solutions, and the specificity is not taken as the optimization objective. Hence, we design a heuristic predictability-specificity co-optimization algorithm that can iteratively refine multiple objectives, based on the famous node-swapping local search approaches (i.e., Fiduccia-Mattheyses <cit.>). We summarize the proposed algorithm in Algorithm <ref> (Appendix <ref>). Our algorithm first initializes Pareto solution sets 𝒴 with simple strategies (e.g., random growth or greedy). There are three initialization methods (details in Appendix <ref>): * D-Balance balances the data across cluster (details in Sec. <ref>). * Greedy <cit.> grows clusters by choosing the maximum gain. * Fluid <cit.> aggregates nodes by random propagation. Then, at each iteration, it first selects a candidate solution from 𝒴 and iteratively improves it by moving boundary nodes that connect two clusters to obtain the positive gain (obtaining better predictability or specificity) while not violating the geographic constraint 𝒞. The way of selecting the candidate solutions from Pareto solution sets 𝒴 will determine the final solution quality. We hold the Pareto optimal solutions rather than weighting them into a single objective (e.g., linear scalarization <cit.>). At the beginning of each iteration, we sample a random number ∈ [0,1] from uniform distribution and compare it with a predefined parameter w ∈ [0,1]. If w is bigger, we choose the best-predictability solution (probability w) from 𝒴. Otherwise, we select the best-specificity solution (probability 1-w). That is, w represents the preference (probability) of optimizing the predictability objective. As the example shown in Fig. <ref>, we first choose the candidate solution with probability w=0.7. Then the selected best-predictability solution will serve as the starting solution for the refinement. We record all positive gain movements and append them into 𝒴 for the next iteration. Note that, in one iteration, even if we choose the best-predictability solution for refinement, we still record the better-specificity solutions during refinement. The algorithm stops when no more positive gain can obtain or achieve the max iterations. Time Complexity. 
The refinement process improves solutions by swapping the nodes having edges, and we will try every pair of nodes in the worst case. Since we limit the max iteration number to n, the time complexity of the co-optimization algorithm is O(n| E|), where E is the edge set of G. § IMPLEMENTATION AND DEPLOYMENT As illustrated in Fig. <ref>, our system mainly has two parts, i.e., offline periodical region optimizing and online region query for the transportation services. In the offline optimizing part, we adopt the `T+1' mode to update the generated regions, which means the regions will be optimized and uploaded to Hive [https://hive.apache.org/] in an offline manner on a daily basis and will be used for downstream spatiotemporal transportation services for the next day. In the online part, Redis[https://redis.io/], an online real-time database, daily updates the region polygons from Hive and respond to online queries from online transportation services. For example, when providing demand heatmap services, region polygons are queried for visualization. The offline part of RegionGen is capable of optimizing fine-grained spatial atomic elements on a daily basis while the online transportation service only needs to look up the region polygons from Redis and the response time is around 100ms. The current run time of RegionGen is around 2 hours (Sec. <ref>) with serial processing, which is already enough for `T+1' daily updates. Meanwhile, it can be accelerated by parallel processing. In each iteration, the co-optimizing algorithm in Sec. <ref> refines solutions by moving all boundary nodes, which can be parallelized (i.e., each processor deals with a part of nodes). In practice, our RegionGen system has deployed to support online real-world services, such as demand heatmap visualization. § EVALUATION §.§ Datasets, Baselines, and Experiment Settings We collect three transportation service datasets (designated driver, freight transport, and taxi demand) as well as the geographic entities (roads and obstacles). Dataset details are in Appendix <ref>. We implement widely-adopted manually-specified region generation methods (Grid, Hexagon) and data-driven baselines (DBSCAN <cit.>, GSC <cit.>, GCSC <cit.>). Our experiment platform is a server with 10 CPU cores (Intel Xeon CPU E5-2630), and 45 GB RAM. More baseline and experimental setting details are in Appendix <ref>. §.§ Evaluating with Spatiotemporal Prediction To evaluate the effectiveness of RegionGen for accurately operating spatiotemporal services, we conduct demand predictions on three spatiotemporal service datasets, which are the basic capabilities of various downstream tasks, such as scheduling idle drivers. Specifically, we predict the demand in the next hour for all datasets. §.§ Evaluation Metric We evaluate the quality of the generated region by two operational characteristics, namely ACF_daily and service specificity. Besides, popular metrics (including RMSE and MAE) are not feasible to directly evaluate the prediction performance, since the prediction ground truths of various region generation methods are different. The Mean Absolute Percentage Error (MAPE) measures the relative errors and thus different predictive objects are comparable. However, different regions contain varying amounts of service data, to make a fair comparison, we ensure the amount of service data of different regions is approximately equal (i.e., demand recall). 
Specifically, we filter out the regions whose average daily demand is less than 1, and get recalls of 97%, 98%, and 99.8%, respectively, in the designated driver, freight transport, and taxi demand datasets. §.§ Results and Analysis §.§.§ Main results In Table <ref>, we report the ACF_daily, Specificity, and MAPE@Recall on the designated driver and freight transport dataset under the same clustering scale. RegionGen gets Pareto optimal solutions (w is 0.7) and we report two solutions best in the ACF_daily and specificity metrics respectively. In Table <ref>, we report the results of ACF_daily and MAPE@Recall on the taxi dataset, since the latitude and longitude of the original data are aggregated into the census tracts, making the `specificity' metric inapplicable. Table <ref> shows that RegionGen achieves better ACF_daily than the baselines and gets corresponding lower MAPE@Recall. Especially, RegionGen (Best ACF) consistently outperforms baselines in terms of MAPE@Recall, decreasing 3.2% and 3.3% than Grid in the designated driver and freight transport dataset. The above observations demonstrate the generated regions with better predictability support operating more accurate services. They also provide us with new insight that, besides predictive models, the prediction performance can be significantly improved by generating regions with better predictability. The results on the taxi dataset in Table <ref> are similar. RegionGen consistently gets the best ACF_daily and prediction accuracy. In addition, we record the run time of RegionGen. For one city, RegionGen can finish region generation in two hours, which is enough for real-life deployment and usage (detailed discussion in Sec. <ref>). §.§.§ Robustness analysis on open datasets The main results are conducted on two private spatiotemporal service datasets. To help reproduce our results, we also experiment on an open dataset, Chicago taxi demand. Moreover, we conduct spatiotemporal prediction based on various state-of-the-art models, including STMGCN <cit.>, GraphWaveNet <cit.>, GMAN <cit.> and STMeta <cit.>, as these four models have performed competitively in a recent benchmark study <cit.>. Results in Table <ref> show that RegionGen outperforms baselines consistently. Despite small differences in the prediction accuracy of the four models, MAPE is highly correlated to ACF_daily. For instance, RegionGen obtains the highest ACF_daily, and achieves the lowest MAPE consistently under four models. This confirms that our choice of ACF_daily as a measurement for the predictability of regions is acceptable and effective. §.§.§ Robustness analysis under different recall settings To see whether RegionGen is robust under different recall settings, we unceasingly remove the tailed atomic spatial elements that are with the fewest data samples. Fig. <ref> show how ACF_daily changes by varying the recall. We observe that RegionGen (marked with red lines) consistently achieves the best ACF_daily, demonstrating its robustness. Among baselines, with the recall decreasing, ACF_daily of DBSCAN increases the fastest (green lines). It shows that DBSCAN has a relatively obvious long-tail phenomenon; that is, the ACF_daily of the tailed elements is much lower than the other elements, which may incur inconvenience for operating services upon these tailed regions. Besides, fixed-shape methods, Grid and Hexagon (blue lines) perform consistently the worst. 
Note that fixed-shape regions are still popular for spatiotemporal service management in practice, but these results again point out their weakness. Hence, it would be expected that new region generation technologies, such as RegionGen, will significantly advance the field. §.§.§ Analysis of scalability To analyze the scalability of RegionGen, we conduct experiments on different scales (i.e., the number of atomic elements N). Fig. <ref> shows our results. We explored the change of RegionGen with the scale by dividing the target area into different numbers of atomic elements (from 100 to 10,000).[In this experiment, we choose grids as the atomic spatial elements since they easily adapt to different granularity.] We observe that as the scale increases, the running time and the converge epochs of the algorithm also increase synchronously. It is worth noting that 10,000 atomic elements can already support fine-grained spatiotemporal service in a metropolis like Shanghai (i.e., each atomic element covers around 0.01 km^2); with 10,000 atomic elements, RegionGen takes 17 hours, which still satisfies the need for the `T+1' update mode in our realistic deployment (Sec. <ref>). This demonstrates that RegionGen is capable of optimizing very fine-grained spatial atomic elements. §.§.§ Effect of w In Fig. <ref>, we display the ACF metric of the best-predictability solution and the Specificity metric of the best-specificity solutions from different Pareto solution sets obtained by changing w, the probability of optimizing the predictability objective. We observe that: (i) with the increase of w, the ACF metric gradually increases and the specificity metric decreases; (ii) when w is set to 0.4, the w-hold mechanism prefers to optimize specificity, but still achieves solutions with reasonable predictability. Hence, in practice, we may obtain a solution with both good predictability and high specificity by setting an appropriate w. §.§.§ Effect of different initialization methods In Fig. <ref>, we compare different initialization methods including D-Balance, Greedy, and Fluid by generating initial solutions with 30 different random seeds for each method. We record the initial ACF (called Init ACF) and the ACF when the algorithm stops (called Last ACF). We observe that (i) the final solutions given by Greedy and Fluid are quite different with diverse random seeds (the difference in terms of ACF exceeds 0.02); (ii) the Init ACF is linearly related to the last ACF, which inspires us to choose an initial solution with better quality for further optimization; (iii) The Last ACF of D-Balance solutions are better than Greedy and Fluid (closer to the upper area), demonstrating that it is more capable and more robust to be the initialization method for generating high-quality solutions. §.§ Case Study We visualize the generated regions of baselines and RegionGen in the downtown area of Chicago on the taxi demand dataset. We filter out regions with few historical service data for all region generation methods and each color represents a region. Fig. <ref> displays the demand heatmap and red indicates high-density areas. In Fig. <ref> and Fig. <ref>, Grid and Hexagon generate regions with poor spatial semantics, while hotspots and cold areas are of the same spatial granularity. Based on census tracts, in Fig. <ref>, regions clustered by DBSCAN are with good spatial semantic meaning. 
DBSCAN prefers to aggregate more census tracts in the high-density area (i.e., hotspots with many data points), which offers excellent predictability but low spatial granularity. At the same time, in cold areas with few data points, DBSCAN preserves a single census tract (good spatial granularity) but with bad predictability. In Fig. <ref>, the aggregate regions by BSC will not be oversize like DBSCAN since the first partition stage in BSC makes all aggregated regions geographically close. However, the nearby census tracts may not be strictly adjacent, and nonadjacent census tracts with similar demand matrices will also be aggregated (marked with red boxes), resulting in inappropriate clustering results. In Fig. <ref>, despite generating spatial continuous regions and obtaining sufficient adaptive granularity (small regions in hotspots and big regions in cold areas), GCSC may still fall short of successfully optimizing the predictability. For example, census tract A is predictable (with much data to support clear daily regularity), yet GCSC continues to aggregate it with other census tracts. In Fig. <ref>, RegionGen gets reasonable adaptive granularity and good predictability (small regions in hotspots and big regions in cold areas). Specifically, RegionGen takes census tract A (already predictable) alone as a cluster, demonstrating that it optimizes the predictability well. In Fig. <ref>, we explore ACF_daily distribution of different regions on baselines and RegionGen. For RegionGen (red line), there are fewer regions with low ACF_daily than all baselines, while having more regions with larger ACF_daily. Therefore, RegionGen obtains better predictability by balancing the spatiotemporal data over different regions. That is, it prefers to cluster fewer atomic spatial elements in hotspots and more in cold areas. § RELATED WORK With the wide adoption of GPS-equipped devices and the great success in machine learning models, massive historical spatiotemporal data (e.g., GPS trajectory data <cit.>) has been widely used to support intelligent transportation services, including traffic prediction <cit.>, travel time estimation <cit.>, transportation route recommendation <cit.>, trajectory similarity computing and outlier detection <cit.>, bus route planning <cit.>, outdoor advertising <cit.>, and crowdsourcing <cit.>. The transportation services may be operated upon specified areal units, e.g. by fixed-size grids <cit.>. Existing transportation services may benefit from our region generation framework. For example, for spatial crowdsourcing pricing applications (e.g., food delivery services), previous research demonstrated that spatial crowdsourcing needs to price for multiple local markets based on the spatiotemporal distributions of tasks and workers than seek a unified optimal price for a single global market <cit.>. The regions created by our framework are more suitable for estimating the spatiotemporal distribution of works (e.g., making the prediction of the supply of workers more accurate) and thus the pricing strategies are easier to give than grids <cit.>. Moreover, with better spatial semantic meaning, our regions may support further analysis correlating with regions' functionality. § CONCLUSION In this paper, we present a unified data-driven region generation framework, called RegionGen, which can flexibly adapt to various operation requirements of spatiotemporal services while keeping spatial semantic meaning. 
To keep the good spatial semantic meaning, RegionGen first segments the whole city into atomic spatial elements based on the fine-grained road networks and obstacle entities (e.g., rivers). Then, it aggregates the atomic spatial elements by maximizing key operating characteristics such as predictability and specificity. Extensive experiments have been conducted in three transportation datasets including two industrial datasets and an open dataset. The results demonstrate that RegionGen can generate regions with better operating characteristics (including spatial semantic meaning, predictability, and specificity) compared to other region generation baselines under the same granularity. Moreover, we conduct demand prediction services upon the generated regions, and RegionGen still achieves the best performance, verifying its effectiveness. As a pioneering study toward the adaptive region generation problem, we expect that our research can benefit online transportation platforms to provide intelligent and satisfactory transportation services. This work was partly supported by National Science Foundation of China (NSFC) Grant No. 61972008 and CCF-DiDi GAIA Collaborative Research Funds for Young Scholars. Yongxin Tong's work was partially supported by National Science Foundation of China (NSFC) Grant No. U21A20516. ACM-Reference-Format § ILLUSTRATIVE EVIDENCES §.§ ACF_daily is a proper proxy for measuring predictability §.§ More data samples support more obvious daily regularity §.§ Example of calculating specificity § MULTI-OBJECTIVE NODE SWAPPING ALGORITHM FMainMovableBoundary FnFunction: § DATASET DESCRIPTION The Designated Driver Dataset and Freight Transport Dataset are both sampled from a world-leading online transportation company. They include the designated driver orders and the freight transport orders from Oct. 2020 to Aug. 2021 in a mega-city. The designated driver order typically takes place like this: after drinking alcoholic beverages, people are not allowed to drive and seek help from the sober designated driver to take them home. The freight transport service is similar to online ride-sharing services. That is, users send orders online, and truck owners receive orders online and provide transportation services. We sample a certain percentage of these two datasets and get a 10-month dataset with 8,000,000 and 7,000,000 records respectively. Each record contains the start time and location (longitude and latitude). Open Taxi Dataset. The taxi demand dataset is collected from Chicago's open data portal.[https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew] The dataset describes taxi trip records including the pickup time and location. Note that to protect privacy, the latitude and longitude of the trips are not recorded in the dataset; instead, the location is reported at the census tract level. Considering that census tracts already hold good spatial semantics, we use census tracts as the atomic spatial elements for clustering (without the need to do segmentation). We obtain the polygon boundaries of the census tracts from Chicago's open data portal[https://data.cityofchicago.org/Facilities-Geographic-Boundaries/Boundaries-Census-Tracts-2010/5jrd-6zik]. Road and Obstacle Dataset. 
Road and obstacle data are collected from OpenStreetMap (OSM).[We download OSM data from https://download.geofabrik.de/] To produce fine-grained level road segmentation, we choose the majority of vehicle roads, primarily those classified as `motorways', `primary', `secondary', and `tertiary'. We extract river records from OSM waterway data. § BASELINES AND EXPERIMENTAL SETTING §.§ Baselines For a fair comparison, RegionGen and all the baselines are tuned to generate the same number of regions (M). Specifically, M is set to 913, 724, and 95, respectively, for the designated driver, freight transport, and open taxi datasets. The baselines are as follows. * Grid and Hexagon: With poor spatial semantic meaning, we split the city into several grids/hexagons of equal size. The elements without spatiotemporal data will be filtered out. * DBSCAN: DBSCAN <cit.> is used for clustering transportation service orders' location points. It is a point clustering method and cannot output the shape of the generated regions directly. Previous research has proposed several station-based clustering methods <cit.>, which aggregate nearby stations with similar spatiotemporal patterns. To adopt these methods, we take the atomic spatial elements as stations to generate regions by clustering. * BSC <cit.>: The Bipartite Station Clustering (BSC) method groups stations into clusters based on their geographical locations (first partition) and transition patterns (second partition). As our datasets include demand records, we cluster the station by the demand matrix instead of the transition matrix in the second partition. * GCSC <cit.>: The Geographically-Constrained Station Clustering (GCSC) method groups stations into clusters, making each cluster consist of neighboring stations with similar usage patterns. §.§ Experimental Setting of Region Generation The road and obstacle vectors in the entire city (about 80km× 70km) are converted into a binary raster with 8000× 6000 pixels. We apply a small 5× 5 dilation kennel as Yuan et al. <cit.>. In the graph generation component, atomic elements whose ACF_daily>0.5 or area>5 km^2 are separate nodes. The geographic adjacent distance τ is 50m. The maximum area constraint is 5 km^2. §.§ Setting of Spatiotemporal Prediction To conduct demand prediction, we first select the spatiotemporal data in the last 10% duration in each dataset as test data and the 10% data before the test data as the validation test. All region generation methods are based on the data in the train set. We apply three state-of-the-art predictive models, including STMGCN <cit.>, STMeta <cit.>, and GraphWaveNet <cit.>). To capture spatial dependences, we introduce distance and correlation graphs as Wang et al. <cit.>. The distance graphs are calculated based on the Euclidean distance while the correlation graphs are computed by the Pearson coefficient of demand series. The hyperparameters of these deep models are fine-tuned by grid search and the hidden states of the STMGCN network are 64 (the dimension of spatiotemporal representations). The degree of graph Laplacian is set to 1. We use the Adam optimizer with learning rate = 1e-4. § INITIALIZATION METHODS D-Balance. Namely the fast solver in Sec. <ref>, we get the results by assigning the node weight with the amount of spatiotemporal data and minimize the edge-cut (to isolate the nodes with small degrees) while tolerating 5% unbalance. Greedy. Graph growing is a widely adopted graph partition technique <cit.>. 
The simplest growing method starts from a random node v, remaining nodes are assigned to the same cluster using a breadth-first search. The growth stops when a certain constraint transgresses. We extend the growing method by a greedy strategy to guide the node growth and optimize the objectives (e.g., predictability). First, we randomly select M nodes as the initial points of M clusters. Then we add unassigned nodes to the assigned clusters in turn by picking the unassigned node with the greatest gain. The following equation specifies the gain of appending node v: gain(v) = λ·ΔACF + (1-λ) ·ΔSpecificity where ΔACF and ΔSpecificity are the change of the ACF_daily and specificity after appending node v into assigned sets. λ (usually set to 0.7 in practice) is a trade-off parameter between choosing better predictability solutions or better specificity solutions. Therefore, the Greedy solver converts the original problem into a single-objective optimization problem by the linear scalarization. Fluid <cit.> is a community detection technique based on how fluids interact with one another and change size in their environment. We get the solutions by giving the `aggregatable' graph G=(V, E) and using a propagation-based approach with predefined cluster numbers.
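A possible sketch of the Greedy initializer described above is given below; gain_fn, which recomputes the changes in ACF_daily and specificity induced by attaching node v to a cluster, is a placeholder we introduce, and unreachable leftover nodes are handled in the simplest possible way.

import random

def greedy_grow(G, M, gain_fn, lam=0.7, seed=0):
    # Greedy graph growing: start from M random seeds and repeatedly attach the
    # unassigned frontier node with the largest gain
    #     gain(v) = lam * delta_ACF + (1 - lam) * delta_specificity.
    rng = random.Random(seed)
    seeds = rng.sample(list(G.nodes), M)
    clusters = {j: {s} for j, s in enumerate(seeds)}
    assigned = set(seeds)
    while len(assigned) < G.number_of_nodes():
        best = None  # (gain, cluster_id, node)
        for j, members in clusters.items():
            frontier = {v for u in members for v in G.neighbors(u)} - assigned
            for v in frontier:
                d_acf, d_spec = gain_fn(members, v)  # changes in ACF_daily and specificity
                g = lam * d_acf + (1 - lam) * d_spec
                if best is None or g > best[0]:
                    best = (g, j, v)
        if best is None:  # remaining nodes are unreachable from any cluster
            break
        _, j, v = best
        clusters[j].add(v)
        assigned.add(v)
    return clusters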
Pus$\mathbb{H}$: Concurrent Probabilistic Programming with Function Spaces
[ "Daniel Huang", "Christian Camaño", "Jonathan Tsegaye" ]
We introduce a prototype probabilistic programming language (PPL) called PusH for performing Bayesian inference on function spaces with a focus on Bayesian deep learning (BDL). We describe the core abstraction of PusH based on particles that links models, specified as neural networks (NNs), with inference, specified as procedures on particles using a programming model inspired by message passing. Finally, we test PusH on a variety of models and datasets used in scientific machine learning (SciML), a domain with natural function space inference problems, and we evaluate the scaling of PusH on single-node multi-GPU devices. Thus we explore the combination of probabilistic programming, NNs, and concurrency in the context of Bayesian inference on function spaces. The code can be found at <https://github.com/lbai-lab/PusH>. § INTRODUCTION Bayesian deep learning (BDL) <cit.> brings the benefits of Bayesian inference to function spaces. However, Bayesian inference over function spaces can be compute intensive and difficult to scale. BDL is also an emerging discipline whose benefits and feasibility are an active area of research. Consequently, we would like to lower the barrier to experimenting with novel algorithms and models. Probabilistic programming <cit.> is an approach to Bayesian inference where programming language and systems technology is applied to aid Bayesian inference. However, most existing probabilistic programming languages (PPLs) do not focus on function space priors or on Bayesian inference over function spaces. In this paper, we introduce a PPL called PusH for performing Bayesian inference on function spaces with a focus on BDL to address this gap. Our contributions are as follows. * We introduce a language called PusH (an abbreviation for particle pushforward on Hilbert space) and a core abstraction based on particles (Section <ref>). The main idea behind PusH is that particles link models, specified as neural networks (NNs), with inference, specified as procedures on particles using a concurrent programming model inspired by message passing (between particles). Thus we explore the combination of probabilistic programming, NNs, and concurrency in the context of Bayesian inference on function spaces. The code can be found at <https://github.com/lbai-lab/PusH>. * We demonstrate how to encode various Bayesian inference algorithms in PusH using the particle abstraction, including SWAG <cit.> and Stein Variational Gradient Descent <cit.> (Section <ref>). The idea is that the language designer and BDL researchers can implement various Bayesian inference algorithms using the particle abstraction. * We evaluate PusH on various scientific machine learning <cit.> (SciML) models/datasets and demonstrate promising single-node performance scaling as a function of the number of GPU devices utilized (Section <ref>). SciML is an emerging discipline that studies inference problems on data generated from natural processes and has an abundance of natural function space inference problems. Thus PusH may be of independent interest to users in the SciML community. §.§ Related Work Our work is inspired by probabilistic programming and deep probabilistic programming, although there are three main differences. First, most existing PPL designs are ill-adapted for the use case of specifying function space priors.
In particular, many PPLs specify distributions on program traces <cit.> or use a restricted modeling language to provide notation to define familiar probabilistic models such as graphical models <cit.>. The former approach is general which increases the difficulty of inference since gradient-based methods are typically not used. The latter approach is typically not expressive enough to capture function space distributions. Our approach will focus on function spaces explicitly. Second, most existing PPL designs do not focus on enabling Bayesian inference on function space priors. Deep probabilistic programming <cit.> is a emerging paradigm that enables lightweight interoperability between probabilistic programs and NNs. However, these frameworks focus on enabling Bayesian inference for non-function space priors by leveraging the underlying NN library to take derivatives. Our approach will introduce a unified abstraction for model and inference based on particles. Similar to some deep probabilistic programming languages <cit.>, also provides no separation of concerns between models and inference. Third, most existing PPLs focus on a sequential model of computation whereas our work is based on a concurrent model of computation. Thus our work explores the intersection of probabilistic programming, NNs, and concurrency. § LANGUAGE We walk through an example of first (Section <ref>) to illustrate the main ideas before discussing it's design (Section <ref>) and implementation (Section <ref>) in more detail. §.§ Motivation and Example Many BDL algorithms consist of replicas of a NN. For example, a deep ensemble <cit.> replicates a NN multiple times. SWAG <cit.> maintains two additional parameter sets corresponding to the first and second moments of a NN. Stein Variational Gradient Descent (SVGD) <cit.>, when applied to a Bayesian NN, will replicate NN parameter sets multiple times as well. In this paper, we will refer to each NN replica as a particle. The obvious challenge with many BDL algorithms is that we have to work with N particles, which can be time-consuming and memory-intensive. Moreover, such an approach may require communication between various particles to compute required information. Figure <ref> illustrates common communication patterns in BDL algorithms. A deep ensemble requires no communication, SWAG requires communication between a main particle and two children particles, and SVGD requires communication between all particles. Consequently, it can be tedious and error-prone to implement BDL algorithms without language support. Instead, we might hope that an abstraction that (1) enables us to scalably declare N NNs and (2) primitives for point-to-point communication between each of the N NNs to exchange information useful for speeding up training may enable more rapid exploration and implementation of BDL algorithms. This forms the core idea behind the particle abstraction in . To illustrate this idea, consider the implementation of a deep ensemble <cit.>, a simple BDL algorithm <cit.>. Figure <ref> provides an encoding of a deep ensemble <cit.> involving three NNs in . On Line 4, we construct a particle neural network where is a generic PyTorch NN created with arguments . This wraps an ordinary NN in a form where the language implementation can now introspect and manipulate it to provide support for Bayesian inference. On Line 6, we use to initialize three particles where each particle corresponds to one setting of NN parameters. 
The particles are indexed by 0, 1, and 2 respectively. Lines 7-14 give an example encoding of a deep ensemble. On Line 10, we enumerate each particle. On Line 12, we use a function that calls the typical optimization step on a particle specified by the appropriate index: (1) compute a forward pass, (2) zero the gradients, (3) compute the loss, (4) compute a backward pass, and (5) update the parameters. By default, is asynchronous. Finally, on Line 14, we synchronize across all three particles using using the returned events that we have accumulated. Thus we have a centralized method of encoding a deep ensemble where the particle neural network synchronizes each batch of training. §.§ From Example to Particle Abstraction While it may be conceptually simple to map the usage of particles to deep ensembles in the example above, the challenge of presenting the idea of a particle as an abstraction in a language is that it should be general. Towards this end, we would like to demonstrate that the particle abstraction (1) is expressive so that it can encode arbitrary distributions on function spaces and (2) is flexible so that it can express a variety of inference algorithms. Model: Particles approximate function space distributions via a pushforward. One method for specifying a distribution on functions is using weight space view. For example, nn(x; θ_1, θ_2) = g(f(x; θ_1); θ_2) where random variables θ_1∼μ_1 and θ_2∼μ_2 for some distributions μ_1 and μ_2, and parameterized functions f(·; θ_1) and g(·; θ_2), defines a distribution on functions. Alternatively, we can also define the same distribution on functions using the pushforward of a distribution with respect to (w.r.t.) a function. As a reminder, the pushforward of a distribution μ on Θ w.r.t. F: Θ→ Y is μ_*(F) = B ↦μ(F^-1(B)) for any measurable subset B of Θ. Obviously, it is not practical to represent high-dimensional distributions as general measures. Instead, we can define the particle pushforward μ_*^δ(F) = {θ_1^μ↦ F(θ_1^μ), …, θ_n^μ↦ F(θ_n^μ) } which approximates μ_* using particles {θ_1^μ, …, θ_n^μ} where the superscript indicates that the particles are chosen in a manner dependent on μ (, a sample) and will be dropped from now on to declutter the notation. In the case of parameterized functions g(x; θ), we can define the particle pushforward as (μ)(g(x; ·)) = μ_*^δ(g(x; ·)) when the parameter θ is random (, the argument x is given). When both the argument and the parameter are random, we use the notation (μ_x ⊗μ_θ)(g) = μ_*^δ(g) where ⊗ is the product distribution so that the first component defines the distribution on the argument and the second component defines the distribution on the parameter. Putting this together, the function space prior that we have defined previously is nn(x; θ_1, θ_2) = ((μ_1)(f(x; ·) ⊗μ_2)(g) = { (θ_1^μ_1, θ_1^μ_2) ↦ g(f(x; θ_1^μ_1); θ_1^μ_2), …, (θ_n^μ_1, θ_n^μ_2) ↦ g(f(x; θ_n^μ_1); θ_n^μ_2) } . uses NNs with particles, , particle neural networks, to directly to express a distribution on functions. Since is simply an alternative method for defining a distribution on functions that highlights the link with particles, we choose to define models in using standard NN definitions to maintain interoperability. Consequently, a particle neural network can simply wrap an existing NN. Each call to in a particle neural network creates an instance of g(·; θ_i) where i is the index of the particle. is named after particle pushforwards: particle pushforward on Hilbert space. 
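As a plain-PyTorch illustration (not PusH's internal representation), the particle pushforward μ_*^δ can be approximated by sampling a finite set of parameter particles and mapping each through the parameterized function; the sampler, function, and particle count below are arbitrary choices made for the example.

import torch

def particle_pushforward(mu_sampler, g, x, n_particles=8):
    # Approximate the pushforward mu_*(g(x; .)) by a finite particle set:
    # sample theta_1..theta_n ~ mu and record theta_i -> g(x; theta_i), as in mu_*^delta above.
    particles = [mu_sampler() for _ in range(n_particles)]
    return {i: g(x, theta) for i, theta in enumerate(particles)}

# Example: a distribution over linear functions f(x; theta) = x @ theta with theta ~ N(0, I).
d = 3
mu_sampler = lambda: torch.randn(d)
g = lambda x, theta: x @ theta
outputs = particle_pushforward(mu_sampler, g, torch.randn(5, d))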
Inference: Particles and gradients provide the link between model and inference. provides no separation of concerns between model and inference. Instead, the interaction between particles and gradients provides the link to an inference algorithm implementation. The observation that particles and gradients can be used in tandem to construct Bayesian algorithms is not new. In our example, it is used (trivially) to implement inference for deep ensembles which can be understood from a Bayesian perspective <cit.>. It is also used in SVGD <cit.>. However, from a language perspective, it is important that we can extract out a key principle that can be used across a range of algorithms involving particles so that we can justify the usefulness of the particle abstraction for supporting inference. Towards this end, suppose μ = p(θ) is a prior distribution with differentiable density and f = p( | θ) is a differentiable likelihood. Then ∇log p(θ_i | ) = ∇log(p( | θ_i)p(θ_i)/p()) = ∇log(p( | θ_i)p(θ_i)) for any particle θ_i so that the gradient of the posterior at a particle (hard to compute) is equal to the gradient of the unnormalized joint distribution (computable by definition). The key point is that p() is a constant so its derivative is 0. Notably, SVGD relies on this principle (among many other insights). We make two remarks about this principle with regards to probabilistic programming. First, many PPLs rely on Markov-Chain-Monte-Carlo (MCMC), another popular Bayesian inference method. While there are many design decisions, the key idea is that MCMC eliminates the need to compute the intractable denominator p() via a division (symmetry via detailed balance). PPLs can leverage this key idea to generate model specific inference code. Similarly, our hope here is that will enable the study of more algorithms that perform (approximate) Bayesian inference on function spaces that rely on the idea that the intractable denominator p() can be eliminated via gradients. Second, Bayesian inference in probabilistic programs is often presented as “running a program backwards from observations to inputs". Equation <ref> expresses the intuition that posterior inference can be seen as running a program backwards from observations to parameters since back-propagation transmits information backwards along the particle. This adds credence to the view that particles link model and inference. More generally, we might imagine any linguistic support for reversible computation (, if the density is invertible) enables Bayesian inference. Similar ideas have been explored in reversible NNs for generative modeling <cit.>. §.§ From Particle Abstraction to Implementation Solving the language design problem is only half the challenge. The second challenge involves developing a practical implementation of the language. Our previous discussion of a deep ensemble alludes to two issues that we will encounter in implementing : (1) scaling the number of particles and (2) communication between particles. In solving these challenges, we are inspired by techniques for distributed NN training <cit.>, although our challenges are orthogonal to those solved by data parallelism and model parallelism. Data parallelism requires distributing data across hardware accelerators, and thus, requires synchronization of gradients for back propagation. In our case, we will use the same data to fit different particles. Consequently, we do not need to distribute data. 
Model parallelism involves splitting a large model across multiple hardware accelerators when a single model does not fit on a single hardware accelerator, and thus, requires synchronization of gradients across accelerators during back propagation. In our case, particles share the same model so this is not necessary as well. Instead, synchronization is required when particles communicate with each other. Figure <ref> illustrates the main architecture of which borrows from systems and language implementation design techniques designed to implement concurrent systems. The main idea is that each particle can be implemented as a lightweight process called a device event loop that can communicate via message passing with other particles and the main particle neural network. This strategy is inspired by the approaches taken to implement concurrent languages such as Erlang and Go, although the setting of Bayesian inference will suggest different design choices. We describe the components of in more detail now. Component 1: Device event loop A device event loop is an isolated process that maps to a single physical GPU accelerator. As the name suggests, a device event loop is an event loop whose primary responsibilities include (1) performing operations on all the particles on the device event loop and (2) handling communication between other device event loops and the parent particle neural network. Operations on a particle includes normal NN operations such as (1) computing a forward pass, (2) computing a backward pass, and (3) performing a parameter update. These operations use the appropriate accelerator device that the device event loop is mapped to. Thus a device event loop forms the primary place where computations in take place. We make the design decision to create a single device event loop for a single physical GPU accelerator. Thus a program running on hardware that has four physical GPUs will spawn four independent device event loops. Because a user can define an arbitrary number of particles, this means that multiple particles may map to the same device event loop. We make this choice for at least two reasons. First, mapping each particle to a separate device event loop increases fault-tolerance but incurs communication overhead. While fault tolerance is also important in our setting, our first concern is on scaling the number of particles that can handle in a non-distributed setting such as single multi-GPU node so we can explore the benefits of scaling BDL algorithms. We leave it for future work to consider the case of fault-tolerance. Second, communication with an accelerator device such as a GPU is already asynchronous. Consequently, introducing more processes may not necessarily improve speed of the computation. Message passing between two different particles is handled by their respective device event loops. If the particles are on different device event loops, messages are communicated via message queues handled by their respective device event loops. This incurs communication overhead. If particles are on the same device event loop, messages are handled locally and communication overhead can be eliminated. Currently, users are given control over which device event loop to place a particle on. The tradeoff with our approach is that we may run into memory issues on any single GPU. In particular, since a device event loop may contain multiple particles and each particle corresponds to a NN which may be large, the accelerator itself may run out of memory. 
In a traditional operating system, abstractions such as virtual memory and context-switching is used to share the compute and memory resources for threads. Inspired by this, also contains a simple context switching mechanism for particles. Every device event loop contains an active set which limits the maximum number of active particles allowable on a single device. When more particles are used by a BDL algorithm than allowed by the particle cache, the device event loop will perform a context switch to swap out particles to an inactive set using a least-recently-used (LRU) policy similar to one found in hardware caching. It would be an interesting direction of future work to explore other kinds of replacement policies. It would also be interesting to explore advanced optimizations such as just-in-time compilation where particles with static computation graphs can be pinned in accelerator memory although we leave this for future work. Component 2: Particle neural network The particle neural network contains the top-level program. It serves as the entry point for managing all resources, creating new particles (via ), and synchronizing across all particles. The creation of a particle neural network will create multiple device event loops depending on the underlying hardware availability. Particles are allocated on an existing device event loop and broadcast across all other device event loops so that point-to-point communication between particles is possible. The particle neural network primarily serves as a point of synchronization as most work is done on the respective device event loops. Currently, a particle neural network does not support distributed device event loops although there is no technical reason why cannot since device event loops are isolated processes with point-to-point communication. It would be an interesting direction of future work to explore a distributed implementation. is embedded in Python and is implemented on top of PyTorch. This enables us to take advantage of an established ecosystem for enabling a probabilistic programming approach to Bayesian inference on function spaces and experiment with the particle abstraction. In particular, the design and implementation of attempts to take advantage of existing accelerator infrastructure used for distributed NN training, but adapting it for other purposes scaling Bayesian inference on function spaces. § BAYESIAN INFERENCE IN In this section, we encode various existing BDL algorithms using the particle abstraction in . The idea is that users and BDL researchers can use these abstractions to more easily experiment with algorithms that require more particles. §.§ Example: SWAG We have implemented SWAG using the particle abstraction. Related algorithms such as SWA <cit.> and MultiSWAG <cit.> are conceptually similar. This family of algorithms typically use a standard optimizer to train a NN for some number of epochs. Subsequently they begin to keep track of estimates of first and/or second moments of the parameters to make a Gaussian approximation of the posterior. In our implementation, we use the particle abstraction to store these estimates and the communication primitives in the language to synchronize first and/or second moment computations as illustrated in Figure <ref>. §.§ Example: Stein Variational Gradient Descent Figure <ref> provides an excerpt of a SVGD implementation in that illustrate the salient points. 
Lines 3-7 implements an all-to-all communication pattern present in SVGD between all particles (Figure <ref>). On Line 4, the code gives each particle access to particle identifiers for every other particle that is present. On Line 6, the code asynchronously accesses the state of particle . On Line 7, the code blocks until all parameters have been retrieved. We then use this information to complete the rest of the SVGD step. The top-level SVGD functionality is contained in . Lines 13-17 initialize and register particles. On Line 21, we asynchronously initiate a on each particle so that every particle now contains gradient information. Note that particles executed on different device event loops are handled in parallel in a manner that is transparent to the user and one of the advantages of using a language like . On Line 23, we synchronize on the particle neural network so that every particle has a gradient before it is passed to every other particle. This communication pattern would be more difficult to express if a particle neural network was not distinct from a particle event loop. Finally, on Line 25, we send a message to trigger to invoke the SVGD step which exchanges information between every particle. § EXPERIMENTS We evaluate in two different ways. First, we apply to several models and tasks in SciML to demonstrate its applicability in a setting with natural function space inference problems (Section <ref>). Second, we study the scaling properties of given our design decisions (Section <ref>). §.§ SciML Use Cases Deep learning and NNs are becoming increasingly popular in SciML for the purposes of constructing surrogate models and solving inverse problems. Uncertainty quantification <cit.> is important in these contexts since scientists and engineers want to provide guarantees on the trustworthiness of surrogate models and inferred solutions to inverse problems. Due to the natural fit of Bayesian methods for SciML use cases, we choose several SciML datasets and models to test on. Our goal is to test 's applicability and not the performance of any specific Bayesian inference algorithms. Towards this end, we select NNs that give us a wide range of architectures to test . We choose UNET <cit.> and Fourier Neural Operator (FNO) <cit.> from PDEBench <cit.>, a SciML benchmark. These networks are similar to convolutional NNs. We use the Advection dataset (9000 datapoints) introduced in PDEBench. FNO and UNet learn a partial differential equation. We also select two quantum chemistry networks Schnet <cit.> and CGCNN <cit.> which is a graph NN. Both networks are designed specifically for the use case of learning a potential energy surface. We train on 1000 geometries of the Asprin molecule from MD17 <cit.> for our quantum chemistry experiments. The quantum chemistry networks are interesting in that their loss functions use derivatives of the model w.r.t. the input. Thus we can also test 's ability to cope with higher-order gradients. Importantly, we can incorporate these NNs into with minimal effort. Some porting is required since many SciML NNs have non-standard prediction functions that need to be wrapped to match 's interface. Figure <ref> compares the time per epoch of training for each NN for each method averaged across 20 epochs of training. We did not observe high variance in time per epoch. Standard training is handled with each NN's respective training procedures detailed in either PDEBench or their original papers. For UNET training, we do not use autoregressive training. 
The method SWAG uses the original code provided in the paper. SWAG and SVGD use our implementations of these BDL algorithms. For SVGD , we use two particles and uniform priors on the NNs, since we are interested in the performance of . All tests are done on one device. We make several observations. First, the BDL algorithms are slower than the standard training procedure, as expected. Second, the implementation of SWAG is slower than the handwritten implementation of SWAG. This is because managing particles incurs extra costs compared to simply keeping track of two extra parameter sets. Thus there is a performance cost to using the particle abstraction. Third, SVGD is a costly method, so we will need language support to scale it. In the next section, we analyze the scaling properties of on single-node, multi-GPU devices in more detail. §.§ Scaling We test the scaling of on a simple NN architecture using SVGD (bandwidth 1 and step size 10^-3) with a uniform prior on all NN weights. The NN architecture consists of N = 10 fully connected layers of size D × D followed by a D × 1 prediction layer. We choose a random dataset with 10 batches of 128 datapoints of dimension D = 200 and D = 2000. Since we are only measuring performance and not quality, we train for 20 epochs and report the average time. The standard deviation is small compared to the average time, so we do not report it. Figure <ref> shows the scaling of the time per epoch (in seconds) as a function of the number of SVGD particles on one device and on two devices. As a reminder, SVGD has an all-to-all communication pattern and synchronization. Consequently, we should expect quadratic scaling of the time per epoch in the number of particles, as each graph illustrates. We make two observations. First, for a smaller NN where D = 200, we observe no distinguishable behavior between 1, 2, or 4 active particles. We find that using 2 devices is slightly slower than 1 device. This suggests that the cost of context switching and the cost of communication dominate the computation. Second, for a larger NN where D = 2000, the two-device implementation is roughly 2-3 times faster than the single-device implementation. This suggests that the cost of computation outweighs the cost of communication. Moreover, we observe that larger active sets further improve performance, since the cost of computation then outweighs the cost of context switching. All tests are done on workstations with 2x RTX 3090 GPUs with 24GB of video RAM. We have not tested on 4- and 8-GPU hardware but anticipate similar performance gains as the number of devices is increased. As we have mentioned previously, it should be easy to extend to a distributed setting as future work. § DISCUSSION We explore the combination of probabilistic programming, NNs, and concurrency in the context of Bayesian inference on function spaces in this paper. In particular, we demonstrate (1) how particles can be used to approximately describe function space distributions in and (2) how BDL algorithms can be implemented as concurrent procedures on particles that interact seamlessly with gradients. We also explore the challenges of scaling such an implementation and demonstrate promising single-node, multi-GPU scaling results. If a sequential process is analogous to a point estimate, then a concurrent process is analogous to obtaining a distributional estimate.
This analogy reframes aspects of Bayesian inference in language and systems terms, and hints at why a combined language and systems approach offered by probabilistic programming is not only feasible but also well-founded and advantageous. Our hope is that can form a springboard for exploring more BDL algorithms that have more expressive communication patterns. Applying more advanced language and systems optimizations to is also an interesting direction of research, particularly since we can leverage the same hardware architectures that are used for distributed NN training. Daniel Huang was partially supported by Google exploreCSR. Christian Camaño was partially supported by Google exploreCSR, CSU-LSAMP Award #532635-A5, NSF/CSU-LSAMP Award #533005-A5, NSF CNS-2137791 and HRD-1834620. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We thank the Systems Team in Academic Technology at San Francisco State University for computing resources. § SUPPLEMENTARY MATERIAL §.§ Bayesian Inference Algorithms in in More Detail Figure <ref> provides the code in for a deep ensemble. Unlike the deep ensemble presented in the main text, we train each particle independently by asynchronously sending a message to each particle. Figure <ref> provides the code in for SWAG. The code is used to update the second particle, which keeps track of the first moment. The code is used to update the third particle, which keeps track of the second moment. The first- and second-moment update codes each obtain information from the first particle, which contains the NN weights. We use the particle neural network to synchronize updating the NN weights with updating the first and second moments. The state is used to keep track of the number of training iterations. In our experience, the primary advantage of writing SWAG in as opposed to a hand-written implementation is a first-class and functional interface for keeping track of the first and second moments without resorting to code that modifies the state of an existing NN. Figure <ref> provides the code in for SVGD. The code performs the SVGD update. The first part of the code obtains the particle state and gradients from every other particle. The second part of the code performs the SVGD update by computing a step from pairwise kernel comparisons with every other particle. The user can select the prior distribution on the NN parameters by setting to an appropriate function that computes the log density of the NN weights w.r.t. the prior distribution. There is potential for speedup on multi-GPU devices since the code is less specific about the order of evaluation compared to a hand-written implementation. In particular, since is asynchronous, particles allocated on different device event loops can execute in parallel, leading to speedups. This is shown in Figure <ref>. §.§ Scaling in in More Detail Table <ref>, Table <ref>, and Table <ref> provide the slowdowns of across NNs of various sizes and active sets, and expand upon the experiments summarized in Figure <ref>. A NN with D = 1000 is 25 times larger than one with D = 200, since 1000^2/200^2 = 25. Similarly, a NN with D = 2000 is 4 times larger than one with D = 1000, since 2000^2/1000^2 = 4. Thus we should theoretically expect a 25× slowdown for D=1000/D=200 and a 4× slowdown for D=2000/D=1000.
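As a rough back-of-the-envelope check of these theoretical figures, the parameter-count arithmetic for the benchmark architecture described above (N = 10 fully connected D × D layers plus a D × 1 prediction layer) is sketched below. This snippet is ours, not the benchmark code itself; biases are ignored and the helper name benchmark_params is purely illustrative.

# Parameter-count arithmetic behind the theoretical slowdowns quoted above.
# Assumes the benchmark MLP described earlier: 10 fully connected D x D layers
# plus a D x 1 prediction layer; biases are ignored for simplicity.
def benchmark_params(D, n_layers=10):
    return n_layers * D * D + D * 1

for small, large in [(200, 1000), (1000, 2000)]:
    ratio = benchmark_params(large) / benchmark_params(small)
    print(f"D={large} vs D={small}: ~{ratio:.1f}x more weights")
# Prints approximately 25.0x and 4.0x, matching the expected slowdowns above.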
There are at least two reasons why the observed slowdown can deviate from the theoretical slowdown. First, this can occur due to overhead in 's implementation. For example, in Table <ref> with 1 device and an active set of 1, we should expect all the observed slowdowns to be close to the theoretical slowdowns. When comparing D=1000/D=200, we see that the observed slowdown is much lower than the theoretical slowdown. Rather than indicating a speedup, this means that small NNs have large overhead in . When comparing D=2000/D=1000, we see that the observed slowdown is much closer to the theoretical slowdown. As the number of particles increases, we trend closer to the theoretical slowdown, as 's overhead becomes dominated by the cost of scaling the number of particles. Second, the deviation between an observed slowdown and a theoretical slowdown can result from the size of the active set or the number of devices, both of which can improve performance. For example, in Table <ref>, when comparing D=2000/D=1000 with an active set of 2 and 1 device, we see that the slowdown for 2 particles is 1.15 whereas the slowdown for 4 particles is 3.52. The deviation of 1.15 from 4 can be attributed here to eliminating the need for context switching. When comparing D=2000/D=1000 with an active set of 2 and 2 devices, we see that the slowdown decreases to 1.99 from 3.52. This can be attributed to doubling the amount of compute by doubling the number of devices. However, the slowdown is still larger than 1.15, which means that on a relative basis it is harder to improve performance across multiple devices than it is on a single device.
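To make the quadratic, all-to-all cost analyzed above concrete, the following is a minimal plain-PyTorch sketch of a single SVGD step with an RBF kernel. It is not the particle-based implementation discussed in this paper (whose interface is not reproduced here); the function and argument names are ours, and the all-to-all pattern shows up as the dense P × P kernel matrix over flattened particle parameters.

import torch

def svgd_step(theta, log_prob_fn, step_size=1e-3, bandwidth=1.0):
    # theta: (P, D) tensor holding the flattened parameters of P particles.
    # log_prob_fn: maps a (P, D) tensor to a length-P tensor of log-posterior
    # values (log-likelihood plus log-prior), one entry per particle.
    P = theta.shape[0]
    theta = theta.detach().requires_grad_(True)
    # Per-particle gradients of the log posterior, shape (P, D).
    grad_log_p = torch.autograd.grad(log_prob_fn(theta).sum(), theta)[0]
    with torch.no_grad():
        h2 = bandwidth ** 2
        # Dense P x P RBF kernel over all particle pairs (the all-to-all part).
        K = torch.exp(-torch.cdist(theta, theta) ** 2 / (2.0 * h2))
        # Repulsive term sum_j grad_{theta_j} k(theta_j, theta_i), written in
        # closed form for the RBF kernel, shape (P, D).
        repulsion = (theta * K.sum(dim=1, keepdim=True) - K @ theta) / h2
        # Stein variational direction and gradient-ascent update.
        phi = (K @ grad_log_p + repulsion) / P
        theta = theta + step_size * phi
    return theta.detach()

In the particle-based setting described above, the states and gradients entering K and grad_log_p would first be fetched from every other particle through the communication primitives, which is exactly the all-to-all exchange whose cost grows quadratically with the number of particles.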
http://arxiv.org/abs/2306.08796v1
20230615005053
Tropical Logistic Regression Model on Space of Phylogenetic Trees
[ "Georgios Aliatimis", "Ruriko Yoshida", "Burak Boyaci", "James A. Grant" ]
math.CO
[ "math.CO", "q-bio.PE" ]
Tropical Logistic Regression Model on Space of Phylogenetic Trees Georgios Aliatimis, Ruriko Yoshida, Burak Boyaci, and James A. Grant July 31, 2023 ==================================================================================================== Motivation: Classification of gene trees is an important task both in the analysis of multi-locus phylogenetic data and in the assessment of the convergence of Markov chain Monte Carlo (MCMC) analyses used in Bayesian phylogenetic tree reconstruction. The logistic regression model is one of the most popular classification models in statistical learning, thanks to its computational speed and interpretability. However, it is not appropriate to directly apply the logistic regression model to a set of phylogenetic trees with the same set of leaf labels, as the space of phylogenetic trees is not Euclidean. Results: It is well-known in tropical geometry and phylogenetics that the space of phylogenetic trees is a tropical linear space in terms of the max-plus algebra. Therefore, in this paper, we propose an analogue of the logistic regression model in the setting of tropical geometry. In our proposed method, we consider two cases, in which the number of species trees is fixed at one or two, and we estimate the species tree(s) from a sample of gene trees distributed over the space of ultrametrics, which is a tropical linear space. We show that both models are statistically consistent, and bounds on the generalization error of both models are derived. Finally, we conduct computational experiments on simulated data generated by the multi-species coalescent model and apply our model to African coelacanth genomes to infer the species tree. § INTRODUCTION Phylogenomics is a new field that applies tools from phylogenetics to genome datasets. The multi-species coalescent model is often used to model the distribution of gene trees under a given species tree <cit.>. The first step in the statistical analysis of phylogenomics is for evolutionary biologists, also known as systematists, to analyze sequence alignments and determine whether their evolutionary histories are congruent with each other. In this step, systematists aim to identify genes with unusual evolutionary events, such as duplication, horizontal gene transfer, or hybridization <cit.>. To accomplish this, they compare multiple sets of gene trees, that is, phylogenetic trees reconstructed from alignments of genes. The classification of gene trees into different categories is therefore important not only for analyzing multi-locus phylogenetic data but also for assessing the convergence of Markov chain Monte Carlo (MCMC) analyses used in Bayesian phylogenetic tree reconstruction. Often we apply MCMC samplers to estimate the posterior distribution of a phylogenetic tree given an observed alignment. These samplers typically run multiple independent Markov chains on the same observed data set. The goal is to check whether these chains converge to the same distribution. This process is often done by comparing summary statistics computed from sampled trees, as there is no classification model over the space of phylogenetic trees, the set of all possible phylogenetic trees with leaves [m]:={1, …, m}. However, computing a summary statistic from a sample naturally loses information about the sample <cit.>. In Euclidean geometry, the logistic regression model is the simplest generalized linear model for classification.
It is a supervised learning method that classifies data points by modeling the log-odds of having a response variable in a particular class as a linear combination of predictors. This model is highly popular in statistical learning due to its simplicity, computational speed and interpretability. However, directly applying classical supervised models to a set of sampled trees may be misleading, since the space of phylogenetic trees does not conform to Euclidean geometry. The space of phylogenetic trees with labeled leaves [m] is a union of lower dimensional polyhedral cones with dimension m - 1 over ℝ^e where e = m2 <cit.>. This space is not Euclidean and even lacks convexity <cit.>. In fact, <cit.> showed that the space of phylogenetic trees is a tropicalization of linear subspaces defined by a system of tropical linear equations <cit.> and is therefore a tropical linear space. Consequently, many researchers have applied tools from tropical geometry to statistical learning methods in phylogenomics, such as principal component analysis over the space of phylogenetic trees with a given set of leaves [m] <cit.>, kernel density estimation <cit.>, MCMC sampling <cit.>, and support vector machines <cit.>. Recently, <cit.> proposed a tropical linear regression over the tropical projective space as the best-fit tropical hyperplane. In this paper, an analog of the logistic regression is developed over the tropical projective space, which is the quotient space where 1:= (1, 1, …, 1). Given a sample of observations within this space, the proposed model finds the “best-fit” tree representative ω_Y ∈ of each class Y ∈{0,1} and the “best-fit” deviation of the gene trees. This tree representative is a statistical parameter and can be interpreted as the corresponding species tree of the gene trees. It is established that the median tree, specifically the Fermat-Weber point, can asymptotically approximate the inferred tree representative of each class. The response variable Y ∈{0,1} has conditional distribution Y|X ∼ Bernoulli( S(h(X))), where h(x) is small when x is close to ω_0 and far away from ω_1 and vice versa. In Section <ref> an overview of tropical geometry and its connections to phylogenetics is presented. The one-species and two-species tropical logistic model is developed in Section <ref>. Theoretical results, including the optimality of the proposed method over tropically distributed predictor trees, the distance distribution of those trees from their representative, the consistency of estimators and the generalization error of each model are stated in Section <ref> and proved in Supplement <ref>. Section <ref> explains the benefit and suitability of using the Fermat-Weber point approximation for the inferred trees and a sufficient optimality condition is stated. Computational results are presented in Section <ref> where a toy example is considered for illustration purposes. Additionally, a comparison study between classical, tropical and BHV logistic regression is conducted on data generated under the coalescent model and an implementation of the proposed method on the lungfish dataset is analysed. Finally, the paper concludes with a discussion in Section <ref>. § TROPICAL GEOMETRY AND PHYLOGENETIC TREES §.§ Tropical Basics This section covers the basics of tropical geometry and provides the theoretical background for the model developed in later sections. For more details regarding the basic concepts of tropical geometry covered in this section, readers are recommended to consult <cit.>. 
§.§.§ Tropical Metric A key tool from tropical geometry is the tropical metric also known as the tropical distance defined as follows: The tropical distance, more formally known as the Generalized Hilbert projective metric, between two vectors v, w ∈ (ℝ∪{-∞})^e is defined as d_ tr(v,w) := v-w _ tr = max_i{ v_i - w_i } - min_i{ v_i - w_i }, where v = (v_1, … , v_e) and w= (w_1, … , w_e). Consider two vectors v=(c,…,c) = c 1∈ℝ^e and w= 0∈ℝ ^ e. It is easy to verify that d_ tr(v,w) = 0 and as a result d_ tr is not a metric in ℝ ^ e. The space in which d_ tr is a metric treats all points in { c 1 : c ∈ℝ} = ℝ 1 as the same point. The quotient space ( ℝ∪{-∞} ) ^ e / ℝ 1 achieves just that. The function d_ tr is a well-defined metric on (ℝ∪{-∞})^e /ℝ 1, where 1∈ℝ ^ e is the vector of all-ones. §.§ Equidistant Trees and Ultrametrics Phylogenetic trees depict the evolutionary relationship between different taxa. For example, they may summarise the evolutionary history of certain species. The leaves of the tree correspond to the species studied, while internal nodes represent (often hypothetical) common ancestors of those species and their ancestors. In this paper, only rooted phylogenetic trees are considered, with the common ancestor of all taxa based on the root of the tree. The branch lengths of these trees are measured in evolutionary units i.e. the amount of evolutionary change. Under the molecular clock hypothesis, the rate of genetic change between species is constant over time, which implies genetic equidistance and allows us to treat evolutionary units as proportional to time units. Consequently, phylogenetic trees of extant species are equidistant trees. Let T be a rooted phylogenetic tree with leaf label set [m], where m ∈ℕ is the number of leaves. If the distance from all leaves i ∈ [m] to the root is the same, then T is an equidistant tree. It is noted that the molecular clock hypothesis has limitations and the rate of genetic change can in fact vary from one species to another. However, the assumption that gene trees are equidistant is not unusual in phylogenomics; the multispecies coalescent model makes that assumption in order to conduct inference on the species tree from a sample of gene trees <cit.>. The proposed classification method is not restricted to equidistant trees, but all coalescent model gene trees produced in Section <ref>. are equidistant. In order to conduct any mathematical analysis, a vector representation of trees is needed. A common way is to use BHV coordinates <cit.> but in this paper distance matrices are used instead. Consider a phylogenetic tree T with leaf label set [m]. Its distance matrix D ∈ℝ^m × m has components D_ij being the pairwise distance between a leaf i ∈ [m] to a leaf j ∈ [m]. It follows that the matrix is symmetric with zeros on its diagonals. For equidistant trees, D_ij is equal to twice the difference between the current time and the latest time that the common ancestor of i and j was alive. To form a vector, the distance matrix D is mapped onto ℝ^e by vectorizing the strictly upper triangular part of D, i.e. D ↦ (D_12, …, D_1m, D_23, …, D_2m, …, D_(m-1)m) ∈ℝ^e, where the dimension of the resulting vector is equal to the number of all possible pairwise combinations of leaves in T. Hence the dimension of the phylogenetic tree space is e = m2. In what follows, the connection between the space of phylogenetic trees and tropical linear spaces is established. Consider the distance matrix D ∈ℝ^m × m. 
Then if max{D_ij, D_jk, D_ik} is attained at least twice for any i,j,k ∈ [m], D is an ultrametric. Note that the distance map d(i,j) = D_ij forms a metric in [m], with the strong triangular inequality satisfied. The space of ultrametrics is denoted as . Suppose we have an equidistant tree T with a leaf label set [m] and D as its distance matrix. Then, D is an ultrametric if and only if T is an equidistant tree. Using Theorem <ref>, if we wish to consider all possible equidistant trees, then it is equivalent to consider the space of ultrametrics as the space of phylogenetic trees on [m]. Here we define as the space of ultrametrics with a set of leaf labels [m]. Theorem <ref> in Supplement <ref> establishes the connection between phylogenetic trees and tropical geometry by stating that the ultrametric space is a tropical linear space. § METHOD Unlike tropical PCA developed by <cit.> which is an unsupervised learning to reduce its dimensionality, logistic regression is a supervised learning model with a categorical response variable associated with the input variable. In this paper, we only consider the simplest case, that is, a bivariate response variable Y ∈{0,1} given with the explanatory variable x ∈^n, where n is the number of covariates in the model. Under the logistic model, Y ∼Bernoulli(p(x|ω)) where p(x|ω) = ℙ(Y=1|x) = 1/1+exp(-h_ω(x)) = S( h_ω(x) ), where S is the logistic function and ω is the statistical/model parameter. The most intuitive and sensible classifier for this model is defined as C(x) = 𝕀(h_ω (x) > 0) ∈{0,1}. The log-likelihood function of logistic regression for N observation pairs (x^(1),y^(1)), …, (x^(N),y^(N)) is l(ω|x,y) = 1/N∑_i=1^N y^(i)log p_ω^(i) + (1-y^(i)) log(1-p_ω^(i)), where p_ω^(i) = p(x^(i)|ω). The training model seeks a statistical estimator ω that maximizes this function. Everything mentioned thus far follows from the classical logistic regression. The only difference is our choice of the function h. In fact, this function can be derived from the conditional distributions Y|X, as stated in Lemma <ref>. The classical and tropical logistic regression assume different distributions Y|X and therefore get different functions h, as shown in Examples <ref> and <ref>. Let Y ∼ Bernoulli(r) and define the random vector X ∈ℝ^n with conditional distribution X|Y ∼ f_Y, where f_0, f_1 are probability density functions defined in ℝ^n. Then, Y | X ∼ Bernoulli(p(X)) with p(x) = σ(h(x)), where h(x) = log( r f_1(x)/(1-r)f_0(x)). [Normal distribution and classical logistic regression] Suppose that the two classes are equiprobable (r=1/2) and that the covariate is multivariate normal X|Y ∼𝒩(ω_Y,σ^2 I_n), where n is covariate dimension and I_n is the identity matrix. Using Lemma <ref>, the optimal model has h(x) = - x-ω_1^2/2σ^2 + x-ω_0^2/2σ^2 = (ω_1-ω_0)^T/σ^2 (x-ω), where · is the Euclidean norm and ω=(ω_0+ω_1)/2. This model is the classical logistic regression model with translated covariate X-ω and ω = σ^-2 (ω_1- ω_0). [Tropical Laplace distribution] It may be assumed that the covariates are distributed according to the tropical version of the Laplace distribution, as presented in <cit.>, with mean ω_Y and probability density functions f_Y(x) = 1/Λexp( - d_ tr(x,ω_Y)/σ_Y), where Λ is the normalizing constant of the distribution. In distribution (<ref>), the normalizing factor is Λ = e! σ_Y^e-1. See Supplement <ref>. 
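To fix ideas, the quantities introduced above — the tropical distance on ℝ^e/ℝ1, the three-point ultrametric condition, and the tropical Laplace log-density with normalizing constant Λ = e! σ^(e-1) — can be written as a small NumPy sketch. The function names here are ours and purely illustrative; they are not taken from the paper's implementation.

import numpy as np
from math import lgamma, log

def trop_dist(v, w):
    # Tropical (Generalized Hilbert projective) distance on R^e / R1.
    diff = np.asarray(v, dtype=float) - np.asarray(w, dtype=float)
    return diff.max() - diff.min()

def is_ultrametric(D, tol=1e-9):
    # Three-point condition: max(D_ij, D_jk, D_ik) is attained at least twice.
    m = D.shape[0]
    for i in range(m):
        for j in range(i + 1, m):
            for k in range(j + 1, m):
                a, b, c = sorted((D[i, j], D[j, k], D[i, k]))
                if c - b > tol:  # the largest of the three must be tied
                    return False
    return True

def trop_laplace_logpdf(x, omega, sigma):
    # Log-density of the tropical Laplace distribution centred at omega,
    # using the normalizing constant e! * sigma**(e-1) stated above.
    e = len(np.asarray(x))
    log_norm = lgamma(e + 1) + (e - 1) * log(sigma)  # log(e!) + (e-1) log(sigma)
    return -trop_dist(x, omega) / sigma - log_norm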
Combining the result of the proposition above with equations (<ref>) and (<ref>) yields h_ω_0,ω_1(x) = d_ tr(x,ω_0)/σ_0 - d_ tr(x,ω_1)/σ_1 + (e-1) log(σ_0/σ_1). In its most general form, the model parameters are (ω_0,ω_1,σ_0,σ_1) so the parameter space is a subset of ()^2 ×ℝ^2_+ with dimension 2e. There are two notable special cases that are discussed in more detail than the general model in the subsequent sections; the one-species and two-species model. For the one-species model, it is assumed that ω_0 =ω_1 and σ_0 ≠σ_1. If, without loss of generality, σ_1>σ_0, equation (<ref>) becomes h_ω(x) = λ( d_ tr(x,ω) - c ), where λ = (σ_0^-1 - σ_1^-1) and λ c = log(σ_1/σ_0). Symbolically, the expression in equation (<ref>) can be considered to be a scaled tropical inner product, whose direct analogue in classical logistic regression is the classical inner product h_ω(x) = ω^T x. See Section <ref> in the supplement for more details. The classifier is C(x) = 𝕀(d_ tr(x,ω ) > c), where ω is the inferred estimator of ω^*. Note that the classification threshold and the probability contours (p(x)) are tropical circles, illustrated in Figure <ref>. For the two-species-tree model, it is assumed that σ_0 = σ_1. Equation (<ref>) reduces to h_ω_0,ω_1(x) = σ^-1( d_ tr(x,ω_0) - d_ tr(x,ω_1) ), with a classifier C(x) = 𝕀( d_ tr(x,ω_0) > d_ tr(x,ω_1) ), where ω_y is the inferred tree for class y ∈{0,1}. The classification boundary is the tropical bisector which is extensively studied in <cit.> between the estimators ω_0 and ω_1 and the probability contours are tropical hyperbolae with ω_0 and ω_1 as foci, as shown in Figure <ref>(right). The one-species model is appropriate when the gene trees of both classes are concentrated around the same species tree ω with potentially different concentration rates. When the gene trees of each class come from distributions centered at different species trees the two-species model is preferred. § THEORETICAL PROPERTIES In this section, we show some theoretical results on the model, including that it is statistically consistent. The proofs of these results can be found Section <ref> in the supplement. From Lemma <ref>, we have that given the distributions of Y and X|Y, the distribution of Y|X follows. Proposition <ref> states that from all possible distributions Y|X, the one that fits the data best is the true distribution as given by the lemma. Therefore, the best model to fit data that have been generated by tropical Laplace distribution (<ref>) is the tropical logistic regression. Let Y ∼Bernoulli(r) and define the random vector X ∈ℝ^n with conditional distribution X|Y ∼ f_Y, where f_0, f_1 are probability density functions defined in ℝ^n. The functional p that maximises the expected log-likelihood as given by equation (<ref>) is p(x) = σ(h(x)), with h defined as in equation (<ref>) of Lemma <ref>. Corollary <ref>, which is based on Proposition <ref> allows us to derive the distribution of the radius d(X,ω) for both the Euclidean and the tropical metric. However, the arguments used in the proof of Corollary <ref> do not work for distributions defined on the space of ultrametric trees 𝒰_m, because these spaces are not translation invariant. For a similar reason, the corollary does not apply to the BHV metric. Consider a function d:ℝ^n →ℝ with α d(x) = d(α x), for all α≥ 0. If X ∼ f with f(x) ∝exp(-d^i(x)/(iσ^i)) being a valid probability density function, for some i ∈ℕ, σ>0. Then, d^i(X) ∼ i σ^i Gamma(n/i). 
If X ∈ℝ^e with X∼ f ∝exp(-d^i(x,ω^*)/(iσ^i)), where d is the Euclidean metric, then d^i(X,ω^*) ∼ i σ^i Gamma(e/i). If X ∈ with X∼ f ∝exp(-d^i(x,ω^*)/(iσ^i)), where d is the tropical metric, then d^i(X,ω^*) ∼ i σ^i Gamma((e-1)/i). The main result of this section is Theorem <ref>, which states that the general tropical logistic regression method is statistically consistent. The proof can also be adapted for the special cases of one-species and two-species model. The estimator (ω,σ) = (ω_0, ω_1, σ_0,σ_1) ∈Ω^2 ×Σ^2 of the parameter (ω^*,σ^*) = (ω_0^*,ω_1^*,σ_0^*,σ_1^*) ∈Ω^2 ×Σ^2 is defined as the maximizer of the logistic likelihood function, where Ω⊂ℝ^e/ℝ 1 and Σ⊂_+ are compact sets. Moreover, it is assumed that the covariate-response pairs (X_1,Y_1), (X_2,Y_2), …, (X_n,Y_n) are independent and identically distributed with X_i ∈, d_ tr(X,ω_Y) being integrable and square-integrable and Y_i ∼Bernoulli( S(h(X_i,(ω^*,σ^*) ))). Then, (ω,σ) p→ (ω^*,σ^*) as n →∞. In other words, the model parameter estimator is consistent. Finally, Proposition <ref> and <ref> provide generalization error bounds for the one-species and two species model respectively. In both cases the error bounds are getting worse as the estimation error ϵ increases. It is worth mentioning that in the case of exact estimation, the generalization error of the one-species model can be computed explicitly by equation (<ref>). Moreover, there is a higher misclassification rate from the more dispersed class (inequality (<ref>)). Consider the one-species model where ω=ω_0=ω_1 ∈ and without loss of generality σ_0 < σ_1. The classifier is C(x)=𝕀(h_ω(x) ≥ 0), where h is defined in equation (<ref>) and ω is the estimate for ω^⋆. Define the covariate-response joint random variable (X,Y) with Z = σ_Y^-1 d_ tr(X,ω_Y^*) drawn from the same distribution with cumulative density function F. Then, ℙ(C(X) = 1|Y=0) ∈[ 1-F(σ_1 ( α + ϵ)), 1-F(σ_1 ( α - ϵ)) ], ℙ(C(X) = 0|Y=1) ∈[ F(σ_0 ( α - ϵ)), F(σ_0 ( α + ϵ)) ], where α = logσ_1/σ_0/σ_1 - σ_0, and ϵ = (e-1) d_ tr( ω, ω^* ) /σ_1σ_0. The generalization error defined as ℙ(C(X) ≠ Y) lies in the average of the two intervals above. In particular, note that if ω = ω^*, then ϵ=0 and the intervals shrink to a single point, so the misclassification probabilities and generalization error can be computed explicitly. ℙ(C(X) ≠ Y) = 1/2( 1-F(σ_1 α) + F(σ_0 α) ) Moreover, if ω = ω_* and Z ∼ Gamma(e-1,1), then ℙ(C(X)=1 | Y = 0) < ℙ(C(X)=0 | Y = 1). Consider the random vector X ∈ℝ^e/ℝ 1 with response Y ∈{0,1} and the random variable Z = d_ tr( X , ω^*_Y ). Assuming that the probability density function is f_X(x) ∝ f_Z(d_ tr( x , ω^*_Y )), the generalization error satisfies the following upper bound ℙ( C(X) ≠ Y ) ≤1/2 F^C_Z(Δ_ϵ) + h(ϵ) , where ϵ = d_ tr(ω_1 , ω^*_1 )+d_ tr(ω_0, ω^*_0), 2Δ_ϵ = ( d_ tr(ω_1^*, ω_0^*) - ϵ), F^C_Z is the complementary cumulative distribution of Z, and h(ϵ) is an increasing function of ϵ with 2h(ϵ) ≤ F^C_Z(Δ_ϵ) and h(0)=0 assuming that ℙ(d_ tr(X,ω_1^*)) = d_ tr(X,ω_-1^*) )=0. Observe that the upper bound is a strictly increasing function of ϵ. The complementary cumulative distribution of Gamma(n,σ) is F^C(x) = Γ(n,x/σ)/Γ(n,0), where Γ is the upper incomplete gamma function and Γ(n,0)=Γ(n) is the regular Gamma function. 
Therefore, the tropical distribution given in equation (<ref>) yields the following upper bound for the generalization error Γ(e-1, d_ tr(ω_0^*,ω_1^*)/2σ)/2Γ(e-1), under the assumptions of Proposition <ref> and assuming that the estimators coincide with the theoretical parameters. This assumption is reasonable for large sample sizes and it follows from Theorem <ref>. In Section <ref>, these theoretical properties are applied. Bounds on the generalization error from Propositions <ref> and <ref> are computed and the suitability of Euclidean and tropical distributions, and as a result of classical and tropical logistic regards, using the distance distribution of Proposition <ref>. § OPTIMIZATION As in the classical logistic regression, the parameter vectors (ω̂, σ̂) maximising the log-likelihood (<ref>), are chosen as statistical estimators. Identifying these requires the implementation of a continuous optimization routine. While root-finding algorithms typically work well for the classical logistic regression where the log-likelihood is concave, they are unsuitable here. The gradients of the log-likelihood under the proposed tropical logistic models are only piecewise continuous, with the number of discontinuities increasing along with the sample size. Furthermore, even if a parameter is found, it may merely be a local optimum. In light of this, the tropical Fermat-Weber problem of <cit.> is revisited. §.§ Fermat-Weber Point A Fermat-Weber point or geometric mean ω_n of the sample set (X_1,…,X_n) is a point that minimizes the sum of distances from to sample points, i.e. ω_n ∈_ω∑_i=1^n d_ tr(X_i,ω). This point is rarely unique for finite n <cit.>. However, the proposition below gives conditions for asymptotic convergence. Let X_i iid∼ f, where where f is a distribution that is symmetric around its center ω^* ∈ℝ^e/ℝ 1 i.e. f(ω^* + δ) = f(ω^* - δ) for all δ∈ℝ^e/ℝ 1. Let ω_n be any Fermat-Weber point as defined in equation (<ref>). Then, ω_n p→ω^* as n →∞. The significance of Proposition <ref> is twofold. It proves that the Fermat-Weber sets of points sampled from symmetric distributions tend to a unique point. This is a novel result and ensures that for sufficiently large sample sizes the topology of any Fermat-Weber point is fixed. Additionally, using Theorem <ref> and Proposition <ref>, ω_n - ω_n p→ 0 as n →∞. As a result, for a sufficiently large sample size we may use the Fermat-Weber point as an approximation for the MLE vector, which is a simpler problem especially for the two-species model. Instead of having a single optimization problem with 2e-1 degrees of freedom, three simpler problems are considered; finding the Fermat-Weber point of each of the two clusters, which has e-1 degrees of freedom and then finding the optimal σ which is a one dimensional root finding problem. The algorithms of our implementation for both model can be found in Supplement <ref>. There is also another yet another benefit of using Fermat-Weber points. In <cit.>, Fermat-Weber points are computed by means of linear programming, which is computationally expensive. Employing a gradient-based method is much faster, but there is no guarantee of convergence. Nevertheless, if the gradient, which is an integer vector, vanishes, then it is guaranteed, as below, that the algorithm has reached a Fermat-Weber point. Let X_1,…,X_n ∈, ω∈ and define the function f(ω) = ∑_i=1^n d_ tr(X_i,ω). 
* The gradient vector of f is defined at ω if and only if the vectors ω - X_i have unique maximum and minimum components for all i ∈ [n]. * If the gradient of f at ω is well-defined and zero, then ω is a Fermat-Weber point. Proposition <ref> provides a sufficient optimality condition that the MLE lacks, since a vanishing gradient in the log likelihood function merely shows that there is a local optimum. § RESULTS In this section, tropical logistic regression is applied in three different scenarios. The first and simplest one is an illustration that considers datapoints generated from the tropical Laplace distribution. Secondly, the coalescent model is employed to generate gene trees from a species tree, and finally a real dataset of 1290 gene trees is considered. The models' performance on these datasets is examined. §.§ Toy Example In this example, a set of data points is generated from the tropical normal distribution as defined in Equation (<ref>) using rejection sampling. The data points are defined in the tropical projective torus , which is isomorphic to ^e-1. To map x ∈ to ^e-1, simply set the last component of x to 0, or in other words x ↦ (x_1-x_e,x_2-x_e, …, x_e-1 - x_e). For illustration purposes, it is desirable to plot points in ℝ^2, so e=3 which corresponds to phylogenetic trees with 3 leaves. Both the one-species model and the two-species model are examined. In the case of the former, ω = ω_0 = ω_1 and σ_0 ≠σ_1. The classification boundary in this case is a tropical circle. If σ_0 < σ_1, the algorithm classifies points close to the inferred centre to class 0 and those that are more dispersed away from the centre as class 1. For simplicity, the centre is set to be the origin ω=(0,0,0) and no inference is performed. In Fig. <ref> a scatterplot of of the two classes is shown, where misclassified points are highlighted. As anticipated from Proposition <ref> there are more misclassified points from the more dispersed class (class 1). Out of 100 points for each class, there are 7 and 21 misclassified points from class 0 and 1 respectively, while the theoretical probabilities calculated from equation (<ref>) of Proposition <ref> are 9% and 19% respectively. Varying the deviation ratio σ_1/σ_0 in the data generation process allows exploration of its effect on the generalization error in the one-species model. The closer this ratio is to unity, the higher the generalization error. For σ_0 = σ_1 the classes are indistinguishable and hence any model is as good as a random guess i.e. the generalization error is 1/2. The estimate of the generalization error for every value of that ratio is the proportion of misclassified points in both classes. Assuming an inferred ω that differs from the true parameter, Fig. <ref>(left) verifies the bounds of Proposition <ref>. For the two-species model, tropical logistic regression is directly compared to classical logistic regression. Data is generated using different centres ω_0 = (0,0,0), ω_1 = (3,2,0) but the same σ=0.5. The classifier is C(x)=𝕀(h(x)>0) for both methods, using h as defined in equations (<ref>) and (<ref>) for the classical and tropical logistic regression respectively. Fig. <ref> compares contours and classification thresholds of the classical (left) and tropical (right) logistic regression by overlaying them on top of the same data. Out of 100+100 points there are 5+4 and 4+3 misclassifications in classical and tropical logistic regression respectively. Fig. 
<ref>(right) visualizes the misclassification rates of the two logistic regression methods for different values of dispersion σ, showing the tropical logistic regression to have consistently lower generalization error than the classical, even in this simple toy problem. §.§ Coalescent Model In Bayesian inference of phylogeny via MCMC, it is important to be able to recognise whether the chain of trees has converged. MCMC convergence can be tested using any classification algorithm. In Bayesian inference of phylogenetic trees generated from some distribution conditional on the species tree, the chain has likely converged if it is hard to distinguish it from another independent chain of the same length, generated from the same distribution <cit.>. On the other hand, if it is easy to classify the generated trees coming from two different chains, then the chains have not converged yet. However, instead of using trees from MCMC, the data that have been used in our simulations were generated under the multispecies coalescent model, using the python library <cit.>. The classification method we propose is the two-species model because two distinct species tree have been used to generate gene tree data for each class. Two distinct species trees are used, which are randomly generated under a Yule model. Then, using , 1000 gene trees are randomly generated for each of the two species. The trees have 10 leaves and so the number of the model variables is 102 = 45. They are labelled according to the species tree they are generated from. The tree generation is under the coalescent model for specific model parameters. Since the species trees are known, we conduct a comparative analysis between classical, tropical and BHV (<cit.>) logistic regression. In the supplement, we show an approximation analog of our model to the BHV metric. The comparative analysis includes the distribution fitting of distances and the misclassification rates for different metrics. In Fig. <ref>, the distribution of the radius d(X,ω) as given by Proposition <ref>, is fitted to the histograms of the Euclidean and tropical distances of gene trees to their corresponding species tree. According to Proposition <ref>, for both the classical and tropical Laplace distributed covariates, d(X,ω^*) ∼σ Gamma(n), shown in solid lines in Fig. <ref>, where n = e = 45 and n=e-1=44 for the classical and tropical case respectively. Similarly, for normally distributed covariates, d(X,ω^*) ∼σ√(χ_n^2), shown in dashed lines. It is clear that Laplacian distributions produce better fits in both geometries and that the tropical Laplacian fits the data best. This is reflected in the values of the average log-likelihood summarised in Table <ref>. As discussed in Section <ref>, the same analysis can not be applied to the BHV metric, because the condition of Proposition <ref> does not hold. Species depth SD is the time since the speciation event between the species and effective population size N quantifies genetic variation in the species. Datasets have been generated for a range of values R := SD/N by varying species depth. For low values of R, speciation happens very recently and so the gene trees look very much alike. Hence, classification is hard for datasets with low values of R and vice versa, because the gene deviation σ_R is a decreasing function of R. We expect classification to improve in line with R. Fig. <ref> and Fig. 
<ref> in Supplement <ref> confirm that, by showing that as R increases the receiver operating characteristic (ROC) curves are improving and the Robinson-Foulds and tropical distances of inferred (Fermat-Weber point) trees are decreasing. In addition, Fig. <ref> shows that as R increases, AUCs increase (left) and misclassification rates decrease (right). It also shows that tropical logistic regression produces higher AUCs than classical logistic regression and lower misclassification rates than both the BHV and classical logistic regression. Finally, note that the generalization error upper bound as given in equation (<ref>) is satisfied but it not very tight (dashed line in Fig. <ref>). §.§ Lungfish Dataset In this last application, we consider an empirical dataset of 1290 gene (loci) alignments of 10 species from <cit.> and reconstructed 1290 corresponding gene trees in <cit.>. Two different methods have been applied for tree reconstruction; the standard maximum likelihood estimator (MLE) method and the neighbor-joining (NJ) method <cit.>. After dimensionality reduction using principal component analysis, in <cit.> Yoshida et al. applied three different clustering methods and observed that normalised cuts is the best performing method. Using the two clusters as labels, our model is implemented to test how easy it is to differentiate the two clusters from each other. Failing to differentiate the clusters is indicative of poor performance in clustering. Five clustering cases have been considered for the MLE and NJ reconstructed trees, for each dimensionality reduction technique. The authors in <cit.> observed that the clustering algorithms performed better on the NJ gene trees than the MLE gene trees. Our model confirms that by comparing AUC values. The one-species model is not applicable in this case, as the two clusters have different centers. The AUC values obtained for the one-species model are quite poor, around 60%. However, applying the two-species model results in significantly better performance. For the five clusterings of NJ reconstructed trees, the range of AUCs is between 96% and 98%, while the range for the MLE trees is much lower, ranging from 67% to 77%. These results indicate that the clustering methods implemented in <cit.> perform better on the former (NJ reconstructed trees) compared to the latter (MLE trees). Finally, the Fermat-Weber points of the MLE reconstructed trees are computed for both the one-species and two-species model and projected onto the space of ultrametrics via complete-linkage hierarchical clustering. The tree topologies of the resulting trees matched (shown in Fig. <ref>) and are almost entirely in agreement with that of the inferred tree from <cit.>. However, lungfish and coelacanth seem to be equally away from tetrapods. Nonetheless, this is tree topology is also proposed by many researchers <cit.>. § DISCUSSION In this paper we developed an analogue of the classical logistic regression model and considered two special cases; the one species-tree model and two species-tree model. Even if the former was not suitable for the datasets considered in this paper, it could still be useful in other settings. The main benefit is having the same number of parameters as the number of predictors, unlike the two-species model which has almost twice as many. Therefore, it fits the standard definition of a generalized linear model and could even generalize to a stack of GLMs to produce a "tropical" neural network. 
The two-species model implemented on data generated under the coalescent model outperformed classical and BHV logistic regression models on misclassification rates, AUCs and fitness of the distribution of distances from their centre. It was also observed that Laplacian distributions were more fit than Gaussians, for both geometries. Further research on the generalization error for the two-species model would provide tighter bounds. Our model was applied to the lungfish dataset and verified the claim that the clustering method of <cit.> performed better for NJ trees than MLE trees. The inferred species tree of the MLE trees agrees with literature. However, this is not the case with NJ trees, whose species trees were topologically far from each other. Perhaps this incongruence has to do with the fact that NJ trees are unrooted. Finally, the Fermat-Weber point is not always an equidistant tree and has to be projected onto the space of ultrametric trees. This is the case even when all datapoints are ultrametric, as in the case of coalescent model data. This projection was performed via hierarchical clustering and it was observed that complete-linkage clustering produced the smallest Robinson-Foulds distances between the projected tree and the true species tree. An interesting extension would be to consider properties of Fermat-Weber points when the sample is ultrametric and to investigate the best way of projecting them onto the space of ultrametrics. § FUNDING RY is partially funded by NSF Division of Mathematical Sciences: Statistics Program DMS 1916037. GA is funded by EPSRC through the STOR-i Centre for Doctoral Training under grant EP/L015692/1. plain Appendices tocchapterAppendices § PROOFS A simple application of the Bayes rule for continuous random variables yields p(x) = ℙ(Y=1|X=x) = f_1(x) ℙ(Y=1)/f_0(x) ℙ(Y=0) + f_1(x) ℙ(Y=1) = 1/1 + f_1(x) (1-r)/f_0(x) r = S(h(x)). The expected log-likelihood is expressed as 𝔼(l) = 𝔼( Y log(p(X)) + (1-Y) log(1-p(X)) ) = ℙ(Y=1) ∫_ℝ^n f_1(x) log(p(x)) dx + ℙ(Y=0) ∫_ℝ^n f_0(x) log(1-p(x)) dx = ∫_ℝ^n L(x,p(x)) dx , where L(x,p) = r f_1(x) log(p) + (1-r) f_0(x) log(1-p) is treated as the Lagrangian. The Euler-Lagrange equation can be generalized to a several variables (in our case there are n variables). Since there are no derivatives of p, the stationary functional satisfies ∂_p L = 0, which yields the desired result. The pdf of X is f_ω(x) = 1/C_αexp(-α^i d^i(x)/i), x ∈ℝ^n where α = σ^-1 is the precision. Using the variable transformation y = α x with Jacobian 1/α^n and remembering that α d(x) = d(y), C_α = ∫_ℝ^nexp( -α^i d^i(x)/i) dx = ∫_ℝ^nexp( - d^i(x)/i) dy/α^n= C_1/α^n. The moment generating function of d^i(X) is M_d^i(X) = ∫_ℝ^nexp( z d^i(x) ) exp( - α^i d^i(x)/i)/C_α dx = C_√(α^i/i - z)/C_α = 1/(√(1 - iσ^i z))^n, which coincides with the MGF of Γ(n/i,iσ^i). From the proof of Proposition <ref>, it was established that the normalizing constant is C_σ_Y = C_1σ_Y^e-1 for the tropical projective torus, whose dimension is n=e-1. The volume of a unit tropical sphere in the tropical projective torus ℝ^e/ℝ 1 is equal to e. If the tropical radius is r, then the volume is e r^e-1 and hence the surface area is e (e-1) r^e-2. Therefore, C_1 = ∫_ℝ^e/ℝ 1exp(-d_ tr(x, 0)) dx = ∫_0^∞ e (e - 1) r^e-2exp(-r) dr = e(e-1) Γ(e-1) = e! It follows that the normalizing constant is C_σ_Y = e! σ_Y^e-1. 
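As a quick numerical sanity check of the volume computation in the proof above (this snippet is ours, not part of the paper; the choices of e and the sample size are arbitrary), one can realise the unit tropical ball of ℝ^e/ℝ1 in ℝ^(e-1) by fixing the last coordinate to 0 and estimate its volume by Monte Carlo against the claimed value e.

import numpy as np

rng = np.random.default_rng(0)
for e in (3, 4, 5):
    n = e - 1                                          # dimension of R^e / R1
    pts = rng.uniform(-1.0, 1.0, size=(200_000, n))    # bounding box [-1, 1]^n
    hi = np.maximum(pts.max(axis=1), 0.0)              # max over coordinates and 0
    lo = np.minimum(pts.min(axis=1), 0.0)              # min over coordinates and 0
    inside = (hi - lo) <= 1.0                          # tropical distance to 0 at most 1
    vol = inside.mean() * 2.0 ** n                     # hit fraction times box volume
    print(f"e = {e}: estimated volume {vol:.2f} (claimed value {e})")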
Suppose that X comes from the Laplace or the Normal distribution, whose pdf is proportional to exp(-d^i(x,ω^*)/(iσ^i)) for i=1 and 2 respectively, for all x ∈^n where d is the Euclidean metric. Then, X-ω^* has a distribution proportional to exp(-d^i(x, 0)/(iσ^i)). Clearly, α d(x, 0) = d(α x, 0) and so from Proposition <ref>, it follows that d^i(X-ω^*, 0) = d^i(X,ω^*) ∼ iσ^i Gamma(n/i). Note that for the normal distribution (i=2), d^i(X,ω^*) ∼σ^2 χ_n/2. The same argument applies for tropical Laplace and tropical Normal distributions, where the metric is tropical (d=d_ tr), the distribution is defined on ≅ℝ^e-1 and the dimension is hence n=e-1. Prerequisites for proof of Theorem <ref> (Theorem 4.2.1 in <cit.>) Let (Q_n(θ)) be a sequence of random functions on a compact set Θ⊂ℝ^m such that for a continuous real function Q(θ) on Θ, sup_θ∈Θ |Q_n(θ) - Q(θ)| p→ 0 as n →∞. Let θ_n be any random vector in Θ satisfying Q_n(θ_n) = inf_θ in Θ Q_n(θ) and let θ_0 be a unique point in Θ such that Q(θ_0) = inf_θ∈Θ Q(θ). Then θ_n p→θ_0. (Lemma 2.4 in <cit.>) If the data z_1,…,z_n are independent and identically distributed, the parameter space Θ is compact, f(z_i,θ) is continuous at each θ∈Θ almost surely and there is r(z) ≥ |f(z,θ)| for all θ∈Θ and 𝔼(r(z))<∞, then 𝔼(f(z,θ)) is continuous and sup_θ∈Θ| n^-1∑_i=1^n f(z_i,θ) - 𝔼(f(z,θ)) | p→ 0. Consider two points x,y ∈ℝ^e/ℝ 1. There exists η > 0 such that d_ tr(x+ϵ E_i,y) = d_ tr(x,y) + ϵϕ_i(x-y) , ∀ϵ∈ [0,η], ∀ i ∈ [e], where ϕ_i(v) = 1, if v_i ≥ v_j ∀ j ∈ [e] -1, v_i < v_j ∀ j ∈ [e] \{ i } 0, otherwise , and E_i ∈ℝ^e/ℝ 1 is a vector with 1 in the i-th coordinate and 0 elsewhere. By setting v:=x-y, M := max_j ∈ [e]{ v_j } and m := min_j ∈ [e]{ v_j }, d_ tr(x,y) = M - m d_ tr(x+ϵ E_i,y) = max_j∈[e]{ v_j + ϵδ_ij} - min_j∈[e]{ v_j + ϵδ_ij}, where ϵ≥ 0, and δ_ij=𝕀(i=j) with 𝕀 being the indicator function. Three separate cases are considered. * If v_i = M, then max_j∈[e]{ v_j + ϵδ_ij} = v_i + ϵ = M +ϵ, min_j∈[e]{ v_j + ϵδ_ij} = m, and so d_ tr(x+ϵ E_i,y) = d_ tr(x,y) + ϵ. Note that equations (<ref>) and (<ref>) hold for all ϵ > 0. * If v_i = m and v_i < v_k for all k ≠ i, i.e. if v_i is the unique minimum component of vector v, then max_j∈[e]{ v_j + ϵδ_ij} = M, for all ϵ≤ M-m min_j∈[e]{ v_j + ϵδ_ij} = v_i + ϵ = m + ϵ, for all ϵ≤ m' - m, where m' min_j: v_j > m{ v_j } > m is well-defined unless v_j = m for all j ∈ [e] i.e. for v = m · (1,…,1) = 0, which falls under the first case. Clearly, M ≥ m', so for all ϵ∈ [0, m'-m] equations (<ref>) and (<ref>) are satisfied and thence d_ tr(x+ϵ E_i,y)=d_ tr(x,y) - ϵ. * Otherwise, if none of the first two cases hold then ∃ k ≠ i such that m = v_k ≤ v_i < M and so min_j ∈ [e]{ v_j + ϵδ_ij} = v_k = m , for all ϵ > 0 max_j ∈ [e]{ v_j + ϵδ_ij} = M , if ϵ≤ M - v_i Define M' max_j: v_j < M{ v_j }<M which is well-defined for all v ≠ 0 (first case). Since v_i < M, it follows by definition that v_i ≤ M' and so M - v_i ≥ M - M' > 0. As a result, for all ϵ∈ [0, M - M'], equations (<ref>) and (<ref>) are satisfied and thence d_ tr(x+ϵ E_i,y) = d_ tr(x,y). If v= 0, set η = + ∞. Otherwise, for v ≠ 0, with m',M' being well-defined, set η = min( m'- m , M - M' ) > 0. In all three cases and for all ϵ∈ [0,η] the desired result is satisfied. Consider the function q: ℝ^e/ℝ 1→ℝ, q(x) = λ_α d_ tr(x,α) - λ_β d_ tr(x,β) -λ_γ d_ tr(x,γ) + λ_δ d_ tr(x,δ) + log( λ_β/λ_α) - log( λ_δ/λ_γ), where α,β, γ,δ∈ ℝ^e/ℝ 1, λ_α,λ_β, λ_γ,λ_δ > 0 and (α,λ_α) ≠ (β,λ_β). A set 𝒳 contains neighbourhoods of α,β,γ,δ. 
If q(x)=0 , ∀ x∈𝒳 then (α,λ_α) = (γ,λ_γ) and (β,λ_β) = (δ,λ_δ). According Lemma <ref>, there exists η_1 > 0 such that for all ϵ∈ [0, η_1] d_ tr(x+ϵ E_i,y) = d_ tr(x,y)+ ϵϕ_i(x-y). Moreover, d_ tr(x - ϵ E_i, y) = d_ tr(y, x - ϵ E_i) = d_ tr(y+ϵ E_i, x) and so using Lemma <ref> again (but with x and y swapped), there exists η_2>0 such that for all ϵ∈ [0,η_2] d_ tr(x - ϵ E_i, y) = d_ tr(x,y) + ϵϕ_i(y-x), for all ϵ∈ [0,ϵ_0(y-x)]. For all ϵ∈ [0,η] where ηmin(η_1,η_2), equations (<ref>), (<ref>) are satisfied and so q (x+ϵ E_i) = q(x) + ϵ( λ_αϕ_i(x-α) - λ_βϕ_i(x-β) - λ_γϕ_i(x-γ) + λ_δϕ_i(x-δ) ), q (x-ϵ E_i) = q(x) + ϵ( λ_αϕ_i(α-x) - λ_βϕ_i(β-x) - λ_γϕ_i(γ-x) + λ_δϕ_i(δ-x) ). Consequently, for all ϵ∈ [0,η], q (x+ϵ E_i) + q(x-ϵ E_i) - q(x) = 0 = ϵ( λ_α s_i(x-α) - λ_β s_i(x-β) - λ_γ s_i(x-γ) + λ_δ s_i(x-δ) ), where s_i(v) ϕ_i(v) + ϕ_i(-v) = 2, if v= 0 1, if v ≠ 0 and v_i is the non-unique maximizer or minimizer of v 0, otherwise By summing equation (<ref>) over i ∈ [e] and defining s(v) = ∑_i=1^e s_i(v), λ_α s(x-α) - λ_β s(x-β) - λ_γ s(x-γ) + λ_δ s(x-δ) = 0, ∀ x ∈𝒳. Here we try to prove by contradiction that 𝒮 := {α,δ}∩{γ,β} is not empty. Suppose that 𝒮 := {α,δ}∩{γ,β} = ∅. Then, setting x= α in equation (<ref>) and noting that s(0) = 2e and 0 ≤ s(v) ≤ e for v ≠ 0, we get 2e λ_α≤ e λ_β + e λ_γ, since β, γ≠α. Applying the same argument to x=β,γ,δ, the following system of inequalities holds 2λ_α ≤λ_β + λ_γ 2λ_β ≤λ_α + λ_δ 2λ_γ ≤λ_α + λ_δ 2λ_δ ≤λ_β + λ_γ. It follows that λ_α = λ_β = λ_γ =λ_δ. Then, rewrite equation (<ref>) as s(x-α) - s(x-β) - s(x-γ) + s(x-δ) = 0, Note now equation (<ref>) can only hold at x=α iff s(α - γ) = s(α-β) = e and s(α-δ)=0. But s(v)=e if and only if all the components of v are non-unique minimizers and maximizers or {v_i:i∈[e]}={ζ,κ}, where ζ < κ and |{i:v_i = ζ}|=n_ζ , |{i: v_i = κ}| = n_κ, such that n_ζ + n_κ = e and n_ζ, n_κ≥ 2. Consider z =v+ϵ E_i, where v_i = ζ and 0<ϵ < κ-ζ. The minimum and maximum components of z are ζ and κ, and {z_i:i∈[e]}={ζ,ζ + ϵ, κ} with |{i:z_i = ζ}|=n_ζ - 1,|{i: z_i = κ}| = n_κ. It follows that, s(z) = |{i:z_i = ζ}| +|{i: z_i = κ}| = e-1. Now consider z = v + ϵ E_i where v_i = κ. The maximum is no longer unique, but the n_ζ minima are still unique. Therefore, s(z) = n_ζ≥ 2. Combining the two cases, it is concluded that s(v + ϵ E_i) ≥ 2 for all i ∈ [e]. Set x = α + ϵ E_i, where α_i - β_i = min_k {α_k - β_k }. Then, s(x - α) = s(ϵ E_i)= e-1, since there is a unique maximizer, but all the other e-1 components are 0, which is the minimum. Furthermore, s(x - β) = s(α - β + ϵ E_i) = e-1, since for v = α - β with s(v) = e, it corresponds to the first case examined. It is assumed that ϵ < κ - ζ = d_ tr(α - β). Moreover, s(x - γ) = s(α-γ + ϵ E_i) ≥ 2, for v = α - γ with s(v) = e. Finally, since s(α - δ) = 0 and so the components of α - δ have a unique minimum and a unique maximum, there exists a neighborhood around x = α such that x-α still has that property, i.e. s(x - δ) = s(α - δ + ϵ E_i) = 0 for all ϵ < η for some η>0. From equations (<ref>) – (<ref>), it is concluded that s(x-α) - s(x-β) - s(x-γ) + s(x-δ) ≤ -2 , which contradicts equation (<ref>). Therefore 𝒮 = {α,δ}∩{γ,β}≠∅. Define another set 𝒯 = {α,β,γ,δ}. Since 𝒮≠∅, |𝒯| ≤ 3. Suppose that |𝒯| = 3 with 𝒯 = {τ, υ,ϕ}. Then, without loss of generality equation (<ref>) becomes λ_τ s(x-τ) + λ_υ s(x- υ) - λ_ϕ s(x-ϕ) = 0 Similarly to before, setting x=τ,υ,ϕ yields, 2λ_τ ≤λ_ϕ 2λ_υ ≤λ_ϕ 2λ_ϕ ≤λ_τ + λ_υ, which is contradictory since λ_τ + λ_υ>0. Therefore, |𝒯|≤ 2. 
There are 4 cases to consider * α = δ≠β = γ, but then 𝒮 = ∅, * α = β≠γ = δ, but then equation (<ref>) can only be satisfied x=α,γ if λ_α = λ_β and λ_γ = λ_δ which violates the statement that (α,λ_α) ≠ (β,λ_β), * α = γ≠β = δ and from equation (<ref>) at x=α,γ it follows that λ_α = λ_γ, λ_β = λ_δ and hence the desired result, * α = β=γ=δ, in which case q(x) = (λ_α - λ_β - λ_γ + λ_δ) d_ tr(x,α) +log( λ_β/λ_α) - log( λ_δ/λ_γ), which can only be uniformly 0 at 𝒳 if and only if λ_α + λ_δ = λ_β + λ_γ. Observe that (λ_α,λ_δ) and (λ_β, λ_γ) are the two roots of the same quadratic z^2 - (λ_α + λ_δ) z + λ_αλ_δ and noting that in this case λ_α≠λ_β, it follows that λ_α = λ_γ and λ_β = λ_δ. Consider a compact set Σ⊆ℝ_+ = (0,∞). Then the set Λ = {σ^-1: σ∈Σ}∈ℝ_+ is also compact. In metric spaces, a set is compact iff it is sequentially compact. Therefore, for every sequence σ_n ∈Σ, σ_n →σ∈Σ. Every sequence in Λ can be expressed as 1/σ_n, which tends to 1/σ∈Λ. Therefore, Λ is sequentially compact and hence compact. This proof has been written for precision estimators λ=1/σ instead of deviation estimators. For the rest of the proof consider λ_y = σ_y^-1 for y=0,1 and define the set Λ = {σ^-1: σ∈Σ}∈ℝ_+. According to Lemma <ref>, Λ is also compact. Define the function f and h as f: ×{0,1}×Ω^2 ×Λ^2 →, f((x,y),(ω,λ) ) = y log S(h(x,(ω,λ))) + (1-y) log S(-h(x,(ω,λ))), h: ×Ω^2 ×Λ^2 →, h(x,(ω,λ )) = λ_0 d_ tr(x,ω_0) - λ_1 d_ tr(x,ω_1) +(e-1) logλ_1/λ_0, where S is the logistic function. Also denote the empirical (Q_n) and expected (Q) log-likelihood functions as Q_n(ω,λ) = 1/n∑_i=1^n f((X_i,Y_i),(ω,λ))   with Q_n(ω_n,λ_n) = sup_ω∈Ω^2, λ∈Λ^2 Q_n(ω),  and Q(ω,λ) = 𝔼_(X,Y)( f((X,Y),(ω,λ)) ) = 𝔼_X( S(h(X,(ω^*,λ^*) )) log( S(h(X,(ω,λ) ) )    + S(-h(X,(ω^*,λ^*) )) log( S(-h(X,(ω,λ) ) ) ). The last equation follows from conditioning on Y ∼Bernoulli(S(h(X,(ω^*,λ^*) ))). Before we move on, we need to prove that f((X,Y),(ω,λ)) is integrable so that Q is well-defined. Without loss of generality assume that λ_1 ≥λ_0. It suffices to prove that 𝔼(f((X,Y),(ω,λ)),Y=y) is integrable for both y=0,1. Observe that h(X,(ω,λ)) ≤ (λ_0 -λ_1) d_ tr(X,ω_0) + λ_1 d_ tr(ω_0,ω_1) + const ≤λ_1 d_ tr(ω_0,ω_1) + const. Since h(X,(ω,λ)) is bounded above, f((X,Y),(ω,λ)) is also bounded below on Y=0 and is hence integral on Y=0. Also, observe that h(X,(ω,λ)) ≥ (λ_0 -λ_1) d_ tr(X,ω_1) - λ_0 d_ tr(ω_0,ω_1) + const and noting that log(S(x)) > x-1 for all x<0 log(S(h(X,(ω,λ)))) ≥ h(X,(ω,λ)) - 1 ≥ (λ_0 -λ_1) d_ tr(X,ω_1) +const. Since d_ tr(X,ω_1) is integrable on Y=1, the LHS is integrable on Y=1 too. It follows that f(X,(ω,λ)) is integrable and hence Q is well-defined. First, we prove that Q is maximised at (ω,λ) = (ω^*,λ^*) and that this maximizer is unique. Consider the function g: ℝ→ℝ, g(t) = S(α) log S(t) + S(-α) log S(-t), where α∈ℝ is some constant. The function g is maximised at t=α and applying Taylor's theorem yields g(x) = g(α) - 1/2 S(ξ) S(-ξ) (x-α)^2, for some ξ∈ (α,x). Setting α = h(X,(ω^*,λ^*) ) and denoting ξ as a random variable ξ(X) ∈ (h(X,(ω^*,λ^*)),h(X,(ω,λ))) observe that Q(ω,λ) = 𝔼_X( g(h(X,(ω,λ) ) ) ) = 𝔼_X ( g(h(X,(ω^*,λ^*) ) ) - 1/2𝔼_X (S(ξ(X)) S(-ξ(X)) [h(X,(ω,λ) )- h(X,(ω^*,λ^*) )]^2 ) ≤ Q(ω^*,λ^*), Hence, from the expression above it is deduced that (ω^*,λ^*) is a maximizer. Now, consider the function q: 𝒳→ q(x) = h(x,(ω^*,λ^*) ) - h(x,(ω,λ)), where Ω⊂𝒳⊂ such that for some ζ >0 𝒳 = {x ∈: inf_ω∈Ω d_ tr(x,ω)<ζ}, so that for any ω∈Ω there is a neighborhood of ω within 𝒳. Note that 𝒳 is a bounded set since Ω is bounded too. 
We will prove by contradiction that q(x) = 0, ∀ x ∈𝒳. Suppose there exists x_0 ∈𝒳 such that q(x_0) > 0, then since q is continuous there exists a neighborhood U with x_0 ∈ U such that q(x) > 0 for all x ∈ U and so 𝔼(q^2(X) 𝕀(X ∈ U) ) >0, where 𝕀 is the indicator function. Since h(x,(ω,λ)) is continuous with respect to x and 𝒳 is bounded, the function takes values on a bounded interval and hence ξ(x) is bounded in 𝒳 i.e. there exists ϵ>0 such that ℙ(S(ξ(X) S(-ξ(X)) > ϵ| X ∈ U) = 1 and so equation (<ref>) becomes Q(ω,λ) ≤ Q(ω^*,λ^*) - ϵ/2𝔼(q^2(X) 𝕀(X ∈ U) ) < Q(ω^*,λ^*), since ℙ(X ∈ U) >0 (X has positive density everywhere). Therefore, for (ω,λ) to be a maximizer, q(x) = 0 for all x ∈𝒳. Apply Lemma <ref> with ω^* = (α,β), ω = (γ,δ), λ^* = (λ_α,λ_β) and λ = (λ_γ,λ_δ) with the set 𝒳 containing neighbourhoods of α,β,γ,δ and q(x) = 0 for all x in those neighbourhoods. It is concluded that ω = ω^* and λ = λ^*, thus proving the uniqueness of the maximizer. Theorem <ref> provides the uniform law of large numbers. The parameter space Ω^2 ×Λ^2 is compact since Ω and Λ are compact. Moreover, f((x,y),(ω,λ) ) is clearly continuous at each (ω,λ) ∈Ω^2 ∈Λ^2. Finally, consider the function r(z) = sup_ω∈Ω^2,λ∈Λ^2{ |f(z,(ω,λ) )| } = - f(z, ω(z), λ(z)), since f is non-positive. The function ω(z), λ(z) are chosen to minimize f. Using equation (<ref>), 𝔼(r(X)) ≤ - Q(ω^*,λ^*) + 1/2𝔼( [ h(X,(ω(X),λ(X)))- h(X,(ω^*,λ^*)) ]^2) , since the sigmoid function is bounded by 1. Note that 𝔼((Z+W)^2) ≤ 2(𝔼(Z^2) +𝔼(W^2)), and set W = log(λ_1(X)/λ_0(X)) - log(λ_1^*/λ_0^*). Since λ_y(X) ∈Λ⊆ [a,b] for some b≥ a>0, it follows that W^2 is integrable and so now we just have to prove that Z is integrable, where Z=Z_1+Z_2+Z_3+Z_4 with the four terms corresponding to tropical distance function λ d_ tr(X,ω). It also holds 𝔼((Z_1+Z_2+Z_3+Z_4)^2) ≤ 2(𝔼(Z_1^2) +𝔼(Z_2^2) + 𝔼(Z_3^2) +𝔼(Z_4^2)) and so 𝔼(Z^2) is bounded above by 𝔼( ∑_i=0^1 λ_i^2 d_ tr^2(X,ω_i(X))) + (λ_i^*)^2 d_ tr^2(X,ω^*_i(X)) ) ≤ 𝔼_Y [ 2 ( ∑_i=0^1 λ_i^2 + (λ_i^*)^2 ) 𝔼(d_ tr^2(X,ω^*_Y)|Y ) + . . 2 ( ∑_i=0^1 λ_i^2 d_ tr^2( ω_i(X),ω_Y^*) + (λ_i^*)^2 d_ tr^2(ω^*_i,ω_Y^*) ) ], where the second inequality came from applying the triangular inequality four times in the form d_ tr(X,τ) ≤ d_ tr(X,ω^*_Y) + d_ tr(ω^*_Y,τ). The final expression is finite because Ω is compact and hence d_ tr( ω_i(X),ω_Y^*) is finite, d_ tr(X,ω_Y)|Y is square-integrable. Therefore, 𝔼(r(X)) is finite. All conditions of the theorem are satisfied and so sup_ω∈Ω^2| 1/n∑_i=1^n f( (X_i,Y_i), ω) - 𝔼(f( (X,Y), ω)) | = sup_ω∈Ω^2| Q_n(ω) - Q(ω) | p→ 0. Finally, using Theorem <ref> and combining the uniqueness of the maximizer with the uniform bound result, it is concluded that ωp→ω^*. First, define Δ_0 = { C(X) ≠ 1 | Y = 0 }. By definition of C(X), Δ_0 = {. (σ_0^-1 - σ_1^-1) d_ tr(X,ω) - (e-1) log( σ_1/σ_0)≥ 0 | Y=0 } = {. d_ tr(X,ω) ≥ασ_0 σ_1 | Y=0 }. Triangular inequality dictates that d_ tr(X,ω^*) - d_ tr(ω^*,ω) ≤ d_ tr(X,ω) ≤ d_ tr(X,ω^*) + d_ tr(ω^*,ω), and so it follows that Δ_0 ⊇{ d_ tr(X,ω^*) ≥σ_0 σ_1 ( α + ϵ) | Y = 0 } Δ_0 ⊆{ d_ tr(X,ω^*) ≥σ_0 σ_1 ( α - ϵ) | Y = 0 }, and since Z = σ_0^-1 d_ tr(X,ω^*)|Y=0∼ F, ℙ( Z ≥σ_1 (α + ϵ)) ≤ℙ(Δ_0) ≤ℙ( Z ≥σ_1 (α - ϵ)), which yields the desired result. Similarly, for Δ_1 = { C(X) ≠ 0 | Y = 1 } = { d_ tr(X,ω) ≤σ_0 σ_1 α}, Δ_1 ⊇{ d_ tr(X,ω^*) ≤σ_0 σ_1 ( α - ϵ) | Y = 1 } Δ_1 ⊆{ d_ tr(X,ω^*) ≤σ_0 σ_1 ( α + ϵ) | Y = 1 }, and since Z = σ_1^-1 d_ tr(X,ω^*)|Y=1∼ F, ℙ( Z ≤σ_0 (α - ϵ)) ≤ℙ(Δ_1) ≤ℙ( Z ≤σ_0 (α + ϵ)), which is the desired interval. 
For the second part of the proposition, ω = ω^* and so ϵ = 0. Hence, ℙ(Δ_0) = 1 - F(σ_1 α ) = 1- F( x u(x) ) ℙ(Δ_1) = F(σ_0 α ) = F( u(x) ), where x = σ_1/σ_0 and u(x) = (e-1) logx/x - 1 Consider the function g(x) = 1-F(xu(x)) - F(u(x)) Proving that g(x) <0 for all x > 1 is equivalent to proving the desired result that ℙ(Δ_0) < ℙ(Δ_1) for σ_1 > σ_0. First, lim_x → 1 u(x) = lim_x → 1 x u(x) = e-1, and so lim_x → 1 g(x) = 1-2F(e-1). It is a well-known fact that the median of the Gamma distribution is less than the mean. Hence, for Z ∼ Gamma(e-1,1) with mean e-1, F(e-1) > 1/2 and so lim_x → 1 g(x) < 0. Finally, the derivative of g is g'(x) = - F'(u(x)) u'(x) - F'(xu(x)) (xu'(x) + u(x)) The following two inequalities F'(u(x)) ≥ F'(xu(x)), u'(x) + xu'(x) + u(x) ≥ 0, imply that g'(x) ≤ - F'(xu(x)) (u'(x) + xu'(x) + u(x)) ≤ 0. From (<ref>) and (<ref>) it follows that g(x) < 0 for all x>1. For inequality (<ref>), remember that F'(x) = x^e-2exp(-x)/Γ(e-1) and so F'(u(x)) - F'(xu(x)) = F'(u(x)) ( 1 - x^e-2exp(-(x-1) u(x)) ) = F'(u(x)) ( 1 - x^e-2exp(-(e-1) log(x)) ) = F'(u(x))(1-x^-1)>0, for all x>1. For inequality (<ref>), u'(x) + xu'(x) + u(x) = e-1/(x-1)^2( x-x^-1 -2logx), is a non-negative function for x>1 iff v is a non-negative function, where v(x) = x - x^-1 - 2logx, with v'(x) = (x-1)^2/x^2≥ 0 and v(1) = 0. Clearly, v is a non-negative function for x>1, so inequality (<ref>) is satisfied. For symbolic convenience, in this proof class 0 is referred to as class -1 and so Y ∈{-1,1}. Applying the triangular inequality twice, D_X = d_ tr(X , ω^*_Y) - d_ tr(X , ω^*_-Y) ≥( d_ tr(X , ω_Y) - d_ tr( ω^*_Y , ω_Y ) ) - ( d_ tr(X , ω_-Y) + d_ tr( ω^*_-Y , ω_-Y ) ) = d_ tr(X , ω_Y) - d_ tr(X , ω_-Y) - ϵ, it follows that { C(X) ≠ Y } = {d_ tr(X , ω_Y) - d_ tr(X , ω_-Y) ≥ 0}⊆{D_X ≥ - ϵ} and so the generalization error has the following upper bound ℙ( C(X) ≠ Y ) ≤ℙ( D_X ≥ -ϵ). Note that if d_ tr(X,ω_Y^*) < Δ_ϵ, then by the use of triangular inequality D_X =d_ tr(X , ω^*_Y) - d_ tr (ω^*_-Y,X) ≤ d_ tr(X , ω^*_Y) - ( d_ tr(ω_-Y^*, ω_Y^* ) -d_ tr (ω^*_Y,X) ) < 2Δ_ϵ-d_ tr(ω_1^* , ω_-1^*) = -ϵ. Consequently, ℙ( C(X) ≠ Y ) ≤ℙ( D_X ≥ -ϵ , Z_X ≥Δ_ϵ) Since the distribution of X is symmetric around ω^*_Y, the random variable 2ω^*_Y - X has the same distribution and so ℙ( D_X ≥ -ϵ , Z_X ≥Δ_ϵ) = ℙ( D_2ω_Y^*-X≥ -ϵ , Z_2ω_Y^*-X≥Δ_ϵ). It will be proved that Z_2ω_Y^*-X = Z_X, D_X + D_2ω_Y^*-X ≤ 0, and so {D_2ω_Y^*-X≥ -ϵ , Z_2ω_Y^*-X≥Δ_ϵ}⊆{D_X≤ϵ , Z_X≥Δ_ϵ}. Then, using equation (<ref>), ℙ( D_X ≥ -ϵ , Z_X ≥Δ_ϵ) ≤ℙ(D_X≤ϵ , Z_X≥Δ_ϵ), and substituting it to inequality (<ref>), ℙ(C(X) ≠ Y)) = 1/2 ( ℙ( D_X ≥ -ϵ , Z_X ≥Δ_ϵ) + ℙ( D_X ≤ϵ , Z_X ≥Δ_ϵ) ) = ℙ( Z_X ≥Δ_ϵ) + h(ϵ) where h(ϵ) = ℙ(Z_X ≥Δ_ϵ, |D_X| ≤ϵ) is an increasing function with respect to ϵ, which completes the proof. Equation (<ref>) follows from the observation that d_ tr(2ω_Y^*-x,ω_Y^*) = d_ tr(x,ω_Y^*). For equation (<ref>), D_2ω_Y^*-X + D_X = Z_2ω_Y^*-X - d_ tr( 2 ω_Y^* - X , ω_-Y^* ) + Z_X - d_ tr(X,ω_-Y^*) (<ref>)= 2Z_2ω_Y^*-X - d_ tr( 2 ω_Y^* - X, ω_-Y^* ) - d_ tr(ω_-Y^*,X) ≤ 2Z_2ω_Y^*-X - d_ tr( 2 ω_Y^* - X, X ) = 0, where the last inequality comes from the triangular inequality. Consider the random variable d_ tr(X,α). From the triangular inequality d_ tr(X,α) ≤ d_ tr(X,ω^*) + d_ tr(α,ω^*), it is deduced that d_ tr(X,α) is integrable, bounded above by an integrable random variable. Now consider the function F: →, F(x) = d_ tr(x,ω) + d_ tr(2ω^* - x,ω) - 2d_ tr(x,ω^*). 
Noting that d_ tr(2ω^* - x,ω) = d_ tr(x,2ω^* - ω), it follows that F(X) is integrable as the sum of integrable random variables. From triangular inequality and the fact that d_ tr(2ω^* - x,x) = 2d_ tr(x,ω^*) it follows that F(x) ≥ 0 for all x ∈. Furthermore, F(ω^*)>0 and since F is continuous, there exists a neighbourhood U that contains ω^* such that F(x) > 0 for all x ∈ U. Moreover, the function has positive density in a neighbourhood V that contains the centre ω^*. Therefore, there exists a neighbourhood W = U ∩ V such that F(x) > 0 for all x ∈ W and ℙ(X ∈ W) > 0. Hence, since F(X) ≥ 0, 𝔼(F(X)) ≥𝔼(F(X)| X ∈ W) ℙ(X ∈ W ) > 0. In other words, 𝔼(d_ tr(X,ω)) + 𝔼(d_ tr(2ω^*-X,ω)) > 2 𝔼(d_ tr(x,ω^*)) Moreover, consider the isometry y = 2ω^*-x and note that for symmetric probability density functions around ω^*, f(ω^*-δ) = f(ω^* + δ) and so for δ = ω^* - x, we have f(y)=f(x). Applying this transformation to the following integral yields 𝔼(d_ tr(2ω^* - X,ω)) = ∫_ d_ tr(2ω^* - x,ω) f(x) dx = ∫_ d_ tr(y,ω) f(y) dy = 𝔼(d_ tr(X,ω)). Combining equation (<ref>) with inequality (<ref>) shows that the function Q(ω) = 𝔼(d_ tr(X,ω)) has a global minimum at ω^*. From Theorem <ref> (uniform law of large numbers), set f(x,ω) = d_ tr(x,ω) and observe that f(x,ω) is always continuous w.r.t. ω. Setting r(x) = sup_ω∈Ω d_ tr(x,ω), which is finite since Ω is compact, observe that r(x) := sup_ω∈Ω d_ tr(x,ω) ≤ d_ tr(x,ω^*) + sup_ω∈Ω d_ tr(ω,ω^*). Since Ω is compact, the second term is finite and hence r(X) is integrable, since d_ tr(X,ω^*) is integrable. All conditions of the theorem are satisfied so Q(ω) = 𝔼(d_ tr(x,ω)) is continuous with respect to ω and sup_ω∈Ω |Q_n(ω) - Q(ω)| p→ 0 as n →∞, where Q_n(ω) = n^-1∑_i=1^n d_ tr(X_i,ω). Since Q(ω) has a unique minimum at ω^*, all conditions of Theorem <ref> are satisfied and so ω_n →ω^* as n →∞. * If ω - X_i has a unique maximum M_i = _j{ω_j - (X_i)_j} and unique minimum m_i = _j{ω_j - (X_i)_j}, then the gradient is (∇ f(x))_j = |{i: M_i=j}| - |{i: m_i=j}|. For the converse, assume that the gradient is well-defined. From equations (<ref>)–(<ref>) and following the first few sentences of Lemma <ref> d_ tr(x+ϵ E_j,y) + d_ tr(x-ϵ E_j,y) - 2d_ tr(x,y) = ϵ s_j(x-y), where s_j is defined in equation (<ref>) of Lemma <ref>. Consequently, f(x+ϵ E_j) + f(x- ϵ E_j) - 2f(x) = ϵ∑_i=1^n s_j(X_i - ω_i) Since f has a well-defined gradient, ∑_i=1^n s_j(X_i - ω) = 0 i.e. s_j(X_i - ω) = 0 for all (i,j) ∈ [n] × [e]. This can only happen iff X_i - ω has unique maximum and minimum component for all i ∈ [n]. * Using equation (<ref>), the gradient of f vanishes at x=ω if and only if |{i: M_i=j}| = |{i: m_i=j}|. Moreover, f(ω+v) = ∑_i=1^n max_k {ω_k - (X_i)_k + v_k } - min_k {ω_k - (X_i)_k + v_k } ≥∑_i=1^n ω_M_i - (X_i)_M_i + v_M_i - ω_m_i + (X_i)_m_i - v_m_i = f(ω) + ∑_i=1^n v_M_i - v_m_i Finally, note that because of equation (<ref>), ∑_i=1^n v_M_i = ∑_j=1^e v_j |{i ∈ [n]:M_i=j}| (<ref>)=∑_j=1^e v_j |{i ∈ [n]:m_i=j}| = ∑_i=1^n v_m_i, and so f(ω+v) ≥ f(ω) for all v ∈. § SPACE OF ULTRAMETRICS Suppose we have a classical linear subspace L_m ⊂^e defined by the linear equations x_ij - x_ik + x_jk=0 for 1≤ i < j <k ≤ m. Let (L_m)⊆^e/ 1 be the tropicalization of the linear space L_m ⊂^e, that is, classical operators are replaced by tropical ones (defined in Section <ref> in the supplement) in the equations defining the linear subspace L_m, so that all points (v_12,v_13,…, v_m-1,m) in (L_m) satisfy the condition that max_i,j,k∈ [m]{v_ij,v_ik,v_jk}. is attained at least twice. 
Then, the image of inside of the tropical projective torus ^e/ 1 is equal to (L_m). § TROPICAL ARITHMETICS AND TROPICAL INNER PRODUCT In tropical geometry, addition and multiplication are different than regular arithmetic. The arithmetic operations are performed in the max-plus tropical semiring ( ℝ∪{-∞},⊕,⊙) as defined in <cit.>. In the tropical semiring, the basic tropical arithmetic operations of addition and multiplication are defined as: a ⊕ b := max{a, b},      a ⊙ b := a + b,      a, b ∈ℝ∪{-∞}. The element -∞ ought to be included as it is the identity element of tropical addition. Tropical subtraction is not well-defined and tropical division is classical subtraction. The following definitions are necessary for the definition of the tropical inner product For any scalars a,b ∈ℝ∪{-∞} and for any vectors v,w ∈ (ℝ∪{-∞})^e, where e ∈ℕ, a ⊙ v:= (a + v_1, … ,a + v_e), a ⊙ v ⊕ b ⊙ w := (max{a+v_1,b+w_1}, …, max{a+v_e,b+w_e}). From the definitions above, it follows that the tropical inner product is ω^T ⊙ x = max{ω + x } for all vectors ω,x ∈. In classical logistic regression a linear function in the form of a classical inner product h_ω(x) = ω^T x, ω∈^n is used. The tropical symbolic equivalent is h_ω(x) = ω^T ⊙ x = max_l ∈ [e]{ω_l + x_l }. This expression is not well-defined, since the statistical parameter and covariate vectors ω, u ∈ℝ^e / ℝ 1 are only defined up to addition of a scalar multiple of the vector (1,…,1). To resolve this issue, we fix - min_l ∈ [e]{ω_l + x_l } = c, where c ∈ is a constant for all observations. Combining equations (<ref>), (<ref>), and the definition of tropical distance (<ref>), h_ω(x) = d_ tr(x,-ω) - c. For simplicity, under the transformation -ω→ω the expression becomes h_ω(x) = d_ tr(x,ω) - c. § TROPICAL LOGISTIC REGRESSION ALGORITHM § FERMAT-WEBER POINT VISUALIZATION As noted in Section <ref>, the gradient method is much faster than linear programming. Unfortunately, there is no guarantee that it will guide us to a Fermat-Weber point. However, in practice, the gradient method tends to work well. Figure <ref> illustrates just that. Given, ten datapoint X_1, …, X_10∈ℝ^3/ℝ 1≅ℝ^2, the Fermat-Weber set is found to be a trapezoid. This is in agreement with <cit.>, which states that all Fermat-Weber sets are classical polytopes. The two-dimensional gradient vector, plotted as a vector field in Figure <ref>, always points towards the Fermat-Weber set. Therefore, the gradient algorithm should always guide us to a Fermat-Weber point. § MLE ESTIMATOR FOR Σ If Z_i iid∼ Gamma(n, k), where n is constant and k is a statistical parameter, then it is well-known that the maximum likelihood estimator is k = Z/n, where Z is the sample average. In our case Z_i = d(X_i,ω^*) and k= i σ^i. From Proposition <ref>, Z_i ∼ Gamma(n/i,iσ^i) and by substituting these parameters in equation <ref>, it follows that the MLE for σ is σ^i = Z / n, where Z is the average distance of the covariates (gene trees) from their mean (species tree). This results holds for all i ∈ℕ and both Euclidean and tropical metrics. The only difference is that for Euclidean spaces X ∈ℝ^e and so n=e, while for the tropical projective torus , n=e-1. § APPROXIMATE BHV LOGISTIC REGRESSION Similar to the tropical Laplace distribution, in <cit.> the following distribution was considered f_λ,ω(x) = K_λ,ωexp(-λ d_ BHV(x,ω) ), where λ = 1/σ is a concentration/precision parameter, d_ BHV is the BHV metric and K_λ,ω is the normalization constant that depends on λ and ω. 
We consider an adaptation of the two-species model for this metric, where the data from the two classes have the same concentration rate but different centre. If X|Y ∼ f_λ,ω^*_Y, then h_ω_0,ω_1(x) = λ( d_ BHV(x,ω_0^*) - d_ BHV(x,ω_1^*) ) + logK_λ,ω^*_0 /K_λ,ω^*_1. Unlike in the tropical projective torus or the euclidean space, in the BHV space K_λ,ω^*_0 ≠ K_λ,ω^*_1, because the space is not translation-invariant. However, if we assume that the two centres are far away from trees with bordering topologies, it may be assumed that the trees are mostly distributed in the Euclidean space and as a result K_λ,ω^*_0 ≈ K_λ,ω^*_1. Under this assumption, equation (<ref>) becomes h_ω_0,ω_1(x) ≈λ( d_ BHV(x,ω_0^*) - d_ BHV(x,ω_1^*) ). Therefore, the classification/decision boundary for the BHV is the BHV bisector d_ BHV(x,ω_0^*) = d_ BHV(x,ω_1^*) and the most sensible classifier is C(x) = 𝕀(d_ BHV(x,ω_0^*) > d_ BHV(x,ω_1^*)), where 𝕀 is the indicator function. § GRAPHS FOR SIMULATED DATA UNDER THE MULTI-SPECIES COALESCENT MODEL FOR DIFFERENT R
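As a concrete companion to the proofs and classifiers above, the tropical distance and the two-centre log-odds function can be sketched in a few lines of Python. This is a minimal illustration under our own naming conventions and with made-up toy points, not code from the paper; the rule predicts class 1 when the log-odds h(x) is positive, i.e. when the logistic function exceeds 1/2.

import numpy as np

def d_tr(x, y):
    # tropical (max-plus projective) distance: max_i(x_i - y_i) - min_i(x_i - y_i)
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return diff.max() - diff.min()

def h(x, omega0, omega1, lam0, lam1):
    # log-odds: lam0 * d_tr(x, omega0) - lam1 * d_tr(x, omega1) + (e - 1) * log(lam1 / lam0)
    e = len(x)
    return lam0 * d_tr(x, omega0) - lam1 * d_tr(x, omega1) + (e - 1) * np.log(lam1 / lam0)

def classify(x, omega0, omega1, lam0=1.0, lam1=1.0):
    # predict class 1 when the logistic function of h exceeds 1/2, i.e. when h > 0
    return int(h(x, omega0, omega1, lam0, lam1) > 0)

omega0 = np.array([0.0, 1.0, 2.0])   # toy class centres in R^3 / R1
omega1 = np.array([0.0, 3.0, 0.5])
x = np.array([0.0, 2.8, 0.7])
print(d_tr(x, omega0), d_tr(x, omega1), classify(x, omega0, omega1))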
http://arxiv.org/abs/2306.01635v1
20230602155330
Q&A: Query-Based Representation Learning for Multi-Track Symbolic Music re-Arrangement
[ "Jingwei Zhao", "Gus Xia", "Ye Wang" ]
cs.SD
[ "cs.SD", "eess.AS" ]
Q&A: Query-Based Representation Learning for Multi-Track Symbolic Music re-Arrangement Jingwei Zhao, Gus Xia, Ye Wang July 31, 2023 ================================================================================================================ Music rearrangement is a common music practice of reconstructing and reconceptualizing a piece using new composition or instrumentation styles, which is also an important task of automatic music generation. Existing studies typically model the mapping from a source piece to a target piece via supervised learning. In this paper, we tackle rearrangement problems via self-supervised learning, in which the mapping styles can be regarded as conditions and controlled in a flexible way. Specifically, we are inspired by the representation disentanglement idea and propose Q&A, a query-based algorithm for multi-track music rearrangement under an encoder-decoder framework. Q&A learns both a content representation from the mixture and function (style) representations from each individual track, while the latter queries the former in order to rearrange a new piece. Our current model focuses on popular music and provides a controllable pathway to four scenarios: 1) re-instrumentation, 2) piano cover generation, 3) orchestration, and 4) voice separation. Experiments show that our query system achieves high-quality rearrangement results with delicate multi-track structures, significantly outperforming the baselines. § INTRODUCTION It is sometimes easy to craft an idea for a melody but usually hard to frame a good arrangement. Formally, arrangement refers to the form of a musical piece, typically with textures and voicing carefully designed for multiple instruments as a unique style. On top of that, a piece can also be rearranged to convey new feelings. Such rearrangement scenarios include piano cover generation from multi-track music, multi-track orchestration from piano, and re-instrumentation using varied instruments, which are all common tasks in music practice. While much progress has been witnessed in automatic music generation <cit.>, rearrangement remains a challenging problem. Among the various ways to rearrange a piece, most studies have focused on reduction from complex forms to simpler ones, such as generating piano covers from multi-track music. The reduction is typically done by masking the least significant notes, either identified by rule-based criteria <cit.> or learned by supervision <cit.>. While rearrangement in this way is generally faithful, it tends to produce sparse or repetitive textures that fall short of creativity. More recent works have also taken on simple-to-complex rearrangement, such as orchestration <cit.>. While these works are still fully supervised, the scarcity of paired piano and multi-track data remains a major problem in this direction. Another considerable challenge for music rearrangement lies in multi-track modelling. Previous works have typically interpreted “track” as “instrument” and merged individual tracks of the same instrument class to simplify the problem <cit.>. However, instrument alone is not necessarily a good representative of a multi-track system in symbolic music. For example, pop music often has two guitar tracks with quite different functions – a melodic one and a harmonic one. When merged together, the distinctive texture structures of each track become less transparent, which may add an extra burden to the model.
In this paper, we aim to approach multi-track music rearrangement while balancing faithfulness with creativity. We render the content of a source piece using the style from a reference piece that is free to choose. In terms of the “style” of a multi-track piece, apart from instruments, we believe the function of each component track is also important. Specifically, we consider the function of a track as its texture density distribution along the time- and pitch-axes, respectively, which can describe both the distinctive intra-track structures (e.g., melodic v.s. harmonic) and the inter-track dependencies (e.g., pitch range and voicing). We use track functions as queries in a query-based track separation process to reconstruct individual tracks from a track-wise condensed mixture. Under the VAE framework <cit.>, we devise a pipeline consisting of four components: 1) an encoder that maps a mixture to the latent space; 2) a query-net <cit.> that encodes function features of each track; 3) a Transformer-based <cit.> query system that separates each track from the mixture at the latent representation level; and 4) a decoder that reconstructs each separated track. At inference time, our model can query a piece and rearrange it with diverse track functions. We name our model after Q&A (Query & re-Arrange), which provides a unified solution to a range of multi-track music rearrangement tasks, including: 1) re-instrumentation – to rearrange a multi-track piece with a new track system; 2) piano cover generation – to rearrange a multi-track piece into piano solo; and 3) orchestration – to rearrange a piano piece with a variable types of instruments in a variable number of tracks. By inferring track functions as voice hints, our model can additionally handle 4) voice separation – to separate distinctive voicing tracks (assuming a preset total number of voices) from an ensemble mixture by generating each track. Figure <ref> shows the relations among the four tasks. Our current model focuses on pop music rearrangement. We also test our model's voice separation performance on string quartets and Bach chorales. Experimental results show that our model not only generates high-quality arrangements, but also maintains fine-grained symbolic track structures with musically intuitive and playable textures for each track. In summary, our contributions in this paper are as follows: * A versatile rearrangement model: We present Q&A[https://github.com/zhaojw1998/Query-and-reArrangehttps://github.com/zhaojw1998/Query-and-reArrange], the first unified framework for re-instrumentation, piano cover generation, orchestration, and voice separation. The rearrangement results demonstrate state-of-the-art quality over existing models for similar purposes. * Function-aware multi-track music modelling: We design instrument-agnostic track functions for multi-track modelling, which can better describe the distinctions of parallel tracks and their dependencies. This method is applicable to a wider range of music generation tasks. * Query-based representation learning: We introduce a self-supervised query system separating parts from the mixture at latent representation level. Experiments show that our model learns style representations of each part disentangled from the mixture content, demonstrating interpretable and controllable generative modelling. § RELATED WORK §.§ Symbolic Music Rearrangement Existing studies on music rearrangement commonly rely on supervised learning to map a source piece to a target one. 
For example, Crestel and Esling crestel2016live project piano solo to orchestra by training a seq2seq model on a classical repertoire of paired data <cit.>. Dong et al. dong2021towards approach automatic instrumentation by predicting the instrument attribute of each note in a track-wise condensed mixture. Models for piano reduction can have more rule-based designs <cit.>, but are still generally under supervised frameworks. Except for several works that consider difficulty level as condition <cit.>, most models cannot steer the rearrangement process or change the composition style. In this paper, instead of supervised mapping, we render the content of a source piece using the style from a reference piece. In terms of content, we preserve the general melodic and harmonic structures. As for style, we introduce a new track system, i.e., textural functions of each track along with the instruments to play them, to reconceptualize the source piece. Our methodology can be formalized as composition style transfer <cit.> while existing research most relevant to us is <cit.>, which approaches re-instrumentation by transferring instrument timbres from different references. While Hung et al. hung2019musical still require supervision from audio-symbolic pairs, our model is fully symbolic-based, self-supervised, and unified for re-instrumentation, piano cover generation, and orchestration. §.§ Multi-Track Music Modelling Multi-track music is an arrangement form commonly seen in accompaniment, symphony, ensembles, etc. However, it is very challenging for machines to understand multi-track data. To capture inter-track dependency, mainstream approaches either distribute vari-instrument tracks into parallel data channels <cit.> or incorporate instrument labels as part of note event tokens <cit.>. Such methods are ideal for CNN-based and language models, respectively, yet both inevitably merging co-instrument tracks together. The event-based approach additionally enforces a positional relation to parallel tracks that are not sequentially ordered, which can damage the intrinsic structure of multi-track music <cit.>. More recently, several works target at these issues and support generating co-instrument tracks without a sequential assumption <cit.>. However, these models are not applicable to general music rearrangement. In this work, apart from instrument, we introduce track function as an equally (if not more) important feature to describe and distinguish individual tracks in multi-track music. We define and use the function of a track to represent its texture and voicing structures. We further model multi-track music via self-supervised learning, i.e., using each function to query and separate corresponding tracks from a mixture. § METHODOLOGY We propose Q&A, a query-based algorithm for multi-track music rearrangement under an encoder-decoder framework. An overview of our model is shown in Figure <ref>. In this section, we first introduce our data representation of multi-track music and track functions in Section <ref>. Then, we introduce our model architecture and training objectives in Section <ref> and <ref>. Finally, in Section <ref>, we elaborate on how Q&A can be applied to music rearrangement at inference time. §.§ Data Representation §.§.§ Multi-Track Music Given multi-track music x with N tracks, our model aims to reconstruct each track x_n, where n=1, 2, ⋯, N, from a track-wise condensed mixture x_mix. 
We represent x_n in the modified piano-roll format proposed by <cit.>, and x as an N-track collection. Formally, x = x_1:N{x_n}_n=1^N, where individual track x_n is a P× T matrix. P=128 represents 128 MIDI pitches, and T is the time dimension. Each data entry (p, t) of x_n is an integer value representing note duration on the onset positions. The condensed mixture x_mix is also a P× T matrix where each entry is the position-wise maximum value across N tracks. In this paper, we consider 2-bar (8-beat) music data segments in 4/4 time signature quantized at 1/4 beat unit, deriving T=32 time steps for each music sample. We also focus on composition-level aspects while disregarding performance-level dynamics like MIDI velocity. §.§.§ Track Function We define the function of each track x_n as its texture density features computed from the modified piano-roll format. Specifically, we define descriptors of pitch function f^p(·) and time function f^t(·) as follows: f^p(x_n) = rowsum(1_{x_n > 0}) / T, f^t(x_n) = colsum(1_{x_n > 0}) / P, where 1_{·} is the indicator function expressing individual note onset entries as 1. rowsum(·) and colsum(·) each sums up one dimension, resulting in a P-D and T-D vector, respectively. f^p(x_n) is essentially a pitch histogram, which is related to key, chord, and the pitch range of x_n. f^t(x_n) indicates voice densities of x_n and is related to rhythmic patterns and grooves. Each vector is normalized to [0, 1]. §.§ Model Architecture Figure <ref> shows the overall architecture of our model consisting of four key components: 1) a mixture encoder, 2) a function query-net, 3) a track separator, and 4) a track decoder. §.§.§ Mixture Encoder As x_mix is a single-track polyphony, we use the encoder module of PianoTree VAE <cit.>, the state-of-the-art polyphonic representation learning model, to encode a 256-D mixture representation z_mix^x. The PianoTree encoder first converts x_mix to a compact and ordered note event format, where each event contains pitch and duration attributes. It then applies a pitch-wise bi-directional GRU to summarize concurrent notes at time step t to an intermediate representation simu_note_t. On top of simu_note_1:T, it further applies a time-wise GRU to encode the full mixture representation z_mix^x. The encoding process of PianoTree VAE reflects hierarchical musical understanding from note via chord to grouping, which is interpretable and has proved beneficial for a range of downstream generation tasks <cit.>. §.§.§ Function Query-Net The function query-net consists of two VAEs that encode 128-D representations z^p(x)_n and z^t(x)_n for track functions f^p(x_n) and f^t(x_n), respectively. The pitch and time function encoders each consist of a 1-D convolutional layer with kernel size 12 and 4, respectively. Both are followed by ReLU activation <cit.> and 1-D max-pooling with kernel size 4 and stride 4. The decoders consist of two fully-connected layers with ReLU activation in between. It is noted that, with the encoder design, we leverage the translation invariance property of convolution and the blurry effect of pooling <cit.> to discourage the separator from simply retrieving notes that are implied in the track functions. By doing so, our model learns a general style representation instead of the exact density values from the track function. Similar method is also adopted in other VAE architectures to realize disentanglement <cit.>. 
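To make the track-function descriptors above concrete, the following NumPy sketch computes f^p(x_n) and f^t(x_n) from a P×T modified piano-roll. It is a minimal illustration; the array layout and the toy random roll are our own assumptions, not the released implementation.

import numpy as np

P, T = 128, 32  # MIDI pitches x time steps (2 bars quantized at 1/4 beat)

def track_functions(x_n):
    # x_n: (P, T) array whose entry (p, t) holds the note duration at onset (p, t), else 0
    onsets = (x_n > 0).astype(float)   # indicator of note onsets
    f_p = onsets.sum(axis=1) / T       # pitch function: rowsum / T, a pitch histogram
    f_t = onsets.sum(axis=0) / P       # time function: colsum / P, onset density over time
    return f_p, f_t                    # both lie in [0, 1]

rng = np.random.default_rng(0)
toy_roll = (rng.random((P, T)) < 0.05) * rng.integers(1, 8, size=(P, T))
f_p, f_t = track_functions(toy_roll)
print(f_p.shape, f_t.shape, float(f_p.max()), float(f_t.max()))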
§.§.§ Track Separator The track separator is a 2-layer Transformer encoder with 8 attention heads, 0.1 dropout ratio, and GELU activation <cit.>. The hidden dimensions of self-attention d_model and feed-forward layers d_ff are 512 and 1024, respectively. The input to the separator is a sequence of N+1 latent codes including mixture z_mix^x and track functions z^f(x)_1:N, where z^f(x)_n denotes the concatenation [z^p(x)_n; z^t(x)_n] as a unified track function representation. We also add learnable instrument embeddings to the corresponding tracks. It is noted that Transformer is permutation-invariant to the index of track functions so that no sequential assumption is enforced. While the self-attention mechanism allows each track function as query to attend to the mixture, it also encourages queries to attend to each other for inter-track dependency. We denote the output of the Transformer as z^x_1:N, which are the expected latent representations for individual tracks x_1:N. §.§.§ Track Decoder We use the decoder module of PianoTree VAE to reconstruct each track x_n from representation z^x_n. The decoder involves time- and pitch-wise uni-directional GRUs, which mirror the structure of the encoder. To better distinguish parallel tracks, we additionally provide the decoder with an auxiliary time sequence of symbolic features, which are priorly predicted from z^x_n. Specifically, we consider three auxiliary features: pitch centre, voice intensity, and rhythm, which can serve as strong hints to determine if one track has melodic, harmonic, and static properties <cit.>. Both pitch centre and voice intensity are time sequences of scalar values, which indicate centre pitch curve and voice number progression of a track, both normalized to [0, 1]. The rhythm feature is a time sequence of onset probabilities, which represents the rhythmic pattern in time. We use a uni-directional GRU to predict the symbolic features from z^x_n and feed them to the corresponding time steps of the time-wise GRU in the PianoTree Decoder. Similar method is also applied for disentanglement and reconstruction in <cit.>. §.§ Training Objectives The loss terms in our model include 1) reconstruction loss for each track, track functions, and auxiliary symbolic features, and 2) KL loss between all latent representations and standard normal distribution. Our model is essentially a variational autoencoder since the loss function can be formalized as the evidence lower bound (ELBO) of distribution p(x), where x=x_1:N is the multi-track music. The posterior distribution of the VAE is defined as the product of three modules including mixture encoder, query-net encoder, and track separator: q_ϕ(𝐳| x) := q_ϕ_1(z^x_mix| x_mix) ∏_n=1^N q_ϕ_2(z^f(x)_n | f(x_n)) ∏_n=1^N q_ϕ_3(z^x_n| z^x_mix, z^f(x)_1:N), where ϕ := [ϕ_1, ϕ_2, ϕ_3] denotes the parameters of the three modules, and 𝐳 := [z^x_mix, z^x_1:N, z^f(x)_1:N]. In Equation (<ref>), we collectively express two types of track functions as f(x_n) for conciseness. It is noted that both x_mix and f(x_n) are deterministically transformed from x and hence are not explicitly written in the left-hand side of Equation (<ref>). The reconstruction distribution is defined as the product of three reconstruction terms of query-net decoder, symbolic feature decoder, and track decoder: p_θ(x|𝐳) := ∏_n=1^N p_θ_1(f(x_n)| z^f(x)_n) ∏_n=1^N p_θ_2(r(x_n)| z^x_n) ∏_n=1^N p_θ_3(x_n | z^x_n, r(x_n)), where θ := [θ_1, θ_2, θ_3] denotes the parameters of the three decoders. 
r(x_n) denotes the auxiliary symbolic features for track x_n. The p_θ_1 term can be interpreted as a regularizer to the overall output distribution p_θ(x|𝐳). Finally, the overall loss function is as follows: ℒ(θ, ϕ; x) = - 𝔼_𝐳∼ q_ϕlog p_θ(x |𝐳) + β q_ϕ(𝐳| x)𝒩(0, 1), where β is a balancing parameter <cit.>. §.§ Style Transfer At inference time, Q&A can rearrange a multi-track source piece x=x_1:N using the track system (style) from a reference piece y=y_1:M, which can be freely selected. Let Enc^m, Enc^f, Sep, and Dec^tk be the mixture encoder, function encoder, track separator, and track decoder, respectively, the rearrangement process takes a pipeline as follows: z^x_mix = Enc^m(x_mix), z^f(y)_m = Enc^f(f(y_m)), m=1, 2, ⋯, M, z^x^'_1:M = Sep(z^x_mix, z^f(y)_1:M), x^'_m = Dec^tk(z^x^'_m), m=1, 2, ⋯, M, where x^'=x^'_1:M is the rearrangement result. x^' inherits the general harmonic structures from x, while also introducing y's track system with new textures, grooves, and track voicing played by a different set of instruments. In addition to manual selection, reference y can be automatically searched from a database 𝒟. To guarantee faithful and natural rearrangement results, we develop a simple heuristic to sample y that is “matched” with x as follows: y = y ∈𝒟argmax [ cos(f(y_mix), f(x_mix)) + α·ϵ_y ], where cos(·, ·) measures cosine similarity between the functions (essentially, texture densities) of mixture y_mix and x_mix, ϵ_y ∼𝒩(0, 1) is a noise term for balancing with generality, and α is a balancing parameter. In cases when y and x are very dissimilar, our model robustly follows x's harmony and y's texture and voicing in a general sense of style transfer. § EXPERIMENTS §.§ Dataset Our model is trained on Slakh2100 <cit.> and POP909 <cit.> datasets. In specific, Slakh2100 contains 2K MIDI files of multi-track music, most of which are in pop style. Instruments in Slakh2100 are categorized into 34 classes (while co-instrument tracks are not merged) and each piece contains at least one track of piano, guitar, bass, and drum. In our experiment, we discard the drum track because it does not follow the standard 128-pitch protocol used in other tracks. POP909 is a dataset of 1K pop songs in piano arrangement created by professional musicians. Each piece consists of three piano tracks for vocal melody, lead instrument melody, and piano accompaniment, respectively. By jointly training our model on both datasets, our model can rearrange multi-track music to piano and vice versa. §.§ Training For training Q&A, we use the official training split of Slakh2100 while randomly splitting POP909 (at song level) into training, validation, and test sets with a ratio of 8:1:1. We further augment training data by transposing each piece to all 12 keys. Our model comprises 19M learnable parameters and is trained with a mini-batch of 64 2-bar segments for 30 epochs on an RTX A5000 GPU with 24GB memory. We use Adam optimizer <cit.> with a learning rate from 1e-3 exponentially decayed to 1e-5. We apply teacher forcing <cit.> for the decoder GRUs in PianoTree VAE with a rate from 0.8 to 0. For the parameter β in Equation (<ref>), we apply KL annealing following <cit.> and set β increasing from 0 to 0.5 for z^f(x)_n and from 0 to 0.01 for the other two factors. §.§ Rearrangement Showcase An 8-bar rearrangement example (by processing every 2 bars independently) is shown in Figure <ref>. In specific, this is an orchestration example, where we use Q&A to rearrange a piano piece into multi-track music. 
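The reference used in such showcases is drawn with the matching heuristic above; a minimal sketch follows. It is our own illustration rather than the released code, and it assumes the concatenated pitch and time densities of the mixture serve as the feature f(·); the names mixture_feature and sample_reference are ours.

import numpy as np

def mixture_feature(x_mix, P=128, T=32):
    # one possible choice of f: concatenated pitch and time densities of the mixture
    onsets = (x_mix > 0).astype(float)
    return np.concatenate([onsets.sum(axis=1) / T, onsets.sum(axis=0) / P])

def sample_reference(x_mix, database, alpha=0.2, rng=None):
    # pick y maximizing cos(f(y_mix), f(x_mix)) + alpha * eps_y, with eps_y ~ N(0, 1)
    if rng is None:
        rng = np.random.default_rng()
    fx = mixture_feature(x_mix)
    scores = []
    for y_mix in database:
        fy = mixture_feature(y_mix)
        cos = float(fy @ fx) / (np.linalg.norm(fy) * np.linalg.norm(fx) + 1e-8)
        scores.append(cos + alpha * rng.standard_normal())
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
x_mix = (rng.random((128, 32)) < 0.05).astype(float)
database = [(rng.random((128, 32)) < 0.05).astype(float) for _ in range(10)]
print("index of sampled reference:", sample_reference(x_mix, database, alpha=0.2, rng=rng))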
The piano source x is from POP909 while we sample reference y of the same length from Slakh2100 following Equation (<ref>) with α=0.2. We add f(x_mel), the function of x's melody track, to y's track functions as an additional query to guarantee the preservation of the theme melody. Meanwhile, we conduct posterior sampling over z^x^'_mel to encourage melody improvisation. In this example, our model rearranges the piano piece into 11 tracks with coherent and delicate multi-track textures. Among the 11 tracks, guitar and organ are each used twice for melodic and harmonic purposes, respectively. Our model preserves the original harmony quite faithfully. Particularly, it captures the added chord notes in x (highlighted by red dotted lines in Figure <ref>) and retains the tension from the original piece. At the same time, it introduces new groove patterns, bass lines, lead instrument melodies, and a theme melody variation to reconceptualize the piece with more creativity. §.§ Subjective Evaluation on Rearrangement Based on composition style transfer, Q&A is a unified solution for a range of music rearrangement tasks. In this paper, we focus on orchestration, piano cover generation, and re-instrumentation. For evaluation, we introduce three existing models as baselines for each of the three tasks as follows: BL-Orch.: We introduce Arranger by <cit.> as our baseline for the orchestration task. We select the BiLSTM variant pre-trained on Lakh MIDI dataset <cit.>, which is a superset of Slakh2100 that we use. Orchestration by Arranger is a note-by-note mapping process, where each note in the piano source is mapped to a multi-track target by assigning instruments under a classification framework. BL-Pno.: We introduce Poly-Dis by <cit.> as our baseline for the piano cover generation task. This model is pre-trained on POP909 and is also based on style transfer. Specifically, it can generate piano cover for a multi-track source piece by reconceptualizing its chord progression using the texture from a piano reference. In our case, we provide Poly-Dis with the same piano reference as our model and extract the chord progression of the source music using the algorithm in <cit.>. BL-ReIns.: We introduce the model by <cit.> as our baseline for the re-instrumentation task. This model rearranges a source piece using the synthesized audio timbre feature from a reference piece as instrumentation style. We train this model on Slakh2100 using both the MIDI and the aligned audio that is synthesized using professional-level sample-based virtual instruments <cit.>. Besides the baselines, we also introduce three variants of our model to analyze the impact of each key component. Specifically, Q&A-T uses only time function as query to rearrange a piece. Q&A-P, on the other hand, uses pitch function only. The final variant Q&A w/o Ins. uses both functions but is trained without instrument embedding. §.§.§ Evaluation Details We invite participants to subjectively evaluate the rearrangement quality of all models through a double-blind online survey. Our survey consists of 15 rearrangement sets, each of which contains one original source piece followed by five rearrangement samples (four by our model variants and the rest by one of the baselines). Among the 15 sets, there are 5 for piano cover generation, orchestration, and re-instrumentation, respectively. The original piece x is an 8-bar musical phrase randomly selected from the validation/test set of either POP909 or Slakh2100 depending on the task. 
The reference piece y is sampled from the other dataset following Equation (<ref>), where we set α=0.2. For re-instrumentation, both x and y are in Slakh2100 but from different splits (validation and test sets, respectively). We use the same y for each model that requires a reference piece for style transfer. In our survey, we request each participant to listen to 5 rearrangement sets and evaluate each sample. Both the set order and the sample order in each set are randomized. The evaluation is based on a 5-point scale from 1 (very low) to 5 (very high) for four criteria as follows: * DOA: The degree of arrangement. A low DOA refers to a note-by-note copy-paste from the original music, while a high DOA means the music is appropriately restructured to fit the new track system and instruments. * Creativity: How creative the rearrangement is. * Naturalness: How likely a human arranger creates it. * Musicality: The overall musicality. §.§.§ Overall Rearrangement Performance A total of 26 participants (8 females and 18 males) with various musical backgrounds have completed our survey. We first show the statistical results for overall rearrangement performance disregarding the specific tasks. As shown in Figure <ref>, the height of each bar represents the mean rating value and the error bar represents the standard error computed via within-subject (repeated-measures) ANOVA <cit.>. Among our model variants, Q&A-T queries the mixture by time function only, and Q&A w/o Ins. has no instrument embedding. Both models essentially have fewer constraints during training and hence can produce more diverse results, which may explain the higher ratings on DOA and Creativity for both models. However, such results can also be less natural or musical. On the other hand, Q&A-P uses pitch function only and yields results inferior to other variants. This finding shows that pitch function alone is not sufficient to capture track structures in multi-track music. Indeed, pop music (at least in our datasets) is generally better characterized in grooves than in chords, as the latter can often fit in a few off-the-shelf template progressions. In terms of Naturalness and Musicality, our standard Q&A model makes a better balance and acquires significantly better results (p-value p<0.01) than all variants and the baselines ensemble. §.§.§ Task-Specific Performance We are also interested in our model's performance on each concrete rearrangement task. As shown in Figure <ref>, we show our model's task-specific ratings (top in four variants) on the same set of criteria in comparison to corresponding baseline models. We notice that BL-Orch. earns the highest rating for Naturalness in the orchestration task, which is not surprising because it adopts a note-by-note mapping strategy that virtually reproduces the original human-created music. Accompanied by this strategy is a lower degree of orchestration (DOA) and Creativity. On the other hand, our model demonstrates a more balanced and superior performance. It also outperforms BL-Orch. in Musicality as it can introduce more diversified instruments and properly rearrange the source music with new texture and voicing. In particular, we report a significantly better performance (p-values p<0.05) of our model in Musicality than all baselines in all three tasks. §.§ Objective Evaluation on Voice Separation One may wonder if the exceptional performance of our model in creativity and musicality sacrifices the faithfulness to the original music. 
To this end, we conduct an additional experiment on the task of voice separation and compare our model with note-by-note decision models — a BiLSTM and a Transformer encoder tailored for this task in <cit.>. Specifically, voice separation is a special case of orchestration, where we only aim to separate a mixture into individual voicing tracks without any creative factor. Note that, in such a case, note-by-note classification methods have a natural advantage over representation learning based methods because the latter gives less importance to accurate control on low-level tokens. Still, the faithfulness of our model can be validated if it can also tackle this problem. In voice separation, since the goal is to separate individual tracks, the ground-truth track functions cannot be the model input. Hence we introduce a new variant Q&A-V, which applies an additional GRU decoder to infer function representations z^f(x)_1:N from mixture representation z_mix^x, and then generate each track with inferred track functions. In our case, N=4 is preset and the inference process is conducted from high voice to low voice autoregressively. We load the rest part of the model with pre-trained parameters from standard Q&A and fine-tune the whole model on string quartets in MusicNet <cit.> and Bach chorales in Music21 <cit.>, respectively. We process the data into 8-beat segments irrespective of time signature. At test time, if a certain note in the mixture is not recalled by our model, we look for its nearest-neighbour note that is generated and assign its voice. If two note assignments form polyphonic voice, we then re-assign the note with least added distance to its second-nearest voice, which is a simple greedy-based rule. As both datasets are tiny and prone to unbalanced train-test split, we evaluate our model by 10-fold cross validation. We show the test results (percentage accuracy) in Table <ref>. Compared to the baselines, our Q&A-V model yields generally comparable results, although with a noticeable gap on Bach chorales. Specifically, Bach chorales come with very regular and transparent counterpoints, which are a good fit for note-by-note classification frameworks to separate voices. On the other hand, string quartets have much more complex and even overlapped voices that are harder to separate. For this case, our model yields highly competitive performance in general. When entry hints are provided, our model achieves the best with a good margin to both baselines. § CONCLUSION In conclusion, we contribute Q&A, a novel query-based framework for multi-track music rearrangement. The main novelty lies first in our application of a style transfer methodology to interpret the general rearrangement problem. By defining and utilizing track functions, we effectively capture the texture and voicing structure of multi-track music as composition style. Under a self-supervised query system, the number of tracks and instruments to rearrange a piece is virtually unconstrained. Q&A serves as a unified solution for piano cover generation, orchestration, re-instrumentation, and voice separation. Extensive experiments prove that it can both creatively rearrange a piece and faithfully conserve the essential structures. We believe that our contributions will inspire further advancements in computer music research, opening doors to broader possibilities for universal music co-creation. named
http://arxiv.org/abs/2306.03623v1
20230606121912
Spike-based computation using classical recurrent neural networks
[ "Florent De Geeter", "Damien Ernst", "Guillaume Drion" ]
cs.NE
[ "cs.NE", "cs.LG" ]
Spike-based computation using classical recurrent neural networks Florent De Geeter, Damien Ernst, Guillaume Drion July 31, 2023 ========================================================================== Spiking neural networks are a type of artificial neural network in which communication between neurons consists only of events, also called spikes. This property allows such networks to perform asynchronous and sparse computations and therefore to drastically decrease energy consumption when run on specialized hardware. However, training these networks is known to be difficult, mainly due to the non-differentiability of the spike activation, which prevents the use of classical backpropagation. This is because state-of-the-art spiking neural networks are usually derived from biologically inspired neuron models, to which machine learning methods are then applied for training. Nowadays, research on spiking neural networks focuses on the design of training algorithms whose goal is to obtain networks that compete with their non-spiking counterparts on specific tasks. In this paper, we attempt the symmetrical approach: we modify the dynamics of a well-known, easily trainable type of recurrent neural network to make it event-based. This new RNN cell, called the Spiking Recurrent Cell, therefore communicates using events, i.e. spikes, while remaining completely differentiable. Vanilla backpropagation can thus be used to train any network made of such RNN cells. We show that this new network can achieve performance comparable to other types of spiking networks on the MNIST benchmark and its variants, Fashion-MNIST and Neuromorphic-MNIST. Moreover, we show that this new cell makes the training of deep spiking networks achievable. § INTRODUCTION In the last decade, artificial neural networks (ANNs) have become increasingly powerful, overtaking human performance in many tasks. However, the functioning of ANNs diverges strongly from that of biological brains. Notably, ANNs require a huge amount of energy for training and inference, whereas biological brains consume much less power. This energy greediness prevents ANNs from being used in some environments, for instance in embedded systems. One of the solutions considered for this problem is to replace the usual artificial neurons with spiking neurons, mimicking the function of biological brains. Spiking Neural Networks (SNNs) are considered the third generation of neural networks <cit.>. Such networks, when run on neuromorphic hardware (like Loihi <cit.>, for instance), can show very low power consumption. Another advantage of SNNs is their event-driven computation. Unlike usual ANNs, which propagate information through each layer and each neuron at every forward pass, SNNs only propagate information when a spike occurs, leading to more event-driven and sparse computations. Nonetheless, the development of SNNs faces a challenging problem: the activation function that is usually used to generate spikes is not differentiable, therefore preventing any training using usual backpropagation <cit.>, which is at the core of ANNs' success. Several solutions are being considered nowadays, as discussed in <ref>. The classical approach consists in using a simple model for the spiking neurons, to which learnable weights are added. Then, methods inspired by classical machine learning are used for training, either by directly training the SNN or by first training an ANN and then converting it into an SNN.
In this paper, we approach the problem from the other side: from the well-known Gated Recurrent Cell (GRU) <cit.>, we derive a new event-based recurrent cell, called the Spiking Recurrent Cell (SRC). SRC neurons communicate via events, generated with differentiable equations. The SRC and its equations are described in <ref>. Such event-based cell permits to leverage the potential of classical recurrent neural networks (RNN) training approaches to create networks that compute using spikes. The performance of SRC-based RNNs has been tested on neuromorphic versions of classical benchmarks, such as the MNIST benchmark and some variants, whose results are discussed in <ref>. SNNs built with SRCs achieve comparable results to other types of SNNs on these benchmarks. § RELATED WORKS This section aims at introducing RNNs and SNNs. Different approaches to train SNNs are also described. §.§ Recurrent Neural Networks RNNs are a type of neural networks that carry fading memory by propagating a vector, called the hidden state, through the time. More precisely, a RNN is usually composed of recurrent layers, also called recurrent cells, and classical fully-connected layers. Each recurrent cell has its own hidden state. At each time step, a new hidden state is computed from the received input and the hidden state. This allows RNNs to process sequences. Mathematically, this gives: h[t] = ϕ x[t], h[t-1]; Θ where h[t] and x[t] are respectively the hidden state and the input at time t, ϕ is the recurrent cell and Θ its parameters. Training RNNs has always been difficult, especially for long sequences, due to vanishing and exploding gradients <cit.>. Indeed, RNNs are trained using backpropagation through time (BPTT) <cit.>. This algorithm consists in first unfolding the RNN in time, i.e. turning it into a very deep feedforward network whose number of hidden layers is equal to the sequence length and whose weights are shared among layers. Then usual backpropagation is applied to this network. However, due to the huge number of layers, gradient problems are much more prone to appear than in usual feedforward networks. There exist several solutions to solve or at least attenuate these problems. For instance, exploding gradients can be easily solved using gradient clipping <cit.>. But the most notable improvement in RNNs was the introduction of the gating mechanism: gates, i.e. vectors of reals between 0 and 1, are used to control the flow of information, i.e. what is added to the hidden state, what is forgotten, etc. This has led to the two most known recurrent cells: the Long-Short Term Memory (LSTM) <cit.> and the Gated Recurrent Unit (GRU) <cit.>. LSTM uses 3 gates, while GRU is more lightweight and uses 2 gates. The new recurrent cell introduced in this paper (<ref>) is a derivation of GRU and can be expressed as a usual recurrent neural network. §.§ Spiking Neural Networks Biological neurons communicate using spikes, i.e. short pulses in neuron membrane potential, generated by a non-linear phenomena. These membrane potential variations are created from the flow of ions that go in and out of the cell. There exist a lot of different models to model neuron excitable membranes, the most notable being the Hodgkin-Huxley model <cit.>, and similar models called conductance-based models. Such models represent the neuron membrane as a capacitance in parallel with several voltage sources and variable conductances that respectively model the electrochemical gradients that apply on the different ions and the ions gates. 
Despite being very physiological, this model contains too many equations and parameters to be used in machine learning. That is why much more simple, phenomenological models are usually used to model spiking neurons in a SNN. A classical model of this type is the Leaky Integrate-and-Fire (LIF) model. It is composed of a leaky integrator, to integrate the input current into membrane potential variation, associated to a reset rule that is triggered once a threshold potential is reached. Once the threshold potential is reached, a spike is emitted and the potential is reset to its resting value. Unlike conductance-based models, the LIF model generates binary spikes, i.e. spikes that last one timestep and whose value is always 1. Mathematically, this gives: V[t] = α_V V[t - 1] + x[t] if V[t] > V_thresh, then s[t] = 1 and V[t] = V_rest otherwise s[t] = 0 where V[t], x[t] and s[t] are the membrane potential, the input and the output at time t, respectively, α_V is the leakage factor, V_thresh the threshold and V_rest the resting potential. The LIF model is far less physiological then conductance-based models, but it is much more lightweight and retains the core of spike-based computation. LIF neurons can be organized in layers to form a complete network. The question is now how to train such a network ? Due to the non-differentiable reset rule, usual backpropagation can not be used (or at least can not be used directly). To achieve reasonable training performance, many approaches to train SNNs have been proposed <cit.>, which can be split into three categories. First, SNNs can be trained using unsupervised learning rules, which are local to the synapses <cit.>. These learning rules are often derived from the Spike-timing-dependent plasticity process <cit.>, which strengthens or weakens synaptic connections depending on the coincidence of pre and post-synaptic spikes. This non-optimization-based training method is usually slow, often unreliable, and lead to unsubstantial performance. The second category is an indirect training. It consists in first training a usual ANN (with some constraints) and then converting it into a SNN <cit.>. Indeed, ANNs can be seen as special spiking networks that uses a rate-based coding scheme. These methods allow to use all the algorithms developed for training ANNs, and thus can reach high performance. However, they do not unlock the full potential of spiking networks, as rate-coding is not the only way of transmitting information through spikes. Also, rate-based coding usually results in a higher number of generated spikes, weakening the energy-efficiency of SNNs. The third and last approach is to rely on gradient-based optimization to directly train the SNN <cit.>. These methods usually smooth the entire networks or use a surrogate smoothed gradient for the non-differentiable activation to allow backpropagation. SNNs trained by gradient-based algorithms have achieved good performance, even competing with ANNs on some benchmarks. Notably, <cit.> used a smooth spike-generating process which replaces the non-differentiable activation of the LIF neurons. This approach is closely related to ours, as they both use soft non-linear activations to generate spikes. § SPIKING RECURRENT CELL The new spiking neuron introduced in this paper is derived from the well-known recurrent neural network GRU. This section describes its derivation and the different parts of the neuron, namely the spike-generation and the inputs-integration parts. 
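Before the derivation, it may help to see the LIF dynamics recalled in the previous section in code form: the hard threshold-and-reset below is precisely the non-differentiable step that the SRC construction avoids. This is a minimal sketch with illustrative parameter values; the leakage factor, threshold and input level are not taken from the paper.

import numpy as np

def lif_run(inputs, alpha_v=0.9, v_thresh=1.0, v_rest=0.0):
    # simulate one LIF neuron: leaky integration followed by a hard reset rule
    V, spikes = v_rest, []
    for x_t in inputs:
        V = alpha_v * V + x_t          # V[t] = alpha_V * V[t-1] + x[t]
        if V > v_thresh:               # non-differentiable threshold
            spikes.append(1)
            V = v_rest                 # reset to resting potential
        else:
            spikes.append(0)
    return np.array(spikes)

spikes = lif_run(np.full(50, 0.15))    # a constant input drives periodic firing
print("number of spikes over 50 steps:", int(spikes.sum()))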
§.§ Spike-generation As the starting point of the derivation of the SRC equations, another recurrent cell is used, itself derived from GRU: the Bistable Recurrent Cell (BRC) created by <cit.>. Its main property is its never-fading memory, created by the bistability of its neurons. Here are the equations of GRU: z[t] = σ(U_z x[t] + W_z h[t-1] + b_z) r[t] = σ(U_r x[t] + W_r h[t-1] + b_r) h[t] = z[t] ⊙ h[t-1] + (1 - z[t]) ⊙ tanh(U_h x[t] + r[t] ⊙ (W_h h[t-1]) + b_h) And here are the ones of BRC: z[t] = σ(U_z x[t] + w_z ⊙ h[t-1] + b_z) r[t] = 1 + tanh(U_r x[t] + w_r ⊙ h[t-1] + b_r) h[t] = z[t] ⊙ h[t-1] + (1 - z[t]) ⊙ tanh(U_h x[t] + r[t] ⊙ h[t-1] + b_h) Both use two gates (z and r) to control the flow of information. There are two major differences between GRU and BRC, highlighted in red. First, the memory in BRC is cellular, meaning that each neuron of the cell has its own internal memory that is not shared with the others, while in GRU all internal states can be accessed by each neuron. The second difference is the range of possible values of r: in GRU, it lies between 0 and 1, while in BRC, it lies between 0 and 2. This difference allows the BRC neuron to switch from monostability (r ≤ 1) to bistability (r > 1). These two properties of BRC, i.e. the cellular memory and the bistability, can be used to generate spikes. The cellular memory can represent the membrane potential of the spiking neurons, while the bistability is created by a local positive feedback, which is the first step of a spike. Indeed, a spike can be described in two steps: a fast local positive feedback that brings the potential to a high value, followed by a slower global negative feedback that brings the potential back to its resting value. Therefore, integrating such a negative feedback into the BRC equations allows the cell to generate spikes. This can be done by adding a second hidden state h_s which lags behind h (<ref>) and a new term in the update equation of h (highlighted in red in <ref>). As no information can be transmitted between neurons except when a spike occurs, the fast hidden state h is passed through a ReLU function to isolate the spikes from the small, subthreshold variations of h. This creates the output spike train s_out (<ref>). The input of the SRC, i.e. the integration of the input pulses, will be discussed afterwards; therefore we simply use x to denote the input used by the spike generation. This leads to the equations that generate spikes: h[t] = z ⊙ h[t-1] + (1 - z) ⊙ tanh(x[t] + r ⊙ h[t-1] + r_s ⊙ h_s[t-1] + b_h) z_s[t] = z_s^hyp - (z_s^hyp - z_s^dep) · 1/(1 + exp(-10 (h[t-1] - 0.5))) h_s[t] = z_s[t] ⊙ h_s[t-1] + (1 - z_s[t]) ⊙ h[t-1] s_out[t] = ReLU(h[t]) Two new gates (r_s and z_s) have to be introduced. To enforce that no computation can be achieved through alterations in the shape of a spike, the 4 gates no longer depend on learnable weights. Three of them are fixed to constant values: r = 2, r_s = -7 and z = 0. The fourth one, z_s, controls the speed at which h_s catches up with h: the lower, the faster. To create spikes with short depolarization periods, z_s should be low at depolarization potentials and larger at subthreshold potentials, mimicking the voltage dependence of ion channel time constants in biological neurons. This is modeled using <ref>, where z_s^hyp is the value at hyperpolarization potentials (low h) and z_s^dep the value at depolarization potentials (high h). In practice, z_s^hyp = 0.9 and z_s^dep = 0.
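Purely for illustration, this spike-generation update can be written as a small scalar simulation (a minimal sketch using the fixed gate values given above; the bias value and initial states are illustrative placeholders, and the input x is assumed given):

import numpy as np

def src_generate(x, b_h=-6.0, r=2.0, r_s=-7.0, z_s_hyp=0.9, z_s_dep=0.0):
    # x: neuron input at each timestep, shape (T,); with z = 0 the h[t-1] leak term vanishes
    T = len(x)
    h, h_s = 0.0, 0.0
    s_out = np.zeros(T)
    for t in range(T):
        # gate controlling how fast h_s catches up with h (fast at depolarized h)
        z_s = z_s_hyp - (z_s_hyp - z_s_dep) / (1.0 + np.exp(-10.0 * (h - 0.5)))
        # fast state: local positive feedback (r) and delayed negative feedback (r_s)
        h_new = np.tanh(x[t] + r * h + r_s * h_s + b_h)
        # slow state lags behind the previous fast state
        h_s = z_s * h_s + (1.0 - z_s) * h
        h = h_new
        s_out[t] = max(h, 0.0)      # ReLU isolates spikes from subthreshold variations
    return s_out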
Finally, the bias b_h controls the propensity of neurons to fire spikes: the higher, the easier. However, if it reaches values that are too high, the neurons may saturate. As this is a behavior that we would rather avoid, the bias should be constrained to always be smaller than some value. In the experiments, we have fixed this upper bound to -4. <ref> shows the behavior of one SRC neuron given different inputs x and biases b_h. It can be observed that for a high bias (<ref>), the neuron is able to spike even with a null input, while for a lower one (<ref>), the neuron remains silent. SNNs are often put forward for their very low energy consumption, due to the sparse activity of spiking neurons. It is thus important to be able to measure the activity of said neurons. In the context of SRC neurons, the spikes do not last exactly one timestep. It is therefore better to compute the number of timesteps during which spikes are emitted rather than the number of spikes. This brings us to define the relative number of spiking timesteps: 𝒯(s) = 1/T ∑_t=1^T H(s[t]) where H denotes the Heaviside step function. §.§ Input-integration The last point to be addressed before being able to construct networks of SRCs is how to integrate the input spikes. We have decided to use leaky integrators with learnable weights w_j: i[t] = α i[t-1] + ∑_j w_j s_j[t] where α is the leakage factor. To prevent the SRC from saturating due to large inputs, we also apply a rescaled hyperbolic tangent to i[t] to create the neuron input x[t]. The equations of a whole SRC layer are therefore, starting from the input pulses s_in up to the output pulses s_out: i[t] = α i[t-1] + W_s s_in[t] x[t] = ρ · tanh(i[t]/ρ) z_s[t] = z_s^hyp - (z_s^hyp - z_s^dep) · 1/(1 + exp(-10 (h[t-1] - 0.5))) h[t] = tanh(x[t] + r ⊙ h[t-1] + r_s ⊙ h_s[t-1] + b_h) h_s[t] = z_s[t] ⊙ h_s[t-1] + (1 - z_s[t]) ⊙ h[t-1] s_out[t] = ReLU(h[t]) To sum up, <ref> first integrates the input pulses using a leaky integrator. The result then passes through a rescaled hyperbolic tangent in <ref>. z_s is computed, based on h, in <ref>. This forms the input used by the spike generation part (<ref> and <ref>) to update h and h_s. Finally, <ref> isolates the spikes from the small variations of h and generates the output pulses. The rescaling factor ρ is set to 3, forcing x to lie between -3 and 3. Finally, like the other recurrent cells, SRCs can be organized in networks with several layers. § EXPERIMENTS This section describes the different experiments that were made to assess SRC performance. §.§ Benchmarks The SRC has been tested on the well-known MNIST dataset <cit.>, as well as two variants. The Fashion-MNIST dataset <cit.> contains images of fashion products instead of handwritten digits. It is known to be more difficult than the original MNIST. The second variant is the Neuromorphic MNIST (N-MNIST) <cit.> which, as its name suggests, is a neuromorphic version of MNIST where the handwritten digits have been recorded by an event-based camera. The MNIST and Fashion-MNIST datasets are not made to be used with spike-based networks, therefore their images must first be encoded into spike trains. To do so, a rate-based coding and a latency-based coding were used in the experiments. The first one creates one spike train per pixel, where the number of spikes per time period is proportional to the value of the pixel. More precisely, the pixel is converted into a Poisson spike train using its value as the mean of a binomial distribution.
To avoid having too many spikes, we have scaled the pixel values by a factor (the gain) of 0.25. Therefore, a white pixel (value of 1) will spike with a probability of 25% at each timestep, while a black one (value of 0) will never spike. The latency-based coding is much sparser, as each pixel will spike at most once. In this case, the information is contained in the time at which the spike occurs. The idea is that brighter pixels will spike sooner than darker ones. The spike time t_spk of a pixel is defined as the duration needed by the potential of a (linearized) RC circuit to reach a threshold V_th if this circuit is fed by a current I equivalent to the pixel value: t_spk = min(-τ (I - 1), V_th) where τ is the time constant of the RC circuit. In our experiments, we have used τ = 10 and V_th = 0.01. The spike times are then normalized to span the whole sequence length, and the spikes located at the last timestep (i.e. the spikes whose t equals V_th) are removed. The encodings were performed using the snnTorch library <cit.>. All the experiments were made using spike trains of length 200. Therefore, the MNIST (or Fashion-MNIST) inputs of dimension (1, 28, 28) are converted to tensors of size (200, 1, 28, 28). On the other hand, N-MNIST already has event-based inputs. Indeed, each sample contains the data created by an event-based camera. Therefore, these data just need to be converted to tensors of spikes. An event-based camera pixel outputs an event each time its brightness changes. There are therefore two types of events: the ones issued when the brightness increases and the ones issued when it decreases. An N-MNIST sample is a list of such events, each containing a timestamp, the coordinates of the pixel that emitted it, and its type. The Tonic library <cit.> was used to load the N-MNIST dataset and convert its samples into tensors of size (200, 2, 34, 34). The first dimension is the time, the second is related to the type of the event and the last two are the x and y spatial coordinates. §.§ Readout layer In order to extract the predictions from the outputs of an SRC network, the final SRC layer is connected with predefined and frozen weights to a readout layer of leaky integrators, with one integrator per label and a leakage factor of 0.99. Each integrator is excited (positive weight) by a small group of SRC neurons and is inhibited (negative weight) by the others. In our experiments, this final SRC layer contains 100 neurons. Each integrator is connected to all neurons: 10 of these connections have a weight of 10, while the others have a weight of -1. The prediction of the model corresponds to the integrator with the highest value at the final timestep. §.§ Loss function The networks were trained using the cross-entropy loss, which is usually used in classification tasks. This function takes as inputs the values x of the leaky integrators at the final timestep and the target class y. The loss is then computed (for a single sample) as: l(x, y) = -log(exp(x_y) / ∑_c=1^C exp(x_c)) where C is the number of classes and x_c refers to the element of x associated with the class c. This function basically applies the Softmax function to x and then computes the negative log-likelihood. For a whole batch, we simply take the mean of the l's. §.§ Learning With the loss function defined, it is now possible to train networks of SRCs using the usual automatic differentiation of PyTorch. Experiments showed that bypassing the ReLU during backpropagation significantly speeds up learning.
As explained in <ref>, this ReLU is used to isolate the spikes (high variations of h) from the small fluctuations. Considering the backward pass, this ReLU blocks the gradients when no spike is currently occurring, i.e. h[t] < 0. We therefore tested letting these gradients pass even when no spike is occurring. This is reminiscent of the surrogate gradient optimization <cit.> used to train LIF neurons, where, in our case, the activation function is a ReLU, while the backward pass assumes it was a linear activation: s_out[t] = ReLU(h[t]), ∂s_out[t]/∂h[t] = 1, ∀ h[t]. <ref> shows the evolution of the accuracy and cross-entropy of two SRC networks composed of 3 layers, one trained with the surrogate gradient, the other without. For each network we have trained 5 models. Except for the usage of the surrogate gradient, all the other parameters are the same. It is clear that the surrogate gradient speeds up the learning, and it will therefore be used in all our experiments. §.§ Results All experiments have been performed using PyTorch on GPUs, without any modification of the automatic differentiation and backpropagation algorithm, except for the ReLU that is bypassed during backward passes. All trainings lasted 30 epochs. We have used the Adam optimizer with an initial learning rate of 0.005 that decays exponentially with a factor of 0.97. For each set of parameters, 5 models were trained. §.§.§ Shallow networks As a first experiment, we have tested several shallow networks with either 1, 2, or 3 SRC layers. The final layer always contains 100 neurons connected to the readout layer, as described in <ref>. The size of the hidden layers, if the model has any, was fixed to 512 neurons. The leakage factor of the SRC integrators was set to 0.9, except in the experiments where the latency-based coding was used, where it was set to 0.99 to deal with the high sparsity of the inputs. The leakage factor of the readout integrators was fixed to 0.99. Neuron biases b_h are initialized to 6 and the Xavier uniform initialization is used for the synaptic weights W_s. <ref> shows the different testing accuracies achieved by these networks. We can observe that SRC networks were able to learn and achieved performances comparable to other non-convolutional SNNs, despite only being trained for 30 epochs. We also observe that multi-layer networks perform better than single-layer ones. As previously mentioned, another important aspect of such networks is the neurons' activity. Using the measure defined in <ref>, the mean activity of the neurons has been computed and is shown in <ref>. The mean activity stays quite low, which is good. It can also be observed that when the encoding is not sparse (rate coding), the shallower the network, the lower the activity, while it is the opposite for the sparse encoding (latency coding). §.§.§ Training deeper neural networks Shallow networks of SRC neurons have successfully been trained. However, one of the breakthroughs in deep learning was the ability to train deep neural networks. Training deep SNNs is known to be difficult. We have therefore tested several networks with different numbers of hidden layers to see if SRCs still manage to learn when the network becomes deeper. As before, all trainings lasted 30 epochs. These were performed on the MNIST dataset with the rate-based coding. All hidden layers consist of 512 neurons, while the final SRC layer still contains 100 neurons. <ref> shows the results of this experiment. All networks manage to learn and achieve good performances after 30 epochs.
However, the higher the number of hidden layers, the slower the training. This explains why the models with a high number of hidden layers do not perform as well as shallow networks at the end of the 30 epochs. Nevertheless, the goal of this experiment was not to assess the performance but rather the ability of deep networks to learn. Furthermore, the top-right graph shows the duration of each epoch for each number of hidden layers. It obviously increases with the network depth, but the training duration stays quite small even for a large number of hidden layers. For instance, the training of the networks with 10 hidden layers lasted about one day. § CONCLUSION In this paper, we have introduced a new type of artificial spiking neuron. Instead of deriving this neuron from existing spiking models, as is classically done, we have started from a widely used RNN cell. This new spiking neuron, called the Spiking Recurrent Cell (SRC), can be expressed as a usual recurrent cell. Its major advantage is the differentiability of its equations. This property makes it possible to directly apply the usual backpropagation algorithm to train SRCs. Spiking neural networks made of SRCs have been tested on the MNIST benchmark as well as two variants, the Fashion MNIST and the Neuromorphic MNIST. These networks have achieved results which are comparable to the ones obtained with other non-convolutional SNNs. Also, multi-layer networks have been shown to be able to learn. This proof of concept shows promising results and paves the way for new experiments. For instance, trying a convolutional version of the SRC on more complex image classification tasks would be interesting. Also, adding feedback connections could increase the computational power of the SRC, as it is, up to now, only a feedforward SNN. Improving the initialization of the synaptic weights should also be considered. Finally, the neuron is currently only able to modify the synaptic weights and the biases via backpropagation. It is possible to add new learnable parameters to the SRC equations in order to let the neuron control more aspects of its dynamics, such as the firing rate or the firing pattern. Florent De Geeter gratefully acknowledges the financial support of the Walloon Region for Grant No. 2010235 – ARIAC by DW4AI.
http://arxiv.org/abs/2306.10003v1
20230616175616
C2F2NeUS: Cascade Cost Frustum Fusion for High Fidelity and Generalizable Neural Surface Reconstruction
[ "Luoyuan Xu", "Tao Guan", "Yuesong Wang", "Wenkai Liu", "Zhaojie Zeng", "Junle Wang", "Wei Yang" ]
cs.CV
[ "cs.CV" ]
C2F2NeUS: Cascade Cost Frustum Fusion for High Fidelity and Generalizable Neural Surface Reconstruction Luoyuan Xu^1, Tao Guan^1, Yuesong Wang^1,*, Wenkai Liu^1, Zhaojie Zeng^1, Junle Wang^2, Wei Yang^1 ^1 Huazhong University of Science and Technology, ^2 Tencent {xu_luoyuan, qd_gt, yuesongwang, wenkai_liu, zhaojiezeng, weiyangcs}@hust.edu.cn [email protected] Submitted 9 June 2023. There is an emerging effort to combine the two popular technical paths, i.e., the multi-view stereo (MVS) and neural implicit surface (NIS), in scene reconstruction from sparse views. In this paper, we introduce a novel integration scheme that combines the multi-view stereo with neural signed distance function representations, which potentially overcomes the limitations of both methods. MVS uses per-view depth estimation and cross-view fusion to generate accurate surfaces, while NIS relies on a common coordinate volume. Based on this, we propose to construct per-view cost frustums for finer geometry estimation, and then fuse cross-view frustums and estimate the implicit signed distance functions to tackle noise and hole issues. We further apply a cascade frustum fusion strategy to effectively capture global-local information and structural consistency. Finally, we apply cascade sampling and a pseudo-geometric loss to foster stronger integration between the two architectures. Extensive experiments demonstrate that our method reconstructs robust surfaces and outperforms existing state-of-the-art methods. § INTRODUCTION Reconstructing 3D structures from a set of images is a fundamental task in computer vision, with widespread applications in fields such as autonomous vehicles, architectural preservation, virtual/augmented reality, and digital twins. Multi-view stereo (MVS) is a widely-used technique for addressing this task, exemplified by MVSNet <cit.> and its successors <cit.>. These methods construct 3D cost volumes based on the camera frustum, rather than regular euclidean space, to achieve precise depth map estimation. However, these methods typically require post-processing steps, such as depth map filtering, fusion, and mesh reconstruction, to reconstruct the 3D surface of the scene, and cannot handle noise, textureless regions, and holes well. The implicit scene representation approaches, e.g., Neural Radiance Fields (NeRF) <cit.> and its peer Neural Signed Distance Function <cit.>, achieve remarkable results in view synthesis and scene reconstruction. The implicit surface reconstruction approaches typically employ Multi-layer Perceptrons (MLPs) to implicitly fit a volume field. We can then extract scene geometry and render views from the implicit volume field. These approaches usually require a large number of images from different viewpoints and adopt a per-scene optimization strategy, which means they are not generalizable to unknown scenes. There is an emerging effort to merge the two technical paths <cit.> and achieve good performances. MVSNeRF <cit.> combines NeRF <cit.> with MVSNet <cit.> using a frustum cost volume for generalizable view synthesis. RC-MVSNet <cit.> utilizes NeRF's neural volume rendering to handle view-dependent effects and occlusions, which leads to improved accuracy in unsupervised depth estimation.
The most related to our approach is the SparseNeUS <cit.> for generalizable surface reconstruction method for sparse views. It builds a regular euclidean volume (i.e., cube) to encode geometric information by aggregating 2D feature maps of multiple images. The features sampled from it and the corresponding positions are used to estimate the signed distance function (SDF). However, a regular volume doesn't fit a camera's view naturally, which can be better modeled as view frustum. More specifically, as illustrated in Fig. <ref>, a volume with a higher resolution costs more memory and collect redundant image features, while a coarser volume causes quality degradation. Instead, we propose to build the cost frustum for each view and this strategy has been proven to be effective on MVSNet <cit.> and its successors. In this paper, we propose a novel integration scheme that combines MVS with neural implicit surface reconstruction. To encode the geometric information of the scene, we first construct a volume on the camera frustum and then convert it into a cascade geometric frustum. As shown in Fig. <ref>, to fit each camera's view well, we build a cascade frustum for every view and then fuse them using a proposed cross view and scale fusion strategy that effectively captures global-local information and structural consistency. By combining the 3D position, fused feature, and view direction, we estimate the SDF and render colors using volume rendering <cit.>. Moreover, we utilize the intermediate information output by MVS part to apply cascade sampling and a pseudo-geometric loss, which further improves the quality of the reconstructed surface. Our experiments on the DTU <cit.> and BlendedMVS <cit.> datasets demonstrate the effectiveness and generalization ability of our proposed method, surpassing existing state-of-the-art generalization surface reconstruction techniques. Our approach makes the following contributions: * We introduce a novel exploration approach that integrates MVS and implicit surface reconstruction architectures for end-to-end generalizable surface reconstruction from sparse views. * We propose a cross view and scale fusion strategy to effectively fuse features from multiple views and scales. * We further utilize information from the MVS part to apply cascade sampling and a pseudo-geometric loss to the neural surface part, promoting better integration between the two architectures. § RELATED WORK Neural Surface Reconstruction Neural implicit representations enable the representation of 3D geometries as continuous functions that are computable at arbitrary spatial locations. Due to the ability to represent complex and detailed shapes in a compact and efficient manner, these representations show significant potential in tasks such as 3D reconstruction <cit.>, shape representation <cit.>, and novel view synthesis <cit.>. To avoid relying on ground-truth 3D geometric information, many of these methods employ 2D images as supervision through classical rendering techniques, such as surface rendering and volume rendering. While some methods <cit.> reconstruct the surface and render 2D images using surface rendering, they often require accurate object masks, which can be challenging to obtain in practical scenarios. As NeRF <cit.> successfully integrates implicit neural functions and volume rendering, and generates photo-realistic novel views, some methods <cit.> incorporate SDF into neural volume rendering to achieve surface reconstruction without additional masks. 
Despite these advancements, further improvement in surface quality is achieved by introducing additional geometric priors <cit.>. However, such methods require dense images and are not easily generalizable to unknown scenes, which limits their scalability. While SparseNeUS <cit.> provides a preliminary solution by encoding geometric information using a regular euclidean volume, it still is challenging to achieve high-quality reconstructions due to the regular volume doesn't fit a camera's view naturally. Multi-view Stereo With the rapid advancements in deep learning techniques, MVS methods based on depth map fusion have shown remarkable performance. The pioneering MVSNet <cit.> architecture constructs a 3D cost volume by leveraging differentiable homography warping operations, and generates the depth map through cost volume regularization. The key to success lies in its utilization of camera frustums instead of regular euclidean spaces for constructing 3D cost volumes. Subsequent works <cit.> progressively optimize the depth map by refining the camera frustum in a coarse-to-fine manner, achieving impressive performances on various benchmarks. However, these methods require a series of post-processing operations, such as depth map filtering, depth map fusion, and mesh reconstruction, to reconstruct the 3D structure of the scene, and can not well handle noises, textureless regions, and holes. The Integration of MVS and Neural Implicit Scene Representation The integration of MVS and neural implicit scene representation generates significant interest among researchers, leading to several recent explorations <cit.>. MVSNeRF <cit.> constructs a cost volume to enable geometry-aware scene reasoning. It then uses volume rendering <cit.> in combination with position, view direction, and volume features to perform neural radiation field reconstruction and achieve generalizable view synthesis. RC-MVSNet <cit.> add an independent cost volume for volume rendering, allowing the network to learn how to handle view-dependent effects and occlusions, and improve the quality of depth maps. MVSDF <cit.> leverages the geometry and feature consistency of Vis-MVSNet<cit.> to optimize the SDF, resulting in more robust geometry estimation. However, MVSNeRF <cit.> is difficult to generate high-quality surfaces, RC-MVSNet <cit.> requires cumbersome post-processing steps to obtain surfaces, and MVSDF <cit.> cannot generalize to unknown scenes and requires dense images. Our method, on the other hand, differs significantly as it focuses on achieving high fidelity and generalizable surface reconstruction for sparse views in an end-to-end manner. § METHOD In this section, we explain the detailed structure of our proposed C2F2NeUS, which is a novel integration scheme that better combines MVS with neural implicit surface representation. With this integration, C2F2NeUS achieves high fidelity and generalizable surface reconstruction for sparse views in an end-to-end manner. As illustrated in Fig. <ref>, by fusing the view-dependent frustums in the MVS part, we obtain more accurate geometric features which are sent to the neural implicit surface part to predict SDF and extract surfaces. Specifically, we first construct a view-dependent cascade geometric frustum for each view to encode geometric information of the scene and fully exploit the advantage of MVS(Sec. <ref>). For a given set of 3D coordinates, we then sample and fuse the feature from these frustums by using the proposed cross view and scale fusion strategy (Sec. 
<ref>). This strategy can effectively capture global-local information and structural consistency. Next, we introduce how to predict SDF and render color from the fused feature (Sec. <ref>). And the SDF prediction network generates a SDF field which is used for surface reconstruction, this representation leverages the smooth and complete geometry of SDF. To train the SDF prediction network in an unsupervised manner, we render color via volume rendering. Finally, we introduce the training loss of our end-to-end framework (Sec. <ref>). §.§ Cascade Geometric Frustum Generation To make the implicit neural surface reconstruction generalizable and capture the scene information more accurately, we encode the global geometric information of the scene by building a geometric frustum. Unlike SparseNeUS <cit.>, which utilizes a regular euclidean volume, we construct the volume from the perspective MVS. In MVS, the reference view is most important and other source views contribute to the depth estimation for the reference view. Since a single view-dependent frustum cannot describe the complete scene due to occlusion, we create a frustum for each image and treat the image as the reference view and other images as source views. Besides, we estimate the corresponding depth maps from each cost frustum to construct the cascade geometric frustums (only the finer level frustum only stores its difference from the coarser frustum). To accomplish this, we first extract feature maps {F_i}_i=0^N-1 using a 2D feature extraction network for N images {I_i}_i=0^N-1 of the scene. With the corresponding camera parameters {K_i, R_i, T_i}_i=0^N-1 of each image, we then build a 3D cost frustum C ∈ℝ^c × d × h × w for the reference camera via differential homography warping operations, where c, d, h, w are dimensions of feature, number of depth samples, height, width respectively. In our implementation, we construct a 3D cost frustum {C_i}_i=0^N-1 for each image as the reference view and use the remaining images as source views. The 3D cost frustums {C_i}_i=0^N-1 are then regularized by 3D CNN Ψ to obtain the geometric frustums {G_i}_i=0^N-1, G_i∈ℝ^c × d × h × w. The intermediate volumes V_i∈ℝ^c × d × h × w in the regularization are used to estimate the probability volumes P_i∈ℝ^1 × d × h × w and the depth maps D_i∈ℝ^1 × 1 × h × w of the current reference views I_i. The generation of the geometric frustums G_i is defined as: V_i, P_i, D_i=Ψ_1(C_i), G_i=Ψ_2(V_i). The depth maps D_i are used to redefine the depth hypothesis and construct the cascade 3D cost frustums, ultimately constructing cascade geometric frustums {G_i^j}_i=0,...,N-1^j=0,...,L-1, where L is the cascade level number. §.§ Cross View and Scale Fusion Strategy The cost frustum of each view and scale has different importance. Intuitively, regions with relatively smaller angles w.r.t. the viewpoint in finer frustums are more crucial. A fusion method of simply adding the features sampled from all frustums would be inappropriate, as it may result in performance degradation in our practice. Consequentially, we propose a cross view and scale fusion strategy that treats each view and scale differently according to their importance. This fusion strategy effectively captures the spatial and structural information of the scene, and produces a more precise surface. We introduce an adaptive weight A_i^j∈ℝ^1 × d × h × w for each geometric frustum G_i^j, which is normalized using the sigmoid function. Therefore, we can rewrite Equ. 
<ref> as V_i, P_i, D_i=Ψ_1(C_i), G_i, A_i=Ψ_2(V_i). To integrate both the global information from coarser frustums and the local information of finer frustums, we concatenate features at different scales and sum the features at different viewpoints according to their weights. Specifically, we sample the corresponding features g_i^j = G_i^j(p) ∈ℝ^1 × c and weights a_i^j = A_i^j(p)∈ℝ^1 × 1 of a given 3D position p ∈ℝ^1 × 3 from all frustums using bilinear interpolation. Then, we concatenate features and sum the weights of different scales j=0,...L-1 for each viewpoint I_i, and obtain new features g_i^L = cat({g_i^j}), and new weights a_i^L(p) = sum({a_i^j}), respectively, where g_i^L(p) ∈ℝ^ 1 × Lc and a_i^L(p) ∈ℝ^1 × 1. Finally, we fuse the concatenated features g_i^L from different viewpoints {I_i}_i=0^N-1 based on their respective weights a_i^L. The final geometric feature of the given 3D position p is defined as f_geo = Σ_i=0^N-1 a_i^L· g_i^L/Σ_i=0^N-1 a_i^L, where f_geo∈ℝ^1 × Lc. §.§ SDF Prediction and Volume Rendering We would like to exploit the advantage for neural implicit surface reconstruction, i.e., the surface generated extracted from a neural SDF network is usually very smooth and consistent. SDF Prediction. Given an SDF prediction network Φ consisting of MLP and an arbitrary 3D position p with its corresponding geometric feature f_geo, we first encode the position p using position encoding γ (·). We then use the encoded position and geometric feature f_geo as input to the SDF prediction network Φ to predict the SDF s(p) of 3D position p. Our SDF prediction operation is defined as: s(p)=Φ ( γ (p), f_geo) . Blending Weights. Similar to IBRNet <cit.>, we use blending weights to estimate color of a 3D position p and view direction d. We extract 2D color feature maps from N input images via a new feature extract network. For a given 3D position p with its corresponding geometric feature f_geo and view direction d, we project p onto N input views and extract corresponding color features f_i^col from color feature maps using bilinear interpolation. We then compute the mean u and variance v of the sampled color features f_i^col for different views to capture cross-image information and concatenate each feature f_i^col with u and v. A small shared MLP Γ is used to process the concatenated features and generate new features f_i^col2 that contain color information. We also compute the direction difference Δ d=d-d_i between the view direction d and each input image's viewpoint d_i. The color features f_i^col2, direction differences Δ d, and geometric features f_geo are fed into a new MLP network Γ_col for generating blending weights w_i(p). w_i(p) = Γ_col(Γ(f_i^col, u, v), Δ d, f_geo). Finally, we use the softmax operator to normalize blending weights {w_i(p)}_i=0^N-1. Volume Rendering. As there is no ground-truth 3D geometry, to supervise the SDF prediction network, we render the color of the query ray and calculate its consistency with the ground-truth color. Specifically, we perform the ray point sampling, where each sampled position p and viewpoint d are used to predict the corresponding SDF s(p) and blending weights w_i(p). We then project the position p onto N input images to extract their respective colors c_i(p), and compute the color c(p) of position p as a weighted sum of the sampled color c_i(p) and blending weights w_i(p). Next, we apply volume rendering as in NeUS <cit.> to render the color of the ray by aggregating the SDF and color of each position p along the ray. 
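To make this aggregation step concrete, a rough sketch of NeUS-style rendering along a single ray is given below (this is a generic sketch, not necessarily the authors' exact implementation; sdf and color hold the per-sample predictions along the ray, ordered from near to far, and s is a sharpness parameter of the sigmoid):

import torch

def render_ray(sdf, color, s=64.0):
    # sdf: (N,) signed distances of the samples, color: (N, 3) blended sample colors
    cdf = torch.sigmoid(s * sdf)                                  # Phi_s applied to the SDF
    alpha = ((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-5)).clamp(min=0.0)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-7]), dim=0)[:-1]
    weights = alpha * trans                                       # volume rendering weights
    rgb = (weights[:, None] * color[:-1]).sum(dim=0)              # rendered ray color
    return rgb, weights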
The rendered color is compared to the ground-truth color for calculating the consistency loss. Cascade Sampling and Pseudo-depth Generation. To further leverage the benefits of MVS and enhance the quality of the extracted surfaces, we incorporate cascade sampling on the frustum and a pseudo-geometric loss, which enforce a stronger integration between MVS and neural implicit surface. In this work, we apply an adaptive sampling strategy using the intermediate probability volume P_i of the cascade frustum generation network. Specifically, we take the query image as the reference view, and other input images as the source views, and send them to the cascade frustum generation network to obtain the depth maps D_que^j and probability volumes P_que^j at different scales. We use the probability volume P_que^j=0 of the coarsest layer for cascade sampling. We then compute the mean α and standard β deviation of the probability volume P_que^j=0 along the depth channel. The adaptive sample ranges [t_n, t_f] are defined as follows: [t_n, t_f] = [α-β, α+β]. With the high-resolution depth maps D_que^j=L-1, D_i^j=L-1 of the query image and other input images, we compute the geometric consistency to obtain the respective effective masks. The masked depth maps can be considerd pseudo-depth label, which use to supervise the SDF prediction network. Specifically, the masked depth map of the query image is used to compute a pseudo-depth consistency loss. Further, we fuse the masked depth maps of different images into point clouds, which are used to directly supervise the SDF prediction network. §.§ Loss Function Ground-truth 3D geometric labels are difficult to obtain, to address this issue, our framework employs an unsupervised learning approach. Specifically, we introduce a training loss ℒ_total as a combination of two unsupervised losses for training the depth map and SDF, respectively. ℒ_total = ℒ_dep + ℒ_sdf. Following prior research <cit.>, our framework utilizes several losses to supervise the intermediate depth map D_i^j of cascade geometric frustum network. These losses include image pixel loss ℒ_ip, image gradient loss ℒ_ig, structure similarity loss ℒ_ss, census transform loss ℒ_ce, depth smoothness loss ℒ_ds, and vertex-face normal consensus loss ℒ_vfnc <cit.>. The overall loss ℒ_dep for training the depth map is defined as the sum of these losses. ℒ_dep = ℒ_ip + ℒ_ig + ℒ_ss + ℒ_ce +ℒ_ds + ℒ_vfnc. where the corresponding weight for each term is not included for clear presentation. We also supervise the SDF using a loss that incorporates ground-truth colors and a geometry-based loss derived from pseudo-depth, which can provide reliable guidance without relying on expensive ground-truth geometry labels. The overall loss ℒ_sdf is defined as follows: ℒ_sdf = ℒ_cc + ℒ_eik +ℒ_spa + ℒ_pdc + ℒ_pgs. The color consistency loss ℒ_cc is an L1 distance between the rendered color and the ground-truth color. ℒ_cc is defined as: ℒ_cc = 1/X∑_x=0^X-1|c-ĉ|_1, where c, ĉ is the rendered color and ground-truth color, and X is the number of pixels sampled on the query image. The Eikonal term ℒ_eik <cit.> is used to regularize SDF value, which is defined as: ℒ_eik = 1/XY∑_x,y (|| ∇Φ(p_x,y) ||_2-1)^2, where Y is the sample number along the ray. p_x,y is the sampled 3D position. 
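For concreteness, the Eikonal term can be sketched in PyTorch as follows (a generic sketch; sdf_net stands for the SDF prediction network Φ with its geometric feature argument omitted for brevity, and points for a batch of sampled 3D positions):

import torch

def eikonal_loss(sdf_net, points):
    # points: (M, 3) sampled 3D positions
    points = points.requires_grad_(True)
    sdf = sdf_net(points)                                  # (M, 1) predicted signed distances
    grad = torch.autograd.grad(outputs=sdf, inputs=points,
                               grad_outputs=torch.ones_like(sdf),
                               create_graph=True)[0]       # (M, 3) spatial gradient of the SDF
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()         # penalize deviation from unit norm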
The sparseness regularization term ℒ_spa <cit.> is used to penalize the uncontrollable free surfaces, which is defined as: ℒ_spa = 1/||Q||∑_q ∈ Qexp (-ϵ· |s(q)|), where Q is a set of random 3D points on the 3D scene, |s(q)| is the absolute value of the SDF value of q, ϵ is a hyperparameter that is set to 100. The pseudo-depth consistency loss ℒ_pdc is an L1 distance between the rendered depth and the pseudo-depth label. ℒ_pdc is defined as: ℒ_pdc = 1/X∑_x=0^X-1|d-d̂|_1, where d, d̂ is the rendered depth and ground-truth depth. ℒ_pgs is the pseudo-geometry SDF loss. The SDF values of the pseudo point clouds are zeroes. ℒ_pgs is defined as: ℒ_pgs = 1/||Q_2||∑_q_2∈ Q_2 |s(q_2)|, where Q_2 is a set of 3D points randomly selected from the pseudo point clouds. § EXPERIMENTS In this section, we demonstrate the effectiveness of our proposed method. Firstly, we provide a detailed account of our experimental settings, which includes implementation details, datasets, and baselines. Secondly, we present quantitative and qualitative comparisons on two widely used datasets, namely DTU <cit.> and BlendedMVS <cit.>. Finally, we conduct detailed ablation studies to analyze the contribution of different components of our proposed method. §.§ Experimental Settings Implementation Details. We implement our method in PyTorch. In the cascade geometric frustum generation network, we adopt the same cascade scheme with CasMVSNet <cit.>, but make the following changes: we use the 2D feature extraction network consisting of 9 convolutional layers, share the 3D CNN Ψ across scales, and set the dimension of image and volume features to c=8. During training, we take N=5 images with a resolution of 640 × 512 as input and use an additional image as the query image to supervise the SDF prediction network Φ. The cascade stage number is set to L=3. We train our end-to-end framework on one A100 GPU with a batch size of 2 for 300k iterations. We set the same learning rate and cosine decay schedule as NeUS <cit.>. The ray number is set to X=512, and the sample number on each ray is Y=N_coarse+N_fine, where N_coarse=64 and N_fine=64. The weight of ℒ_ds is 0.0067 and of the remaining term is all 1 in Equ. <ref>. The weight of each term is 1,0.1,0.02,0.05,1 in Equ. <ref>. The losses ℒ_vfnc, ℒ_pdc, and ℒ_pgs are added after 10k iterations. Datasets. The DTU dataset <cit.> is a well-known indoor multi-view stereo dataset, consisting of 124 scenes captured under 7 distinct lighting conditions. Consistent with prior research <cit.>, we employ 75 scenes for training and 15 non-overlapping scenes for testing. Each test scene contains two sets of three images offered by SparseNeUS <cit.>. We evaluate our method using three views with a resolution of 1600 × 1152. To ensure fairness in evaluation, we adopt the foreground masks provided by IDR <cit.> to assess the performance of our approach on the test set, as in previous studies <cit.>. To examine the generalization ability of our proposed framework, we conduct a qualitative comparison of our method on the BlendedMVS dataset <cit.> without any fine-tuning. Baselines. To evaluate the effectiveness of the proposed method, we make a detailed comparison with three types of methods: 1) traditional methods, represented by COLMAP <cit.>; 2) generalizable neural implicit reconstruction methods, such as SparseNeUS <cit.>, MVSNeRF <cit.>, VolRecon <cit.>; 3) per-scene optimization-based methods, such as SparseNeUS-ft <cit.>, NeUS <cit.>. 
§.§ Comparisons on DTU We perform surface reconstruction for sparse views (only 3 views) on the DTU dataset <cit.> and evaluate the predicted surface against the ground-truth point clouds using the chamfer distance metric. Tab. <ref> and Fig. <ref> present a summary of the comparison between our method and other existing methods, which demonstrate that our method achieve better performance. It is important to note that our method is solely trained on the training set without any fine-tuning on the test set to assess its generalization capability. Our method surpasses the generalizable version of SparseNeUS <cit.> by 32% and significantly outperforms its fine-tuning variant. Furthermore, our method exhibits superior performance compared to VolRecon <cit.>, which employs ground-truth depth maps for supervision. §.§ Generalization on BlendedMVS To showcase the generalization capabilities of our proposed method, we conduct additional tests on the BlendedMVS dataset <cit.> without any fine-tuning. The qualitative comparison between our method and other methods is presented in Fig. <ref>. The results indicate that our method exhibits a robust generalization ability and produces a more refined surface when compared to other generalizable neural implicit reconstruction methods. §.§ Comparison with Unsupervised MVS To compare our method with MVS, we retrain CasMVSNet <cit.> with the unsupervised loss in Equ. <ref>. We estimate the depth maps of three views, filter the depth maps using geometric consistency and the masks provided by IDR <cit.>, and fuse them into a point clouds. The qualitative comparison is shown in Fig. <ref>, which demonstrates that our method is more robust and reconstructs a more complete surface. §.§ Ablation Studies Effect of Camera Frustum and Regular Euclidean Space. Regular volume doesn't simultaneously fit all cameras well, which leads to blurred features, particularly in sparser scenes. Instead, camera frustum volume can better model. We present a performance comparison between the regular euclidean volume and the camera frustum volume without cascade in Table <ref>. The comparison results reveal that our method achieves better performance with similar GPU memory. Effect of Different Components. In this experiment, we present the results of different components to demonstrate their effectiveness. Stage1 indicates one-stage cascade, while Stage3 means three-stage cascades. GeoLoss is the sum of ℒ_pdc and ℒ_pgs in Equ. <ref>. As shown in Tab. <ref>, the reconstructed surface quality significantly improves as we increase the cascade stage numbers. Moreover, The introduction of GeoLoss further improves the surface quality. Effect of Cross View and Scale Fusion Strategy. To demonstrate the effectiveness of the proposed cross view and scale fusion strategy, we remove this strategy on Stage3 and adopt simple addition. As shown in Tab. <ref> and Fig. <ref>, using only simple addition will lead to severe performance degradation and extract noisy surfaces. On the other hand, with our fusion strategy, the performance is significantly improved, and a finer surface is extracted. This demonstrates the effectiveness of our proposed fusion strategy in capturing global-local information and structural consistency. § CONCLUSIONS We propose a novel integration scheme, C2F2NeUS, for exploiting both the strengths of MVS and neural implicit surface reconstruction. 
Previous methods rely on a regular euclidean volume for cross-view fusion, which doesn't simultaneously fit all cameras well and may lead to blurred features. We instead construct a cascade geometric frustum for each view and conduct an effective fusion. Our method achieves state-of-the-art reconstruction quality for sparse inputs, which demonstrates its effectiveness. However, our method still suffers from several limitations: one is that the frustums can overlap with each other in 3D space, resulting in redundant computations; another is that our approach constructs a cost frustum for each view, making it infeasible for dense views. In the future, we plan to optimize the frustum space and reduce computation in overlapping areas.
http://arxiv.org/abs/2306.05533v1
20230608200954
Chiral-even axial twist-3 GPDs of the proton from lattice QCD
[ "Shohini Bhattacharya", "Krzysztof Cichy", "Martha Constantinou", "Jack Dodson", "Andreas Metz", "Aurora Scapellato", "Fernanda Steffens" ]
hep-lat
[ "hep-lat", "hep-ex", "hep-ph", "nucl-th" ]
http://arxiv.org/abs/2306.12523v1
20230621191742
Quantum Chiral Superfields
[ "Rita Fioresi", "María A. Lledó", "Junaid Razzaq" ]
math.QA
[ "math.QA", "16T05" ]
http://arxiv.org/abs/2306.03603v1
20230606114631
Trial matching: capturing variability with data-constrained spiking neural networks
[ "Christos Sourmpis", "Carl Petersen", "Wulfram Gerstner", "Guillaume Bellec" ]
q-bio.NC
[ "q-bio.NC" ]
Trial matching: capturing variability with data-constrained spiking neural networks Christos Sourmpis, Carl Petersen, Wulfram Gerstner, Guillaume Bellec School of Computer and Communication Sciences and School of Life Sciences École Polytechnique Fédérale de Lausanne (EPFL) 1015 Switzerland July 31, 2023 Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a cortical sensory-motor pathway in a tactile detection task with a large recurrent spiking neural network (RSNN), fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty of matching the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse. § INTRODUCTION Over the past decades, there has been a remarkable advancement in neural recording technology. Today, we can simultaneously record hundreds, even thousands, of neurons with millisecond time precision. Coupled with behavior measurements, modern experiments enable us to better understand how brain activity and behavior are intertwined <cit.>. In these experiments, it is often observed that even well-trained animals respond to the same stimuli with considerable variability. For example, mice trained on a simple tactile detection task occasionally miss the water reward <cit.>, possibly because of satiation, lack of attention or neural noise. It is also clear that there is additional uncontrolled variability in the recorded neural activity <cit.>, induced for instance by a wide range of task-irrelevant movements. Our goal is to reconstruct a simulation of the sensory-motor circuitry driving the variability of neural activity and behavior. To understand the generated activity at a circuit level, we develop a generative model which is biologically interpretable: all the spikes are generated by a recurrent spiking neural network (RSNN) with hard biological constraints (i.e. the voltage and spiking dynamics are simulated with millisecond precision, neurons are either inhibitory or excitatory, and spike transmission delays take 2-4 ms). First contribution, we make a significant advance in the simulation methods for data-constrained RSNNs. While most prior works <cit.> were limited to single recording sessions, our model is constrained to spike recordings from 28 sessions covering six cortical areas. The resulting spike-based model enables a data-constrained simulation of a cortical sensory-motor pathway (from somatosensory to motor cortices responsible for the whisker, jaw and tongue movements).
As far as we know, our model is the first RSNN model constrained to multi-session recordings with automatic differentiation methods for spiking neural networks <cit.>. Second contribution, using this model we aim to pinpoint the circuitry that induces variability in behavior (asking for instance what circuit triggers a loss of attention). Towards this goal, we identify an unsolved problem: “how do we enforce the generation of a realistic distribution of neural activity and behavior?” To do this, the model is fitted jointly to the recordings of spiking activity and movements to generate a realistic trial-to-trial co-variability between them. Our technical innovation is to define a supervised learning loss function to match the recorded and generated variability. Concretely the trial matching loss function is the distance between modeled and recorded distributions of neural activity and movements. It relies on recent advances in the field of optimal transport <cit.> providing notions of distances between distributions. In our data-constrained RSNN, trial matching enables the recovery of the main modes of trial-to-trial variability which includes the neural activity related to instructed behavior (e.g. miss versus hit trials) and uninstructed behavior like spontaneous movements. Related work While there is a long tradition of data fitting using the leaky integrate and fire (LIF) model, spike response models <cit.> or generalized linear models (GLM) <cit.>, most of these models were used to simulate single neuron dynamics <cit.> or small networks with dozens of neurons recorded in the retina and other brain areas <cit.>. A major drawback of those fitting algorithms was the limitation to a single recording session. Beyond this, researchers have shown that FORCE methods <cit.> could be used to fit up to 13 sessions with a large RSNN <cit.>. But in contrast with back-propagation through time (BPTT) in RSNNs <cit.>, FORCE is tied to the theory of recursive least squares making it harder to combine with deep learning technology or arbitrary loss functions. We know only one other study where BPTT is used to constrain RSNN to spike-train recordings <cit.> but this study was limited to a single recording session. Regarding generative models capturing trial-to-trial variability in neural data, many methods rely on trial-specific latent variables <cit.>. This is often formalized by abstracting away the physical interpretation of these latent variables using deep neural networks (e.g. see LFADS <cit.> or spike-GAN <cit.>) but our goal is here to model the interpretable mechanisms that can generate the recorded data. There are hypothetical implementations of latent variables in RSNNs, most notoriously, latent variables can be represented as the activity of mesoscopic populations of neurons <cit.>, or linear combinations of the neurons' activity <cit.>. These two models assume respectively an implicit grouping of the neurons <cit.> or a low-rank connectivity matrix <cit.>. Here, we want to avoid making any structural hypothesis of this type a priori. We assume instead that the variability is sourced by unstructured noise (Gaussian current or Poisson inputs) and optimize the network parameters to transform it into a structured trial-to-trial variability (e.g. multi-modal distribution of hit versus miss trials). The optimization therefore decides what is the network mechanism that best explains the trial-to-trial variability observed in the data. 
This hypothesis-free approach is made possible by the trial matching method presented here. This method is complementary to previous optimization methods for generative models in neuroscience. Many studies targeted solely trial-averaged statistics and ignored single-trial activity, for instance methods using the FORCE algorithm <cit.>, RSNN methods using back-propagation through time <cit.> and multiple techniques using (non-interpretable) deep generative models <cit.>. There exist other objective functions which can constrain the trial-to-trial variability in the data, namely: the maximum likelihood principle <cit.> or spike-GANs <cit.>. We illustrate however in the discussion section why these two alternatives are not a straightforward replacement for the trial matching loss function with our interpretable RSNN generator. § LARGE DATA-CONSTRAINED RECURRENT SPIKING NEURAL NETWORK (RSNN) This paper aims to model the large-scale electrophysiology recordings from <cit.>, where they recorded 4415 units from 12 areas across 22 mice. All animals in this dataset were trained to perform the whisker tactile detection task described in Figure <ref>: in 50% of the trials (the GO trials), a whisker is deflected and after a 1 s delay period an auditory cue indicates water availability if the mouse licks, whereas in the other 50% of trials (the No-Go trials), there is no whisker deflection and licking after the auditory cue is not rewarded. Throughout the paper we attempt to create a data-constrained model of the six areas that we considered to play a major role in this behavioral task: the primary and secondary whisker somatosensory cortices (wS1, wS2), motor cortices (wM1, wM2), the primary tongue-jaw motor cortex (tjM1) and the anterior lateral motor cortex (ALM), also known as tjM2 (see Figure <ref>A and <ref>A). While we focus on this dataset, the method described below aims to be broadly applicable to most contemporary large-scale electrophysiological recordings. We built a spiking data-constrained model that simulates explicitly a cortical neural network at multiple scales. At the single-cell level, each neuron is either excitatory or inhibitory (the output weights have only positive or negative signs respectively), follows leaky-integrate and fire (LIF) dynamics, and transmits information in the form of spikes with synaptic delays ranging from 2 to 4 ms. At a cortical level, we model six brain areas of the sensory-motor pathway where each area consists of 250 recurrently connected neurons (200 excitatory and 50 inhibitory) as shown in Figure <ref>A, such that only excitatory neurons project to other areas. Since the jaw movement defines the behavioral output in this task, we also model how the tongue-jaw motor cortices (tjM1, ALM) drive the jaw movements. Mathematically, we model the spikes z_j,k^t of the neuron j at time t in the trial k as a binary number. The spiking dynamics are then driven by the integration of the somatic currents I_j,k^t into the membrane voltage v_j,k^t, by integrating LIF dynamics with a discrete time step δ_t=2 ms. The jaw movement y^t_k is simulated with a leaky integrator driven by the activity of tjM1 and ALM neurons, followed by an exponential non-linearity. 
This can be summarized with the following equations, where the trial index k is omitted for simplicity: v_j^t = α_j v_j^t-1 + (1-α_j) I_j^t - v_thr,j z_j^t-1 + ξ^t_j I_j^t = ∑_d,i W_ij^d z_i^t-d + ∑_d,i W_ij^in,d x_i^t-d ỹ^t = α_jaw ỹ^t-1 + (1-α_jaw) ∑_i W_i^jaw z_i^t y^t = exp(ỹ^t) + b where W_ij^d, W_ij^in,d, W_i^jaw, v_thr,j, and b are model parameters, and ỹ^t denotes the jaw trace before the exponential non-linearity. The membrane time constants, τ_m=30 ms for excitatory and τ_m=10 ms for inhibitory neurons, define α_j=exp(-δt/τ_m,j), and τ_jaw=50 ms similarly defines α_jaw, which controls the velocity of integration of the membrane voltage and the jaw movement. To implement a soft threshold crossing condition, the spikes inside the recurrent network are sampled with a Bernoulli distribution z_j^t ∼ ℬ(σ((v_j^t - v_thr,j)/v_0)), where v_0 is the temperature of the sigmoid (σ). The spike trains x_i^t model the thalamic inputs as simple Poisson neurons producing spikes randomly with a firing probability of 5 Hz and increasing their firing rate when a whisker stimulation is present (see appendix). The last noise source ξ_j^t is an instantaneous Gaussian noise of standard deviation β v_thr√(δ t), modeling random inputs from other areas (β is a model parameter that is kept constant over time). Session stitching An important aspect of our fitting method is to leverage a dataset of electrophysiological recordings with many sessions. To constrain the neurons in the model to the data, we uniquely assign each neuron in the model to a single neuron from the recordings, as illustrated in Figure <ref>A and <ref>A. Since our model has 1500 neurons, we therefore randomly select 1500 neurons from the recordings (250 in each area; we ignore the other recorded neurons so that each area has the same numbers of excitatory and inhibitory neurons). This bijective mapping between neurons in the data and the model is fixed throughout the analysis and defines the area and cell types of the neurons in the model. The area is inferred from the location of the corresponding neuron in the dataset and the cell type is inferred from the action potential waveform of this cell (for simplicity, fast-spiking neurons are considered to be inhibitory and regular-spiking neurons as excitatory). Given this assignment, we denote z_j^𝒟 as the spike train of neuron j in the dataset and z_j as the spike train of the corresponding neuron in the model; in general, a superscript 𝒟 always refers to the recorded data. A consequence is that two neurons i and j might be synaptically connected in the model although they correspond to neurons recorded in separate sessions. This choice is intended to model network sizes beyond what can be recorded during a single session. Our network is therefore a “collage” of multiple sessions stitched together, as illustrated in Figure <ref>A and <ref>A. This network is then constrained to the recorded data by optimizing the parameters to minimize the loss functions defined in the following section. Altogether, when modeling the dataset from Esmaeili and colleagues <cit.>, the network consists of 1500 neurons where each neuron is assigned to one neuron recorded in one of the 28 different recording sessions. Since multiple sessions typically come from different animals, we model a “template mouse brain” which is not meant to reflect subject-to-subject differences. § FITTING SINGLE-TRIAL VARIABILITY WITH THE TRIAL MATCHING LOSS FUNCTION We fit the network to the recordings with gradient descent and we rely on surrogate gradients to extend back-propagation to RSNNs <cit.>.
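A minimal sketch of this stochastic spike generation with a straight-through surrogate gradient is given below (the exact surrogate construction used for the model may differ; the temperature value shown is only illustrative):

import torch

def sample_spikes(v, v_thr, v0=0.2):
    # v: membrane voltages, v_thr: firing thresholds, v0: temperature of the sigmoid
    p = torch.sigmoid((v - v_thr) / v0)      # firing probability per neuron and timestep
    z = torch.bernoulli(p)                   # stochastic binary spikes
    # forward pass returns the sampled spikes, backward pass follows the smooth probability
    return z.detach() + p - p.detach()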
At each iteration until convergence, we simulate a batch of K=150 statistically independent trials. We measure trial-averaged and single-trial statistics of the simulated and recorded activity, calculate a loss function, and minimize it with respect to all the trainable parameters of the model via gradient descent and automatic differentiation. This protocol is sometimes referred to as a sample-and-measure method <cit.>, as opposed to the likelihood optimization in GLMs where the network trajectory is clamped to the recorded data during optimization <cit.>. The full optimization lasts approximately one to three days on an A100-SXM4-40GB GPU.

Trial-averaged loss We consider the trial-averaged activity over time of each neuron from every session, 𝒯_neuron, also referred to as the neuron peristimulus time histogram (PSTH). This is defined by

𝒯_neuron(z_j) = (1/K) ∑_k z_{j,k} ∗ f

where f is a rolling average filter with a window of 12 ms, and K is the number of trials in a batch of spike trains z. The statistics 𝒯_neuron(z_j^𝒟) are computed similarly on the K^𝒟 trials recorded during the session corresponding to neuron j. We denote by 𝒯'_neuron the statistics after normalizing each neuron's trial-averaged activity, and we define the trial-averaged loss function as follows:

ℒ_neuron = ∑_j || 𝒯'_neuron(z_j) - 𝒯'_neuron(z_j^𝒟) ||^2 .

It is expected from <cit.> that minimizing this loss function alone generates realistic trial-averaged statistics such as the average neuron firing rates.

Trial matching loss: fitting trial-to-trial variability Going beyond trial-averaged statistics, we now describe the trial matching loss function used to capture the main modes of trial-specific activity. From the previous neuroscience study <cit.>, it appears that population activity in well-chosen areas is characteristic of the trial-specific variability. For instance, intense jaw movements are preceded by increased activity in the tongue-jaw motor cortices, and hit trials are characterized by a secondary transient appearing in the sensory cortices a hundred milliseconds after a whisker stimulation. To define single-trial statistics which can capture these features, we denote the population-averaged firing rate of an area A as

𝒯_A(z_k) = (1/|A|) ∑_{j ∈ A} (z_{j,k} ∗ f)

where |A| is the number of neurons in area A; here the smoothing filter f has a window size of 48 ms, and the resulting signal is downsampled to avoid unnecessary redundancy. We write 𝒯'_A when each time bin is normalized to mean 0 and standard deviation 1 using the recorded trials, and we use 𝒯'_A as a feature vector to characterize the trial-to-trial variability in area A. To construct a single feature vector encapsulating the joint activity dynamics in all areas and the jaw movements in a session, we concatenate all these feature vectors together into 𝒯'_trial = (𝒯'_A1, 𝒯'_A2, y_k ∗ f), where A1 and A2 are the areas recorded in this session. The challenging part is now to define the distance between the recorded statistics 𝒯_trial(z^𝒟) and the generated ones 𝒯_trial(z). Common choices of distances such as the mean squared error are not appropriate for comparing distributions. This is because the order of trials in a batch of generated/recorded trials has no meaning a priori: there is no reason for the random noise of the first generated trial to correspond to the first recorded trial. Rather, we want to compare unordered sets of trials and penalize any generated trial that is far from every recorded trial.
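Before formalizing this distance, the trial-averaged part of the objective can be sketched in a few lines of NumPy. This is schematic: the 'same'-mode convolution, the 6-bin window (12 ms at 2-ms bins), and the normalization details are assumptions rather than the exact implementation.

```python
import numpy as np

def smooth(x, win):
    """Rolling average along the last (time) axis."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(lambda t: np.convolve(t, kernel, mode="same"), -1, x)

def psth(z, win=6):
    """Trial-averaged, smoothed activity; z has shape (K trials, N neurons, T bins)."""
    return smooth(z, win).mean(axis=0)                # (N, T)

def normalize(stat, eps=1e-6):
    mu = stat.mean(axis=-1, keepdims=True)
    sd = stat.std(axis=-1, keepdims=True) + eps
    return (stat - mu) / sd

def neuron_loss(z_model, z_data, win=6):
    """Squared distance between normalized model and recorded PSTHs."""
    t_model = normalize(psth(z_model, win))
    t_data = normalize(psth(z_data, win))
    return np.sum((t_model - t_data) ** 2)
```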
Formalizing this mathematically, we consider a distance between distributions inspired by the optimal transport literature. Since the plain mean-squared error cannot be used, we use the mean-squared error of the optimal assignment between pairs of recorded and generated trials: we randomly select K' = min(K, K^𝒟) generated and recorded trials (K and K^𝒟 are respectively the numbers of generated and recorded trials in one session), and this optimal assignment is formalized by the integer permutation π : {1, … K'} → {1, … K'}. Then, using the feature vector 𝒯_trial for any trial k, we define the hard trial matching loss function as follows:

ℒ_trial = min_π ∑_k || 𝒯'_trial(z_k) - 𝒯'_trial(z_{π(k)}^𝒟) ||^2 .

We compute this loss function identically on all the recorded sessions and take the averaged gradients to update the parameters. Each evaluation of this loss function requires the optimal trial assignment π, which can be computed with the Hungarian algorithm <cit.> (see linear_sum_assignment in scipy.optimize for an implementation). This is not the only way to define a distance between distributions of statistics 𝒯'_trial. In fact, this choice poses a potential problem because the optimization over π is a discrete optimization problem, so we have to assume that π is a constant with respect to the parameters when computing the loss gradients. We also tested alternative choices relying on a relaxation of the hard assignment into a smooth and differentiable bi-stochastic matrix. This results in the soft trial matching loss function, which replaces the optimization over π by the Sinkhorn divergence <cit.> (see <cit.> for a PyTorch implementation). In practice, to minimize both ℒ_trial (either the soft or hard version) and ℒ_neuron simultaneously, we optimize them in an additive fashion with a parameter-free multi-task method from deep learning which re-weights the two loss functions to ensure that their gradients have comparable scales (see <cit.> for a similar implementation).

§ SIMULATION RESULTS

Validation using an artificial dataset We generated an artificial dataset with two distinct areas of 250 neurons each to showcase the effect of trial-to-trial variability. In this dataset, A1 (representing a sensory area) always responds to the stimulus, while A2 (representing a motor area) responds to the stimulus in only 80% of the trials (the firing rates of neurons in the artificial dataset are shown in Figure <ref>B with light shades). This is a toy representation of the variability observed in the real data recorded in mice, and we construct the artificial data so that a recording resembles a hit trial ("hit-like") if the transient activity in A2 is higher than 30 Hz (otherwise it is a "miss-like" trial). From the simulation results in Figure <ref>B-C, we observe that the models trained with trial matching (either soft trial matching or hard trial matching) faithfully regenerate this bimodal response distribution ("hit-like" and "miss-like") in A2. In this dataset we saw little difference between the solutions of soft and hard trial matching; if anything, soft trial matching reached its optimal performance in fewer iterations (see appendix). As expected, when the model is only trained to minimize the neuron loss for trial-averaged statistics, it cannot stochastically generate this bimodal distribution and instead consistently generates a noisy average response.
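As a concrete illustration of the hard version of this loss, here is a minimal sketch using SciPy's Hungarian solver. The feature vectors are assumed to be precomputed and stacked into arrays; the random sub-selection of trials is omitted, and the names and shapes are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hard_trial_matching_loss(T_model, T_data):
    """Mean-squared error under the optimal pairing of generated and recorded trials.

    T_model : (K, F) per-trial feature vectors from the model
    T_data  : (K', F) per-trial feature vectors from the recordings
    """
    K = min(len(T_model), len(T_data))
    a, b = T_model[:K], T_data[:K]
    # Pairwise squared distances between every generated and recorded trial
    cost = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)          # Hungarian algorithm
    return cost[rows, cols].sum()
```

In a PyTorch training loop, the assignment indices would be treated as constants (no gradient flows through the discrete optimization), which is exactly the approximation discussed above for the hard variant.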
Delayed whisker tactile detection dataset We then apply our modeling approach to the real large-scale electrophysiology recordings from <cit.>. After optimization, we verify quantitatively that our model generates activity that is similar to the recordings in terms of trial-averaged statistics. First, we see that the 1500 neurons in the network exhibit a realistic diversity of averaged firing rates: the distribution of neuron firing rates is log-normal and matches closely the distribution extracted from the data in Figure <ref>B. Second, the single-neuron PSTHs of our model are a close match to the PSTHs from the recordings. This can be quantified by the Pearson trial-averaged correlation between the generated and held-out test trials which we did not use for parameter fitting. We obtain an averaged Pearson correlation of 0.30 ± 0.01 which is very close to the Pearson correlation obtained when comparing the training and testing sets 0.31 ± 0.01. Figure <ref>C shows how the trial-averaged correlation is distributed over neurons. As expected, this trial averaged metric is not affected if we do not use trial matching (0.30±0.01). To quantify how the models capture the trial-to-trial variability, we then quantify how the distributions of neural activity and jaw movement are consistent between data and model. So we need to define the trial-matched Pearson correlation to compute a Pearson correlation between the distribution of trial statistics T_trial' which are unordered sets of trials. So we compute the optimal assignment π between trial pairs from the data and the recordings, and we report the averaged Pearson correlation over all trial pairs. Between the data and the model, we measure a trial-matched Pearson correlation of 0.48 ± 0.01, with a performance ceiling at 0.52 ± 0.01 obtained by comparing the training and testing set directly (see Figure <ref>C for details). For reference, the model without trial matching has a lower trial-matched Pearson correlation 0.28±0.003. Successful recovery of trial type distribution While the neuronal activity is recorded, the behavioral response of the animal is also variable. When mice receive a stimulation they perform correctly with a 66% hit rate, while in the absence of a stimulus, mice still falsely lick with a 20% false alarm rate. Even in correct trials, the neural activity reflects variability which is correlated to uninstructed jaw and tongue movements <cit.>. We evaluate the distribution of trial types (hit, miss, correct rejection, and false alarm) from our fitted network model. Indeed, the 95% confidence intervals of the estimated trial type frequencies are always overlapping between the model and the data (see Figure <ref>A). In this Figure, we classify the trial type with a nearest-neighbor-like classifier using only the neural activity (see appendix). In contrast, a model without the trial matching would fail completely because it always produces averaged trajectories instead of capturing the multi-modal variability of the data as seen in Figure <ref>A. With trial matching it is even possible to classify trial types using jaw movement. To define equivalent trial types in the model, we rely on the presence or absence of the stimulation and a classifier to identify a lick action given the jaw movements. This classifier is a multi-layer perceptron trained to predict a lick action on the water dispenser given the recorded jaw movements (like in the data, it occurs that the model “moves” the jaw without inducing a lick action). 
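The trial-matched Pearson correlation described above can be computed with the same optimal-assignment machinery as the loss. A minimal sketch follows; the feature extraction is assumed to have been done beforehand, and SciPy is assumed for the assignment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def trial_matched_pearson(T_model, T_data):
    """Average Pearson correlation over optimally assigned trial pairs."""
    K = min(len(T_model), len(T_data))
    a, b = T_model[:K], T_data[:K]
    cost = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    corrs = [np.corrcoef(a[i], b[j])[0, 1] for i, j in zip(rows, cols)]
    return float(np.mean(corrs))
```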
After optimization with trial matching, the jaw movement y^t is part of the fitted statistics 𝒯_trial, so the distribution of jaw movements generated by the fitted model is similar to the recorded one and the resulting trial type distribution remains consistent with the data. In Figure <ref>B we show population-averaged activity traces where the jaw is used to determine the trial type.

Unsupervised discovery of modes of variability So far we have analyzed whether the variability among the four main trial types was expressed in the model, but the existence of these four trial types is not enforced explicitly in the loss function. Rather, the trial matching loss function aims to match the overall statistics of the distributions, and it has discovered these four main modes of trial-to-trial variability without explicit supervision. A consequence is that our model may also have generated other modes of variability that are needed to explain the full distribution of recorded data. To display the full distribution of generated trials, we represent the neural activity of 400 generated trials in 2D in Figure <ref>C. Formally, we apply UMAP to the sub-selection of 𝒯_trial which excludes the jaw components: (𝒯_wS1, … 𝒯_ALM). Importantly, this representation of the trial distribution in a 2D projection is only possible with a generative model like ours. Otherwise, it would be nontrivial to define feature vectors for individual recorded trials because of the missing data: in each session, only a couple of areas are recorded simultaneously. However, to confirm that the generated distribution is consistent with the data, we display template vectors for each trial condition c that are calculated from the recorded data. These templates are drawn with stars in Figure <ref>C and are computed as follows: the coefficient 𝒯_A,c^𝒟 of a template vector is obtained by averaging the population activity of area A over all recorded trials of condition c from all sessions (see appendix for details); these averaged vectors are then concatenated and projected into the 2D UMAP space. The emerging distribution in this visualization is grouped into clusters. We observe that the template vectors representing the correct rejection, miss, and false alarm trials are located at the center of the corresponding clusters of generated trials. More surprisingly, the generated hit trials are split into two clusters (see the two boxed clusters in Figure <ref>C). This can be explained by a simple feature: 85% of the generated hit trials in the left-hand cluster of panel <ref>C have intense jaw movements during the delay period (max_t |y^t - y^{t-1}| > 4δ, where δ is the standard deviation of |y^t - y^{t-1}| in the 200 ms before whisker stimulation). In fact, a similar criterion had been used in <cit.> to separate the hit trials in the recorded data, so we also refer to them as "active hit" and "quiet hit" trials and show the corresponding population activity in Figure <ref>D. This shows that our algorithm has captured, without supervision, the same partition of trial types that neuroscientists have used to describe this dataset. We conclude that our modeling approach can be used for a hypothesis-free identification of modes of trial-to-trial variability, even when they reflect task-irrelevant behavior.

§ DISCUSSION

We introduced a generative modeling approach where a data-constrained RSNN is fitted to multi-session electrophysiology data.
The two major innovations of this paper are (1) the technical progress towards multi-session RSNN fitted with automatic differentiation, and (2) a trial matching loss function to match the trial-to-trial variability in recorded and generated data. Interpretable mechanistic model of activity and behavior Our model has a radically different objective in comparison with other deep-learning models: our RSNN is aimed to be biophysically interpretable. In the long term, we hope that this method will be able to capture biological mechanisms (e.g. predicting network structure, causal interaction between areas and anatomical connectivity), but in this paper, we have focused on numerical and methodological questions which are getting us one step closer to this long-term objective. Mechanisms of latent dynamics A long-standing debate in neuroscience is whether the brain computes with low-dimensional latent representations and how that is implemented in a neural circuit. Deep auto-encoders of neural activity like LFADS <cit.> can indeed generate trial-to-trial variability from low-dimensional latent representations. By construction, the variability is sourced by the latent variable which contains all the trial-specific information. This is in stark contrast with our approach, where we see the emergence of structured manifolds in the trial-to-trial variability of the RSNN (see the UMAP representation of Figure <ref>C), although we did not enforce the presence of low-dimensional latent dynamics. Structure in the trial-to-trial variability emerges because the RSNN is capable of transforming the unstructured noise sources (stochastic spikes and Gaussian input current) into a low-dimensional trial-to-trial variability – a typical variational auto-encoder setting would not achieve this. Note that it is also possible however to add a random low-dimensional latent as a source of low-dimensional variability like in LFADS. In the appendix, we reproduce our results on the multi-session dataset from <cit.> while assuming that all voltages v_i,k^t have a trial-specific excitability offset ξ_i,k using a 5-dimensional gaussian noise ψ_k and a one-hidden-layer perceptron F_θ such that ξ_i,k=F_θ,i(ψ_k). We observe that this latent noise model accelerates drastically the optimization, probably because ξ_i,k is an ideal noise source for minimizing ℒ_trial. However, the final solution achieves similar fitting performance metrics so, our method demonstrates that the extra assumption of a low-dimensional input is not necessary to generate realistic variability. Arguably, providing this low-dimensional input might even be counterproductive if the end goal is to identify the mechanism by which the circuit produces the low-dimensional dynamics. Alternative loss functions to capture variability The main alternative methods to constrain the trial-to-trial variability would be likelihood-based approaches <cit.> or spike-GANs <cit.>. These methods are appealing as they do not depend on the choice of trial statistics 𝒯_trial. Since these methods were never applied with a multi-session data-constrained RSNN we explored how to extend them to our setting and compare the results. We tested these alternatives on the Artificial dataset in the appendix. The likelihood of the recorded spike trains <cit.> cannot be defined with multiple sessions because we cannot clamp neurons that are not recorded (see <cit.> for details). 
The closest implementation that we could consider was to let the network simulate the data “freely”, which therefore requires an optimal assignment between recorded and generated data (so it is a form of trial-matched likelihood). With this loss function, we could not retrieve the bimodal hit versus miss trial type distribution unless it is optimized jointly with ℒ_trial. We also tested the implementation of a spike-GAN discriminator. In GANs the min-max optimization is notoriously hard to tune, and we were unable to train our generator with a generic spike-GAN discriminator from scratch (probably because the biological constraints of our generator affect the robustness of the optimization). In our hands, it only worked when the GAN discriminator was fed directly with the trial statistics 𝒯_trial and the network was jointly fitted to the trial-averaged loss ℒ_neuron. This shows that a GAN objective and the trial matching loss function play a similar role. We conclude that both of these clamping-free methods are promising for fitting data-constrained RSNNs. What differs between them, however, is that trial matching replaces the discriminator with the optimal assignment π and the statistics 𝒯, which are parameter-free, making the method easy to use and numerically robust. It is conceivable that, in future work, the best results will be obtained by combining trial matching with other GAN-like generative methods.

This research was supported by the Swiss National Science Foundation (no. 31003A_182010, TMAG-3_209271, 200020_207426), and Sinergia Project CRSII5_198612. Many thanks to Lénaïc Chizat, James Isbister, Shuqi Wang, and Vahid Esmaeili for their helpful discussions.

§ APPENDIX

§.§ Input Spikes

Some input neurons model thalamic input or random cortical input noise. In practice, these inputs are simulated with 300 Poisson neurons, as seen in Figure <ref>. Two-thirds encode the thalamic sensory inputs and the remainder can be interpreted as random cortical neurons. All these neurons have a background firing activity of 5 Hz, but the 200 thalamic neurons have a sharp and phasic increase to 20-40 Hz for a 10 ms window when the stimuli are presented. More precisely, the increase in activity starts 4 ms after the actual stimulation, to account for the time the tactile/auditory signal needs to reach the thalamus. The remaining Poisson neurons are mostly inhibitory and help the network balance its activity, since all other input spikes are excitatory. Figure <ref>A shows the input spikes for two trials, one with and one without a whisker stimulus.

§.§ Alternative methods to fit trial variability

The two main alternative methods to constrain the trial variability of a generative model are likelihood-based approaches and the spike-GAN. In theory, these two methods can fit higher-order statistics of the data, so they should be able to capture trial-to-trial variability. As explained below, in practice we did not find that they could easily replace the trial matching loss function. To compare these two methods with trial matching, we tested them on the artificial dataset (see Figure <ref>). For the likelihood-based approaches, we made two main adjustments to make them applicable to our setting. First, we did not “clamp” the neural activity during training, meaning that when the network evaluated timestep t+1, the activity was not fixed to the recordings from timestep t. This was necessary in our case because we fitted multiple sessions.
Second, we optimally assigned the simulated trials with the recorded trials as in the trial matching loss, ℒ_trial. This resulted in the following loss function: ℒ = min_π∑_k BCE(z_k, z_π(k)^𝒟) , where BCE is the binary cross-entropy loss. We call this method trial-matched likelihood. For the spike-GAN, the training of a generic spike-train classifier as a discriminator was not successful in our hands. We speculate that there are two main reasons for that. First, the data are very noisy from trial to trial, and even first-order statistics sometimes are not consistent, see Figure <ref>C. Second, our generator is a recurrent spiking neural network that is more difficult to train than the deep-learning spike-train generators used previously. We could make it work however with a modified discriminator, which does not receive the full spike trains but only the 𝒯_trial statistics. We call this variant spike𝒯-GAN. Since spike𝒯-GAN has access only to the 𝒯_trial we minimize the GAN loss function jointly with the ℒ_neuron loss. Comparison results As shown in Figure <ref>, the trial-matched likelihood implementation fails to reconstruct the bimodal distribution of the data, while the spike𝒯-GAN reaches similar accuracy and behavior as our trial matching methods. In Figure <ref>C we also observed that the spike𝒯-GAN finds the correct bi-modal distribution with roughly the correct proportion of "hit-like" and "miss-like" trials. In this sense, spike𝒯-GAN loss function and the trial matching loss functions have a similar function. However despite the simplicity of the discriminator which already receives population average statistics, both hard and soft trial matching converge much faster than the spike𝒯-GAN. It happens because the discriminator needs to be trained along with the generator in GANs, while the comparable competitive optimization is implemented at each iteration by π in trial matching. We also see that the soft version converges faster than the hard, either because the computation of π is taken into account in the auto-differentiation, or because it implicitly regularizes the distributions which can be favorable if 𝒯 has high dimensions <cit.>. §.§ Trial type classification In the context of studying behavioral outcomes and neuronal activity, trial-type classification plays a crucial role in understanding the underlying processes. This section describes two approaches employed for trial type classification: the template matching (nearest-neighborhood-like classifier) and Multilayer Perceptron (MLP) lick classifier. Template Matching or Nearest-Neighbor-Like Classifier We calculate the per-area and per-trial type peristimulus time histogram (PSTH) using the recordings across all experimental sessions, denoted as 𝒯_A, c^𝒟. By concatenating these vectors, we obtain a single vector per trial condition, denoted as 𝒯_c^𝒟 = (𝒯_wS1, c^𝒟…𝒯_tjM1, c^𝒟). These vectors serve as templates since they provide a comprehensive description of the neuronal activity for each trial type. Next, we generate an equivalent signal, 𝒯_trial, for each trial using our model. By comparing this signal to the template vectors, we determine which trial type is the closest. This trial type classification method, which we refer to as template matching or nearest-neighbor-like classifier, enables us to identify the trial type based only on the model's neural activity, see Figure <ref>A. The vectors described here are the ones that were used for the UMAP dimensionality reduction in Figure <ref>C. 
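A minimal sketch of the nearest-neighbour (template matching) classification described next in this appendix is given below; the Euclidean distance and the dictionary layout of the templates are implementation assumptions.

```python
import numpy as np

def classify_trial(T_trial, templates):
    """Nearest-neighbour trial-type classification.

    T_trial   : (F,) concatenated per-area population-rate features of one trial
    templates : dict mapping trial type -> (F,) template vector from the recordings
    """
    dists = {c: np.linalg.norm(T_trial - t) for c, t in templates.items()}
    return min(dists, key=dists.get)
```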
Multilayer Perceptron (MLP) Lick Classifier We utilize a multilayer perceptron neural network to detect whether a trial corresponds to a lick or no-lick event. The input is the filtered and binned jaw movement. The classifier consists of three hidden layers, each comprising 128 hidden units. The Rectified Linear Unit (ReLU) is chosen as the activation function for the hidden layers, and the sigmoid function for the output layer, Figure <ref>B shows the network architecture. To optimize the MLP lick classifier, we use the Binary Cross Entropy loss function and with the ADAM optimizer we iteratively update the network's weights and biases, minimizing the loss function. After the training process, the MLP lick classifier achieves 93.5% correct classification on the trials of the testing set from the recordings and it can be used directly to classify the model-generated jaw movements. In Figure <ref>C, we can see that both the template matching and MLP lick classifiers reproduce the trial-type distribution observed in the recordings. §.§ Mechanisms of latent dynamics To incorporate variability into a generative model and achieve variable responses, two main sources are typically considered: noise and initial conditions. In the network described in the main text, we use noise sources that change from timestep to timestep, making it challenging to introduce variables that change with slow time constants like satiation. This limitation motivates an alternative approach, which involves using a trial-specific noise source as described in the discussion. Given our objective of generating a multimodal distribution from the same input, it is natural for the model to exploit a trial-specific noise source. As depicted in Figure <ref>A, this addition significantly accelerates the optimization process. However, it is important to note that this additional noise source can introduce non-causal connectivity.
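For completeness, here are minimal PyTorch sketches of two components described in this appendix: the MLP lick classifier (three hidden layers of 128 units, ReLU activations, sigmoid output, trained with binary cross-entropy and Adam) and the trial-specific latent offset ξ_{i,k} = F_θ,i(ψ_k) with a one-hidden-layer perceptron. The hidden width of the offset network, the training-loop details, and all names are assumptions.

```python
import torch
import torch.nn as nn

class LickClassifier(nn.Module):
    """MLP lick detector: three hidden layers of 128 units, ReLU, sigmoid output."""
    def __init__(self, n_bins):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, jaw):                 # jaw: (batch, n_bins) filtered jaw trace
        return self.net(jaw).squeeze(-1)    # probability of a lick

class TrialLatentOffset(nn.Module):
    """Trial-specific excitability offset from a 5-dimensional Gaussian latent."""
    def __init__(self, n_neurons, latent_dim=5, hidden=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_neurons))

    def forward(self, n_trials):
        psi = torch.randn(n_trials, self.latent_dim)   # one latent draw per trial
        return self.net(psi)                           # (n_trials, n_neurons) offsets

def train_lick_classifier(model, jaw, lick, epochs=200, lr=1e-3):
    """Training sketch: binary cross-entropy minimized with Adam."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = bce(model(jaw), lick.float())
        loss.backward()
        opt.step()
    return model
```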
http://arxiv.org/abs/2306.05280v1 (submitted 8 June 2023)
Chiral EFT calculation of neutrino reactions in warm neutron-rich matter
Eunkyoung Shin, Ermal Rrapaj, Jeremy W. Holt, and Sanjay K. Reddy
Categories: nucl-th (primary), astro-ph.HE, hep-ph
Author affiliations: Cyclotron Institute, Texas A&M University, College Station, TX 77843, USA; Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA; NERSC, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA; University of California, Berkeley, CA 94720, USA; RIKEN iTHEMS, Wako, Saitama 351-0198, Japan; Institute for Nuclear Theory, University of Washington, Seattle, WA 98195, USA

Neutrino scattering and absorption rates of relevance to supernovae and neutron star mergers are obtained from nuclear matter dynamical structure functions that encode many-body effects from nuclear mean fields and correlations. We employ nuclear interactions from chiral effective field theory to calculate the density, spin, isospin, and spin-isospin response functions of warm beta-equilibrium nuclear matter. We include corrections to the single-particle energies in the mean field approximation as well as vertex corrections resummed in the random phase approximation (RPA), including, for the first time, both direct and exchange diagrams. We find that correlations included through the RPA redistribute the strength of the response to higher energy for neutrino absorption and lower energy for antineutrino absorption. This tends to suppress the absorption rate of electron neutrinos across all relevant energy scales. In contrast, the inclusion of RPA correlations enhances the electron antineutrino absorption rate at low energy and suppresses the rate at high energy. These effects are especially important at high density and in the vicinity of the neutrino decoupling region. Implications for heavy element nucleosynthesis, electromagnetic signatures of compact object mergers, supernova dynamics, and neutrino detection from galactic supernovae are discussed briefly.

§ INTRODUCTION

Neutrinos dominate energy, momentum, and lepton number transport in extreme astrophysical phenomena. Their scattering and absorption rates in hot and dense nuclear matter play a critical role in core-collapse supernovae <cit.>, proto-neutron star cooling <cit.>, and neutron star mergers <cit.>. The effects of nuclear mean fields and correlations on neutrino reaction rates are encoded in dynamical structure functions related to the imaginary part of nuclear response functions. In the past, nuclear matter response functions have been studied using a variety of nuclear interactions and many-body approximations, including nonrelativistic and relativistic mean field models <cit.>, Fermi liquid theory <cit.>, the virial expansion <cit.>, and pseudopotentials <cit.>. Recent studies <cit.> have highlighted the important role of nuclear mean fields for calculating charged-current reactions, such as neutrino and anti-neutrino absorption, in the supernova neutrinosphere. Here the large asymmetry between proton and neutron densities leads to a strong splitting of the proton and neutron mean fields that enhances neutrino absorption and suppresses anti-neutrino absorption.
This, in turn, affects the composition of matter ejected from supernovae and neutron star mergers as well as neutrino flavor and energy distributions that terrestrial neutrino detectors may observe. In addition, the use of high-precision nucleon-nucleon interactions <cit.> to calculate neutrino scattering and reaction cross sections in hot and dense matter has illustrated the important role of contributions beyond one-pion-exchange together with nonperturbative resummations of the nucleon-nucleon interaction in the particle-particle channel (see also Ref. <cit.> for the role of these effects on neutrino production). At zero energy transfer, the momentum dependence of the static density response function of neutron-rich matter is related to the poorly known isovector gradient contribution in nuclear energy density functionals used to model neutron star crusts <cit.>. Recent quantum Monte Carlo simulations of neutron matter using high-precision nuclear forces have provided constraints on the isovector gradient contribution through studies of neutron drops <cit.> and the static density response function <cit.>. In this study, we employ nuclear forces based on chiral effective field theory to investigate beyond-mean-field corrections to spin and density response functions of nuclear matter under ambient conditions typical of supernova and neutron star merger neutrinospheres, where the matter density varies in the range ρ = 10^11-10^13 g/cm^3 and the temperature varies in the range T=5-10 MeV. Specifically, we calculate Hartree-Fock mean field corrections as well as resummed particle-hole vertex corrections in the random phase approximation (RPA). In contrast to naive order-by-order perturbation theory, the RPA with self-consistent Hartree-Fock mean fields provides a thermodynamically consistent “conserving approximation” <cit.> for which the computed dynamic structure function is guaranteed to respect sum rules, detailed balance, and positivity for all values of the energy and momentum transfer. We find that at the densities of relevance, RPA vertex corrections can be as important as the mean field effects studied in earlier work and should be included in a consistent description of nuclear matter response functions for astrophysical applications. The paper is organized as follows. In Section <ref>, we present expressions for density and spin response functions in isospin-asymmetric nuclear matter at finite temperature with mean field and RPA vertex corrections. In particular, we outline an exact matrix inversion method <cit.> for calculating RPA response functions, including both direct and exchange terms for an arbitrary nucleon-nucleon potential. In Section <ref> we calculate neutral-current and charged-current density and spin response functions for a range of thermodynamic conditions in beta-equilibrium matter. Finally, we conclude with a summary and outlook in Section <ref>.

§ RESPONSE FUNCTIONS IN ISOSPIN-ASYMMETRIC NUCLEAR MATTER

In the present section we derive expressions for the first-order mean field and vertex corrections to the response functions of homogeneous nuclear matter at nonzero temperature. We also outline the calculation of the RPA response function employing a matrix eigenvalue method.
In the region of the supernova or neutron star merger neutrinospheres, where neutrinos decouple from nuclear matter and their free-streaming energy spectrum is set, the proton fraction is small Y_p ≈ 0.05-0.10, the temperature is warm T = 5 - 10 MeV, and the matter is dilute ρ≈ 10^11-10^13 g/cm^3. We assume beta equilibrium, which provides the restriction μ_n - μ_p - μ_e = 0 on the proton and neutron chemical potentials and hence their number densities n_i = 2/(2π)^3∫ d^3 k 1/1+e^(e_i(k)-μ_i)/T, for i={n,p,e}. Together with charge neutrality ρ_e = ρ_p, the above equations must be solved self consistently to determine the proton, neutron, and electron chemical potentials for a fixed baryon number density and temperature. In the mean field approximation, one also has to calculate the nucleon energy-momentum dispersion relation for protons e_p(k) = k^2 / (2M_p) + Σ_p(k), and likewise for neutrons, where Σ_p(k) is the self-energy. The Hartree-Fock conserving approximation <cit.> consists of computing the irreducible self-energy to first order in perturbation theory and the response function to all orders in the random phase approximation. The resulting perturbative approximation to the response function is then guaranteed to satisfy properties of the exact response function, such as sum rules and strictly positive dynamical structure functions. In the present work we employ nucleon-nucleon (NN) potentials derived from chiral effective field theory (ChEFT) <cit.>, which provides a systematic expansion of nuclear two and many-body forces. In ChEFT the long-range part of the nuclear force comes from pion-exchange processes constrained by chiral symmetry, while the short-range part is encoded in a set of contact terms fitted to nucleon-nucleon scattering and deuteron properties. The high-momentum components of the ChEFT nuclear potentials employed in this study are regulated by exponential functions with a characteristic momentum scale Λ and smoothness parameter n: f(p,p') = exp[-(p'/Λ)^2n-(p/Λ)^2n], where p⃗ and p⃗^ ' are the incoming and outgoing relative momenta of the two nucleons. We employ ChEFT potentials with cutoff scales Λ = 414, 450, 500 MeV and associated smoothness parameters n=10, 3, 2 respectively. When supplemented by the ChEFT three-body force that appears at next-to-next-to-leading order (N2LO) in the chiral expansion, this set of nuclear potentials has been shown to predict well the properties of nuclear matter, such as the equation of state <cit.>, optical potential <cit.>, and quasiparticle interaction in Fermi liquid theory <cit.>. In dilute neutron-rich matter, three-body forces give negligible contribution to single-particle and response properties of the medium. We therefore neglect three-body forces in the present work. §.§ Nuclear matter response functions at 0th order in perturbation theory In Figure <ref> we show the zeroth-order and first-order perturbation theory contributions to nuclear response functions. The wavy lines denote the coupling to a W or Z boson defined in the non-relativistic limit by its vector/axial vector and isoscalar/isovector nature. The dashed lines in Figure <ref> represent the nucleon-nucleon interaction. 
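The beta-equilibrium conditions above can be solved numerically for the chemical potentials at fixed baryon density and temperature. The following Python sketch does this under simplifying assumptions that are not the paper's Hartree-Fock-consistent treatment: free nonrelativistic nucleon dispersions, massless (ultra-relativistic) electrons, and an arbitrary initial guess for the root finder.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

HBARC, M_N = 197.327, 939.0        # MeV fm, nucleon mass in MeV

def fermi_density(mu, T, dispersion, kmax=1500.0):
    """n = (2/(2π)^3) ∫ d^3k / (1 + exp((e(k)-μ)/T)), returned in fm^-3."""
    integrand = lambda k: k**2 / (1.0 + np.exp(np.clip((dispersion(k) - mu) / T, -60, 60)))
    val, _ = quad(integrand, 0.0, kmax, limit=200)
    return val / (np.pi**2 * HBARC**3)

def beta_equilibrium(n_b, T):
    """Solve for (μ_n, μ_p) at baryon density n_b (fm^-3) and temperature T (MeV),
    imposing n_n + n_p = n_b, charge neutrality n_e = n_p, and μ_e = μ_n - μ_p."""
    nucleon = lambda k: k**2 / (2.0 * M_N)   # nonrelativistic kinetic energy
    electron = lambda k: k                   # massless-electron approximation

    def residuals(mus):
        mu_n, mu_p = mus
        n_n = fermi_density(mu_n, T, nucleon)
        n_p = fermi_density(mu_p, T, nucleon)
        n_e = fermi_density(mu_n - mu_p, T, electron)
        return [n_n + n_p - n_b, n_e - n_p]

    return fsolve(residuals, x0=[0.0, -20.0])

# Example: conditions representative of the neutrinosphere region
mu_n, mu_p = beta_equilibrium(n_b=0.02, T=8.0)
```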
The zeroth-order contributions to the neutral-current density response function χ_ρ, the neutral-current spin response function χ_σ, the charged-current density response function χ_τρ for electron-neutrino absorption, and the charged-current spin response function χ_τσ for electron-neutrino absorption in isospin-asymmetric nuclear matter at nonzero temperature are given by

χ^(0)_ρ(q⃗,ω) = ∑_{s_1 s_2 t_1 t_2} ∫ dk⃗/(2π)^3 [f_{k⃗,t_1} - f_{k⃗+q⃗,t_2}] / [ω + e_{k⃗,t_1} - e_{k⃗+q⃗,t_2} + iη] δ_{s_1,s_2} δ_{t_1,t_2},

χ^(0)_σ(q⃗,ω) = ∑_{s_1 s_2 t_1 t_2} ∫ dk⃗/(2π)^3 [f_{k⃗,t_1} - f_{k⃗+q⃗,t_2}] / [ω + e_{k⃗,t_1} - e_{k⃗+q⃗,t_2} + iη] |⟨s_1|σ_z|s_2⟩|^2 δ_{t_1,t_2},

χ^(0)_τρ(q⃗,ω) = ∑_{s_1 s_2} ∫ dk⃗/(2π)^3 [f_{k⃗,n} - f_{k⃗+q⃗,p}] / [ω + e_{k⃗,n} - e_{k⃗+q⃗,p} + iη] δ_{s_1,s_2},

χ^(0)_τσ(q⃗,ω) = ∑_{s_1 s_2} ∫ dk⃗/(2π)^3 [f_{k⃗,n} - f_{k⃗+q⃗,p}] / [ω + e_{k⃗,n} - e_{k⃗+q⃗,p} + iη] |⟨s_1|σ_z|s_2⟩|^2,

where the sums are over the single-particle spin projections s_1, s_2 and isospin projections t_1, t_2 of the particle-hole pair, f_{k⃗,t} is the Fermi-Dirac distribution function for a nucleon with momentum k⃗ and isospin projection t, and e_{k⃗,t} is the single-particle energy for a nucleon with momentum k⃗ and isospin projection t. In the case of noninteracting protons and neutrons, the density and spin response functions in the neutral-current or charged-current channels are identical since ∑_{s_1 s_2} δ_{s_1,s_2} = ∑_{s_1 s_2} |⟨s_1|σ_z|s_2⟩|^2 = 2. In addition, the single-particle energies in Eq. (<ref>) are simply the free-space kinetic energies e_k⃗ = k^2/(2M). Including effects from momentum-dependent mean fields for protons Σ_p(k) and neutrons Σ_n(k), the single-particle energies that enter in the Fermi-Dirac distribution functions and the energy denominators of Eq. (<ref>) become

e_{k⃗,n} = k^2/(2M) + Σ_n(k),    e_{k⃗,p} = k^2/(2M) + Σ_p(k).

At the Hartree-Fock level (first-order perturbation theory), the proton and neutron single-particle potentials can be well described by the effective mass approximation

e_{k⃗,n} = k^2/(2M_n^*) + U_n,    e_{k⃗,p} = k^2/(2M_p^*) + U_p.

In the case that M^*_{n,p} ≃ M, the mean field strengths U_n and U_p modify only the energy denominators in Eq. (<ref>), which results in a shift of the imaginary part of the response functions. Although the mean field energies e_{k⃗,n} and e_{k⃗,p} also enter in the definition of the Fermi-Dirac distribution functions, a simple mean field shift will be absorbed into a redefinition of the chemical potential. The inclusion of nuclear mean fields at the Hartree-Fock level in the calculation of the 0th-order response functions χ^(0,MF)(q,ω) corresponds to iterating diagrams of type (b) and (c) in Figure <ref> to all orders in perturbation theory. For neutrino scattering and absorption, the main effect will be a shift of the ω-dependent imaginary response for fixed momentum transfer q⃗. For additional technical details regarding the calculation of nucleon single-particle potentials starting from realistic nucleon-nucleon interactions, the reader is referred to Refs. <cit.>.

§.§ Random Phase Approximation (RPA)

In many-body perturbation theory, an order-by-order calculation of response functions can lead to unphysical dynamic structure functions that do not satisfy constraints, such as positivity and sum rules that relate the long-wavelength response to thermodynamics.
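Before introducing the resummation, the zeroth-order charged-current density response defined above can be evaluated numerically with a simple grid sum. In this sketch the grid sizes, the small imaginary part η, the constant mean-field shifts U_n and U_p, and the unit conventions are illustrative assumptions; the imaginary part is obtained from the complex result.

```python
import numpy as np

HBARC, M_N = 197.327, 939.0   # MeV fm, MeV

def fermi(e, mu, T):
    return 1.0 / (1.0 + np.exp(np.clip((e - mu) / T, -60, 60)))

def chi0_charged(q, omega, mu_n, mu_p, T, U_n=0.0, U_p=0.0,
                 eta=0.5, nk=400, nmu=200, kmax=600.0):
    """Zeroth-order charged-current density response χ^(0)_τρ(q,ω) in fm^-3 MeV^-1.
    Momenta and energies are in MeV; spin sum gives the overall factor of 2."""
    k = np.linspace(1e-3, kmax, nk)
    x = np.linspace(-1.0, 1.0, nmu)            # cosine of the angle between k and q
    K, X = np.meshgrid(k, x, indexing="ij")
    Kq = np.sqrt(K**2 + q**2 + 2.0 * K * q * X)   # |k + q|
    e_n = K**2 / (2.0 * M_N) + U_n
    e_p = Kq**2 / (2.0 * M_N) + U_p
    num = fermi(e_n, mu_n, T) - fermi(e_p, mu_p, T)
    den = omega + e_n - e_p + 1j * eta
    integrand = K**2 * num / den
    dk, dx = k[1] - k[0], x[1] - x[0]
    # factor 2 (spin) * 2π (azimuth) / (2π)^3 = 1/(2π^2); divide by (ħc)^3 for fm^-3
    return integrand.sum() * dk * dx / (2.0 * np.pi**2) / HBARC**3
```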
The formal solution to this problem obtained by Baym and Kadanoff <cit.> is to construct so-called “conserving approximations” that relate the one-body and two-body propagators in such a way as to maintain conservation laws of energy, momentum, angular momentum, and particle number. In the Hartree-Fock conserving approximation, the one-body propagator is constructed at the self-consistent mean field level, while the two-body propagator resums to all orders the direct and exchange particle-hole diagrams in the RPA. The RPA leads to a linear inhomogeneous integral equation for the particle-hole vertex function. The discretized version of this equation can be re-expressed <cit.> as a linear algebraic equation that can be solved through matrix inversion. In the rest of this section, we will outline the solution of the RPA response function and present several benchmarks to test the method. In Figure <ref>, we show the class of response function diagrams to be resummed in the RPA, which includes all particle-hole bubble diagrams, ladder diagrams, and combinations thereof with dressed intermediate-state propagators in the mean field approximation. This infinite set of diagrammatic contributions to nuclear matter response functions cannot be resummed directly but instead must proceed through the resummed particle-hole density vertex function L(k⃗_1 s_1;q⃗ ω). In the case of the charged-current density response function, the density vertex function satisfies χ_τρ(q⃗,ω) = ∑_s∫dk⃗/(2π)^3 L(k⃗ s;q⃗ ω). The 0th-order and 1st-order vertex functions are therefore given by L^(0)(k⃗ s; q⃗ ω) = f_k⃗, n - f_k⃗ + q⃗, p/ω + e_k⃗, n - e_k⃗ +q⃗, p + i η, L^(1)(k⃗ s; q⃗ ω) = f_k⃗, n - f_k⃗ + q⃗, p/ω+e_k⃗, n-e_k⃗ +q⃗, p + i η∑_s^'∫dk⃗^'/(2π)^3 ( f_k⃗^', n - f_k⃗^' + q⃗,p ) ⟨k⃗k⃗^' +q⃗, s s^', n p | V̅ | k⃗ +q⃗ k⃗^', s s^', p n ⟩/ω + e_k⃗^', n-e_k⃗^' + q⃗, p + i η. Summing the higher-order particle-hole bubble and ladder diagrams, one obtains the integral equation L(k⃗ s;q⃗ ω) = L_0(k⃗ s;q⃗ ω) + L_0(k⃗ s;q⃗ ω) ×∑_s^'∫dk⃗^'/(2π)^3⟨k⃗k⃗^' +q⃗, s s^', n p | V̅ | k⃗ +q⃗ k⃗^', s s^', p n ⟩ × L(k⃗^' s^';q⃗ ω), shown diagrammatically in Figure <ref>. For a spin-saturated system, the vertex function for the density response function is independent of spin. One can then average Eq. (<ref>) over the spin to obtain L(k⃗;q⃗ ω) = L_0(k⃗;q⃗ ω) + L_0(k⃗;q⃗ ω) ∫dk⃗^'/(2π)^3 L(k⃗^';q⃗ ω) × [ 1/2∑_s s^'⟨k⃗k⃗^' +q⃗, s s^', n p | V̅ | k⃗ +q⃗ k⃗^', s s^', p n ⟩ ], where for convenience we will denote the quantity in squared brackets as V(k⃗,k⃗^'), suppressing the explicit dependence on q⃗. Writing the integral in Eq. (<ref>) as a summation over a discrete set {k⃗_1, k⃗_2, …} of momentum-space mesh points with associated mesh weights {w_1, w_2, …}, one can rewrite Eq. (<ref>) as a matrix equation whose formal solution is L = [ N^-1 ( E + ( ω + i η) 1 ) - V ]^-1 B, where L is a vector with elements L = [ L(k⃗_1;q⃗ ω); L(k⃗_2;q⃗ ω); ⋮ ], N is a diagonal matrix with elements N = [ f_k⃗_1, n - f_k⃗_1 + q⃗, p 0 ⋯; 0 f_k⃗_2, n - f_k⃗_2 + q⃗, p ⋯; ⋮ ⋮ ⋱ ], E is a diagonal matrix with elements E = [ e_k⃗_1, n - e_k⃗_1 + q⃗, p 0 ⋯; 0 e_k⃗_2, n - e_k⃗_2 + q⃗, p ⋯; ⋮ ⋮ ⋱ ], V is the matrix V = [ w_1 V(k⃗_1, k⃗_1) w_2 V(k⃗_1, k⃗_2) ⋯; w_1 V(k⃗_2, k⃗_1) w_2 V(k⃗_2, k⃗_2) ⋯; ⋮ ⋮ ⋱ ], and B is a vector whose elements are all 1. Our goal will be to extract the imaginary part of the vertex function, which is related to nuclear matter dynamical structure functions and neutrino scattering cross sections. In order for the imaginary part of Eq. 
(<ref>) to be nonzero, the matrix N^-1(E + ω1)- V must be singular, which occurs when ω takes on the values defined by (NV-E) | ℓ⟩ = ω_ℓ | ℓ⟩. In the vicinity of ω^ℓ, one can write [ N^-1 ( E + ( ω + i η) 1 ) - V ]^-1 = 1/⟨ℓ | N^-1 | ℓ⟩ [ Pr/ω - ω_ℓ - i πδ(ω -ω_ℓ) ] | ℓ⟩⟨ℓ |, where Pr denotes the principal value. The imaginary part of the response function is then given by Im χ_τρ^ RPA(q⃗,ω) = -i π∑_ℓ⟨ B | ℓ⟩^2/⟨ℓ | N^-1 | ℓ⟩δ(ω - ω_ℓ). In terms of the discrete momentum-space mesh points and weights, we have Im χ_τρ^ RPA(q⃗,ω) = -i π∑_ℓ( ∑_i w_i | ℓ⟩_i )^2/∑_i (w_i | ℓ⟩_i)^2 (w_i N_i)^-1δ(ω - ω_ℓ), where | ℓ⟩_i denotes the i^ th element of the eigenvector | ℓ⟩. In practice, the finite number of δ functions in Eq. (<ref>) obtained by discretizing the integral in Eq. (<ref>) must be appropriately smeared to obtain a continuous response function. We employ the approximation δ_ϵ(ω) = 1/√(2 πϵ)e^-ω^2/2 ϵ. and find that it is always possible to choose a smearing length ϵ that leads to a converged result. The above eigenvalue method is suitable to resum both the direct and exchange RPA bubble diagrams to all orders. To benchmark the method, we consider the simplified case of iterating just the direct part of the nuclear potential to all orders, which for a local potential V(q) can be computed analytically through χ^ RPA(q⃗,ω) = χ^(0)(q⃗,ω)/1-V(q)χ^(0)(q⃗,ω). In Figure <ref> we show the energy dependence of the charged-current density response function in beta-equilibrium nuclear matter at density n=50 × 10^11 g/cm^3 and temperature T=7 MeV for a momentum transfer q=21 MeV assuming a scalar-isovector interaction V(q) = g^2/m^2+q^2τ⃗_1 ·τ⃗_2, where we take g=5 and m=700 MeV. In Figure <ref> we plot the response function in the approximation of noninteracting particles (blue dots), the exact analytical RPA result (black dots), and the numerical eigenvalue RPA result (red line) with a delta function smearing length of ϵ = 70× 10^-4 fm^-1. One sees excellent agreement between the exact and numerical RPA resummations. The treatment of spin response functions proceeds similarly, except that we obtain a pair of coupled equations for the spin-up and spin-down response functions, which effectively doubles the dimensionality of the vectors and matrices in Eqs. (<ref>) – (<ref>). For the spin response functions, we also do not use the spin-averaging approximation employed to obtain Eq. (<ref>). In Figure <ref> we show the energy dependence of the neutral-current spin response function in beta-equilibrium nuclear matter at density n=50 × 10^11 g/cm^3 and temperature T=7 MeV for a momentum transfer q=21 MeV assuming an isoscalar spin-spin interaction V(q) = g^2/m^2+q^2σ⃗_1 ·σ⃗_2, where we take g=10 and m=700 MeV. In Figure <ref> we plot the response function in the approximation of noninteracting particles (blue dots), the exact analytical RPA result (black dots), and the numerical eigenvalue RPA result (red line) with a delta function smearing length of ϵ = 70× 10^-4 fm^-1. Again we find excellent agreement between the exact and numerical RPA resummations. §.§ Dynamic structure functions Neutrino opacities are a key input to numerical simulations of core-collapse supernovae, proto-neutron star evolution, and neutron star mergers. Both neutral-current and charged-current weak reactions are important sources of neutrino opacity across a wide range of densities and temperatures. 
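Before specifying the structure functions, the two numerical ingredients just described, the Gaussian smearing of the discrete spectral sum and the analytic direct-channel-only RPA benchmark, can be sketched as follows. The coupling g, the mass m, the smearing width, and the unit bookkeeping are illustrative assumptions, and the spin/isospin matrix elements of the benchmark interaction are absorbed into g.

```python
import numpy as np

HBARC = 197.327   # MeV fm

def smeared_im_chi(omega_grid, omega_l, weights, eps=0.5):
    """Continuous Im χ(ω) from the discrete spectral sum: each eigenvalue ω_ℓ with
    weight w_ℓ = ⟨B|ℓ⟩² / ⟨ℓ|N^{-1}|ℓ⟩ is smeared by a Gaussian of width eps (MeV)."""
    om = np.asarray(omega_grid)[:, None]
    wl = np.asarray(omega_l)[None, :]
    gauss = np.exp(-(om - wl)**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
    return -np.pi * (gauss * np.asarray(weights)[None, :]).sum(axis=1)

def rpa_direct(chi0, q, g=5.0, m=700.0):
    """Direct-channel-only benchmark, χ_RPA = χ0 / (1 - V(q) χ0), with a Yukawa-type
    V(q) = g^2/(m^2+q^2).  If χ0 is in fm^-3 MeV^-1, the (ħc)^3 factor makes the
    product V(q) χ0 dimensionless."""
    v_chi = g**2 / (m**2 + q**2) * chi0 * HBARC**3
    return chi0 / (1.0 - v_chi)
```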
Matter effects on neutrino scattering and absorption cross sections on baryons are encoded in dynamical structure functions related to the imaginary part of nuclear response functions.

§.§.§ Neutral-current neutrino scattering

The double differential cross section for low-energy neutrinos to scatter in a non-relativistic gas of nucleons is given by

(1/V) d^2σ/(dcosθ dω) = [G_F^2/(4π^2)] (E_ν - ω)^2 [ c_V^2 (1 + cosθ) S_ρ(q,ω) + c_A^2 (3 - cosθ) S_σ(q,ω) ],

where S_ρ is the neutral-current density structure function and S_σ is the neutral-current spin structure function. The energy transfer is given by ω = E_ν - E_ν' and the momentum transfer is given by q⃗ = p⃗_ν - p⃗_ν' with magnitude q = √(E_ν^2 + E_ν'^2 - 2 E_ν E_ν' cosθ). The structure functions in Eq. (<ref>) are related to the imaginary parts of the associated response functions by

S(q,ω) = -2 Im χ(q,ω) / (1 - e^{-ω/T}).

§.§.§ Charged-current neutrino absorption

The double differential cross section for electron neutrino absorption is given by <cit.>

(1/V) d^2σ/(dcosθ dE_e) = [G_F^2 cos^2θ_c/(4π^2)] p_e E_e (1 - f_e(E_e)) [ (1 + cosθ) S_τρ(ω,q) + g_A^2 (3 - cosθ) S_τσ(ω,q) ],

where S_τρ is the charged-current density structure function and S_τσ is the charged-current spin structure function. The energy transfer is given by ω = E_ν - E_e and the momentum transfer is given by q⃗ = p⃗_ν - p⃗_e with magnitude q = √(E_ν^2 + E_e^2 - 2 E_ν E_e cosθ). The structure functions in Eq. (<ref>) are related to the imaginary part of the associated response functions by

S_τ(q,ω) = -2 Im χ(q,ω) / (1 - e^{-(ω + μ_n - μ_p)/T}),

where the detailed balance factor depends explicitly on the proton and neutron chemical potentials μ_p and μ_n.

§ RESULTS

§.§ Response functions in the mean field approximation

In Figure <ref> we show mean field effects on the charged-current density response function of nuclear matter for different choices of the nuclear potential: N3LO-414, N3LO-450, and N3LO-500. In the top panel, we consider beta-equilibrium nuclear matter at density n=0.002 fm^-3, temperature T=5 MeV, and a momentum transfer of q=15 MeV. In the bottom panel, we consider beta-equilibrium nuclear matter at density n=0.02 fm^-3, temperature T=8 MeV, and a momentum transfer of q=24 MeV. We find that mean field effects are larger for smaller values of the momentum-space cutoff. A similar effect was observed in the context of the nuclear equation of state <cit.> and the nuclear single-particle potential <cit.>. Namely, low-cutoff potentials are more perturbative and therefore generate more attraction at first order in perturbation theory, even though the sum of first- and second-order perturbation theory contributions to the nuclear equation of state or single-particle potential is similar. As argued in Section <ref>, the largest effect of mean field corrections on the imaginary part of the neutrino absorption charged-current response is to shift the strength by an amount U_p - U_n in the energy transfer ω. At the lower density, n=0.002 fm^-3, the mean field splitting is on the order of 1 MeV, while at the larger density, n=0.02 fm^-3, the mean field splitting is nearly 10 MeV for the N3LO-414 chiral nucleon-nucleon potential. However, such mean field splittings are still about a factor of 2 less than those generated from typical phenomenological mean field models <cit.>.
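As a schematic illustration of how the formulas above are combined in practice, the following sketch maps Im χ to a dynamic structure function through the detailed-balance factor and assembles the double-differential absorption cross section. The numerical constants and the approximation p_e ≈ E_e for relativistic electrons are simplifying assumptions; units of the output follow those chosen for the structure functions.

```python
import numpy as np

def structure_function(im_chi, omega, T, mu_n=None, mu_p=None):
    """Dynamic structure function from Im χ.
    Neutral current: S = -2 Im χ / (1 - exp(-ω/T)).
    Charged current: the detailed-balance factor also contains μ_n - μ_p."""
    shift = 0.0 if mu_n is None else (mu_n - mu_p)
    return -2.0 * im_chi / (1.0 - np.exp(-(omega + shift) / T))

def d2sigma_absorption(E_e, costheta, S_rho, S_sigma, f_e,
                       GF=1.166e-11, cos2_thc=0.947, gA=1.26):
    """Double-differential electron-neutrino absorption cross section per volume,
    following the charged-current formula above, with p_e ≈ E_e and G_F in MeV^-2."""
    prefac = GF**2 * cos2_thc / (4.0 * np.pi**2) * E_e * E_e * (1.0 - f_e)
    return prefac * ((1 + costheta) * S_rho + gA**2 * (3 - costheta) * S_sigma)
```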
§.§ Response functions in the random phase approximation The random phase approximation for the vertex function together with the Hartree-Fock approximation for the single-particle energies represents a conserving approximation that is guaranteed to preserve sum rules and the positivity of dynamical structure functions. In addition, it is able to capture the presence of collective oscillations such as the giant-dipole and Gamow-Teller resonances that are known to play a role in the response of nuclei <cit.>. In Figure <ref> we show a contour plot of the imaginary part of the neutrino-absorption charged-current spin response as a function of the energy ω and momentum q transfer for beta-equilibrium matter with density n=0.002 fm^-3 and temperature T=5 MeV. The top panel shows the imaginary part of the response including mean field (MF) corrections alone, while the bottom panel shows the combined effect of RPA correlations and mean fields (RPA + MF). We see that the MF response exhibits a relatively broad distribution already at low momentum transfers that peaks at a nearly constant energy ω≃ -2 MeV. In contrast, the RPA + MF response remains sharply peaked for longer, up to a momentum transfer q ≃ 0.04 fm^-1≃ 8 MeV, and peaks at a value ω≃ -1 MeV that increases slowly with q. The sharper structure of the RPA + MF response is indicative of a collective mode. This feature is even more evident in Figure <ref>, where we show the contour plot of the imaginary part of the neutrino-absorption charged-current spin response for beta-equilibrium matter with density n=0.02 fm^-3 and temperature T=8 MeV. Here the collective mode (now at positive energy) remains sharp up to a momentum transfer q≃ 0.2 fm^-1≃ 40 MeV. The shift in peak energy of the imaginary response from -7.5 MeV in the mean field approximation to 2 MeV in the RPA + MF approximation will have an important effect on electron neutrino absorption in dilute beta-equilibrium nuclear matter. The shift will push the outgoing electron energy to smaller values where Pauli blocking acts to reduce the available phase space, thereby suppressing the absorption cross section and increasing the mean free path. In the left panels of Figure <ref> we plot the neutral-current density (top) and spin (bottom) dynamic structure factors for beta equilibrium nuclear matter under the ambient conditions T=8 MeV, n=0.02 fm^-3, and momentum transfer of q=24 MeV. For both the neutral-current density and spin response functions, the mean field corrections play only a very minor role. Mean fields change slightly the proton and neutron densities for beta equilibrium matter, and the effective mass has only a small effect on the Fermi distribution functions and energy denominators. The energy shifts in the single-particle potentials are absorbed into redefinitions of the proton and neutron chemical potentials for the Fermi distribution functions, and the mean field shifts cancel in the response function energy denominators. The inclusion of RPA correlations, however, is very important, enhancing the density structure function and suppressing the spin structure function, a feature already observed <cit.> including first-order vertex corrections starting from a pseudopotential defined in terms of nucleon-nucleon scattering phase shifts. This behavior can be traced to the large neutron-neutron attraction in the ^1S_0 partial wave. 
In the right panels of Figure <ref> we plot the charged-current density (top) and spin (bottom) dynamic structure functions in beta-equilibrium nuclear matter under the ambient conditions T=8 MeV, n=0.02 fm^-3, and momentum transfer of q=24 MeV. Whereas mean fields drive absorption strength to lower energy transfers, RPA correlations significantly shift strength to larger energy transfers. This is due to the existence of giant dipole and Gamow-Teller collective modes in the isovector-density and isovector-spin channels. The combined effect of nuclear mean fields and RPA correlations is a redistribution of strength to energies above that of the noninteracting charged-current response functions. We conclude that the inclusion of vertex corrections is crucial for an accurate description of nuclear matter response functions at and above neutrinosphere densities. §.§ Energy-dependent neutrino absorption cross section Integrating Eq. (<ref>) over the scattering angle θ, we obtain the differential energy-dependent neutrino absorption cross section 1/Vdσ/dE_e = G_F^2 cos^2 θ_c/4π^2 p_e E_e (1-f_e(E_e)) ×∫ dcosθ [(1+cosθ)S_τρ(ω,q) + g_A^2(3-cosθ)S_τσ(ω,q)], where S_τρ and S_τσ are the density and spin dynamic structure functions. In Figures <ref> and <ref> we plot the differential cross section for electron neutrino absorption, keeping only the spin dynamic structure function in Eq. (<ref>) for beta-equilibrium matter at two densities n = 0.002 fm^-3 and n = 0.02 fm^-3. The incoming neutrino energy is E_ν = 3 T. We show the cross section assuming noninteracting nucleons (blue dotted line), the cross section keeping only nuclear mean fields at the Hartree-Fock level (blue dashed line), and finally the cross section including RPA correlations and Hartree-Fock mean fields (black solid line). The nuclear force is taken to be the N3LO-414 chiral two-body interaction. One finds that the inclusion of nuclear mean fields enhances neutrino absorption since the response is shifted to lower energy transfers ω and, therefore, higher electron energies for which the Pauli suppression factor is reduced. However, RPA correlations provide a stronger shift in the response function toward higher energy transfers, leading to a reduced absorption cross section whose peak in electron energy can be shifted below that of the non-interacting Fermi gas. §.§ Mean free path We now present and discuss results for the mean free path of electron and anti-electron neutrinos in the vicinity of the neutrinosphere due to charged current interactions. Differences between these mean free paths directly impact several key observable aspects of supernovae and neutron star mergers including dynamics, nucleosynthesis, and neutrino oscillations. The inverse of the electron and anti-electron neutrino mean free path due to their charged current interactions is obtained by integrating the differential absorption cross section per unit volume over the final-state lepton energy, e.g., 1/λ= ∫1/Vdσ/dE_e dE_e . In Figures <ref> and <ref> we plot the inverse mean free paths of electron neutrino and antineutrino absorption as a function of the incident energy for two sets of ambient conditions (n,T)=(0.002 fm^-3,5 MeV) and (0.02 fm^-3,8 MeV). The dynamic structure functions are computed from the associated charged-current spin response functions in three approximations. First, the inverse neutrino mean free paths neglecting interactions between nucleons is shown by the dotted curves. 
Second, we show as the dashed lines the effect of introducing proton and neutron mean fields in the Hartree-Fock approximation employing the N3LO-414 chiral nucleon-nucleon interaction. Finally, the solid curves show the combined effects of nucleon mean fields and vertex corrections obtained in the random phase approximation. We find that for the N3LO-414 potential, the mean-field effects significantly enhance the electron neutrino absorption cross-section across all energies considered in agreement with earlier studies. However, in contrast, RPA correlations redistribute strength to the vicinity of the positive-energy collective mode. This significantly reduces the outgoing electron energy into a region where Pauli blocking suppresses the reaction. This redistribution of strength due to a broad collective mode shifts the response to higher energy and undoes the enhancement of the inverse mean free path due to mean-field effects. Remarkably, correlations suppress the electron neutrino absorption cross-sections over the entire energy range and are especially large for low-energy neutrinos. In Figure <ref> we show the imaginary part of the spin response function for antineutrino absorption in beta-equilibrium nuclear matter at density n=0.02 fm^-3 and temperature T = 8 MeV. For electron antineutrinos, nuclear mean fields shift the response to higher energies. The absorption cross-section is therefore greatly reduced because the threshold energy to convert protons into neutrons is increased to such an extent that there is little phase space available for the reaction. As observed in Figure <ref>, the corresponding mean free path increases dramatically at low energies in agreement with earlier work. However, the presence of the negative-energy collective mode due to RPA correlations, shown in the lower panel of Figure <ref>, lowers the energy required for the process. It provides a reaction pathway even at low antineutrino energies, thereby increasing the cross-section. The reaction at low energies can be viewed as a process involving the absorption of a positively charged collective mode by the anti-electron neutrino to produce a positron in the final state. At higher antineutrino energies, the strong coupling to the negative-energy collective mode weakens the absorption cross section through the detailed balance factor ( 1 - e^-(ω + μ_p - μ_n) /T )^-1. The response function is also narrowly peaked in this region, which limits the available phase space for final-state positron energies. Overall, at high energies the antineutrino absorption total cross section is reduced relative to both the noninteracting and mean field approximations as seen in Figure <ref>. § CONCLUSION We have developed the framework to calculate the neutrino scattering and absorption rates in a warm neutron-rich matter that consistently includes mean-field effects and correlations through the Random Phase Approximation (RPA). We employ nuclear interactions derived from chiral effective field theory and, for the first time, include direct and exchange contributions to the mean field energies and RPA vertex functions. The combination of Hartree-Fock self energies and RPA vertex corrections to the response function constitutes a “conserving approximation” that guarantees thermodynamic consistency and the positivity of dynamic structure functions. The integral equation for the RPA vertex function was discretized, leading to a matrix eigenvalue problem that could be solved through standard diagonalization. 
We find that including RPA correlations produces a broad collective mode that shifts the strength of the charged-current neutrino response to higher energy transfer. For a fixed incident neutrino energy, this lowers the outgoing electron energy into a region of Pauli-blocked suppression, thereby reducing the total cross-section. In contrast, electron-antineutrino absorption is enhanced at low energies because the same broad collective mode lowers the threshold energy needed to convert protons into neutrons. For higher antineutrino energy, this enhancement is absent because of kinematic constraints. Our consistent treatment of nuclear mean fields and RPA correlations has important implications for charged current reactions near the supernova and neutron star merger neutrinospheres. When only mean-field effects are included, we confirm the results of previous studies that found a large enhancement of electron-neutrino absorption cross-section and a reduction in antineutrino absorption cross-section. However, correlations included through RPA qualitatively change the picture. As discussed in the previous section, the absorption cross-sections for electron neutrinos are reduced over the entire range of relevant energies. This would imply an increased luminosity and average energy of electron neutrinos emitted in supernovae and mergers. For anti-electron neutrinos, the absorption cross-section is enhanced at low energy and suppressed at high energy. This implies that the lepton number flux carried by neutrinos can be strongly energy-dependent when RPA correlations are included. Since neutrino oscillations and nucleosynthesis are especially sensitive to the neutrino lepton number flux and its energy dependence <cit.>, our findings will likely impact both. Our finding that vertex corrections suppress differences between ν_e and ν̅_e absorption rates in the vicinity of the neutrinosphere implies that the emerging spectra of ν_e and ν̅_e will be more similar than previously expected <cit.>. This will likely inhibit the production of neutron-rich r-process nuclei in the neutrino driven ejecta from core-collapse supernovae and dynamical ejecta in neutron star mergers (for recent reviews see <cit.>). However, collective neutrino flavor oscillations, which are also sensitive to the energy dependence of the lepton number flux, impact the final spectra at the nucleosynthesis site (see reviews <cit.> and references therein). To gauge if corrections to mean free paths calculated in this work can alter nucleosynthesis it is critical to include their effect on flavor transformation by modifying the two-point Feynman diagrams for electron neutrino absorption and emission processes in Quantum Kinetic Equation (QKE) treatments <cit.>, and collisional instabilities <cit.>. Further, larger mean free paths for both electron and anti-electron neutrinos at higher energy imply an increase in the total luminosity and average energy. This will likely increase the net neutrino energy deposition behind the shock in core-collapse supernova and aid the explosion. The results of this work can be tabulated and included in simulations of core-collapse supernovae, neutron star mergers, and neutron star cooling. Finally, we mention the limitations of our study and identify directions for future work. First, although our analysis provides a consistent treatment of excitations above the Hartree-Fock ground state, it neglects two-body currents and correlations beyond one-particle-one-hole RPA. 
At low densities, we expect both two-body currents and two-particle-two-hole excitations to be small because they appear at higher order in the density expansion, but more work is needed to assess their importance at densities of relevance to the neutrinosphere. It is well known that the interplay between short-range correlations and two-body currents, especially in the axial-vector channel, plays an important role in nuclear weak interactions. In addition, error estimates for neutrino interaction rates require systematic order-by-order calculations. These quantitative issues warrant further work before one can draw definite conclusions about ν_e and ν̅_e charged-current reactions near the neutrinosphere and their emergent spectra. § ACKNOWLEDGEMENT The work of E. Shin and J. W. Holt is supported by the National Science Foundation under Grant Nos. PHY1652199 and PHY2209318. Portions of the research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. The work of S. R. was supported by the U.S. DOE under Grant No. DE-FG02-00ER41132 and by the National Science Foundation's Physics Frontier Center: The Network for Neutrinos, Nuclear Astrophysics, and Symmetries.
§ REFERENCES
A. Burrows and R. F. Sawyer, Phys. Rev. C 59, 510 (1999).
E. O'Connor, Astrophys. J. Suppl. 219, 24 (2015).
T. Melson, H.-T. Janka, R. Bollig, F. Hanke, A. Marek, and B. Müller, Astrophys. J. Lett. 808, L42 (2015).
L. F. Roberts, C. D. Ott, R. Haas, E. P. O'Connor, P. Diener, and E. Schnetter, Astrophys. J. 831, 98 (2016).
J. A. Pons, S. Reddy, M. Prakash, J. M. Lattimer, and J. A. Miralles, Astrophys. J. 513, 780 (1999).
L. F. Roberts and S. Reddy, Phys. Rev. C 95, 045807 (2017).
Y. Sekiguchi, Class. Quant. Grav. 27, 114107 (2010).
S. Wanajo, Y. Sekiguchi, N. Nishimura, K. Kiuchi, K. Kyutoku, and M. Shibata, Astrophys. J. Lett. 789, L39 (2014).
A. Endrizzi, A. Perego, F. M. Fabbri, L. Branca, D. Radice, S. Bernuzzi, B. Giacomazzo, F. Pederiva, and A. Lovato, Eur. Phys. J. A 56, 15 (2020).
K. Sumiyoshi, S. Fujibayashi, Y. Sekiguchi, and M. Shibata, Astrophys. J. 907, 92 (2021).
M. Cusinato, F. M. Guercilena, A. Perego, D. Logoteta, D. Radice, S. Bernuzzi, and S. Ansoldi, Eur. Phys. J. A 58, 99 (2022).
R. F. Sawyer, Phys. Rev. C 40, 865 (1989).
S. Reddy, M. Prakash, and J. M. Lattimer, Phys. Rev. D 58, 013009 (1998).
S. Reddy, M. Prakash, J. M. Lattimer, and J. A. Pons, Phys. Rev. C 59, 2888 (1999).
G. Martínez-Pinedo, T. Fischer, A. Lohs, and L. Huther, Phys. Rev. Lett. 109, 251104 (2012).
L. F. Roberts, S. Reddy, and G. Shen, Phys. Rev. C 86, 065803 (2012).
A. Pastore, M. Martini, V. Buridon, D. Davesne, K. Bennaceur, and J. Meyer, Phys. Rev. C 86, 044308 (2012).
N. Iwamoto and C. J. Pethick, Phys. Rev. D 25, 313 (1982).
A. Burrows and R. F. Sawyer, Phys. Rev. C 58, 554 (1998).
C. J. Horowitz and A. Schwenk, Phys. Lett. B 642, 326 (2006).
C. J. Horowitz, O. L. Caballero, Z. Lin, E. O'Connor, and A. Schwenk, Phys. Rev. C 95, 025801 (2017).
P. F. Bedaque, S. Reddy, S. Sen, and N. C. Warrington, Phys. Rev. C 98, 015802 (2018).
E. Rrapaj, J. W. Holt, A. Bartl, S. Reddy, and A. Schwenk, Phys. Rev. C 91, 035806 (2015).
S. Bacca, K. Hally, M. Liebendorfer, A. Perego, C. J. Pethick, and A. Schwenk, Astrophys. J. 758, 34 (2012).
A. Bartl, R. Bollig, H.-T. Janka, and A. Schwenk, Phys. Rev. D 94, 083009 (2016).
C. Hanhart, D. R. Phillips, and S. Reddy, Phys. Lett. B 499, 9 (2001).
Y. Lim and J. W. Holt, Phys. Rev. C 95, 065805 (2017).
T. Carreau, F. Gulminelli, and J. Margueron, Eur. Phys. J. A 55, 188 (2019).
S. Gandolfi, J. Carlson, and S. C. Pieper, Phys. Rev. Lett. 106, 012501 (2011).
M. Buraczynski and A. Gezerlis, Phys. Rev. Lett. 116, 152501 (2016).
G. Baym and L. P. Kadanoff, Phys. Rev. 124, 287 (1961).
C. G. Harris and W. A. B. Evans, J. Phys. C: Solid State Phys. 11, 4447 (1978).
D. R. Entem and R. Machleidt, Phys. Rev. C 68, 041001 (2003).
L. Coraggio, A. Covello, A. Gargano, N. Itaco, D. R. Entem, T. T. S. Kuo, and R. Machleidt, Phys. Rev. C 75, 024311 (2007).
E. Marji, A. Canul, Q. MacPherson, R. Winzer, C. Zeoli, D. R. Entem, and R. Machleidt, Phys. Rev. C 88, 054002 (2013).
J. W. Holt and N. Kaiser, Phys. Rev. C 95, 034326 (2017).
C. Wellenhofer, J. W. Holt, N. Kaiser, and W. Weise, Phys. Rev. C 89, 064009 (2014).
T. R. Whitehead, Y. Lim, and J. W. Holt, Phys. Rev. Lett. 127, 182502 (2021).
J. W. Holt, N. Kaiser, and W. Weise, Nucl. Phys. A 876, 61 (2012).
J. W. Holt, N. Kaiser, G. A. Miller, and W. Weise, Phys. Rev. C 88, 024614 (2013).
J. W. Holt, N. Kaiser, and G. A. Miller, Phys. Rev. C 93, 064603 (2016).
E. Rrapaj, A. Roggero, and J. W. Holt, Phys. Rev. C 93, 065801 (2016).
Y.-Z. Qian and S. E. Woosley, Astrophys. J. 471, 331 (1996).
A. Arcones and F.-K. Thielemann, J. Phys. G: Nucl. Part. Phys. 40, 013201 (2012).
T. Kajino, W. Aoki, A. B. Balantekin, R. Diehl, M. A. Famiano, and G. J. Mathews, Prog. Part. Nucl. Phys. 107, 109 (2019).
J. J. Cowan, C. Sneden, J. E. Lawler, A. Aprahamian, M. Wiescher, K. Langanke, G. Martínez-Pinedo, and F.-K. Thielemann, Rev. Mod. Phys. 93, 015002 (2021).
S. Chakraborty, R. Hansen, I. Izaguirre, and G. Raffelt, Nucl. Phys. B 908, 366 (2016).
I. Tamborra and S. Shalgar, Ann. Rev. Nucl. Part. Sci. 71, 165 (2021).
A. Vlasenko, G. M. Fuller, and V. Cirigliano, Phys. Rev. D 89, 105004 (2014).
S. A. Richers, G. C. McLaughlin, J. P. Kneller, and A. Vlasenko, Phys. Rev. D 99, 123014 (2019).
L. Johns and Z. Xiong, Phys. Rev. D 106, 103029 (2022).
L. Johns, Phys. Rev. Lett. 130, 191001 (2023).
http://arxiv.org/abs/2306.06181v1
20230609181151
Quantum fluctuations spatial mode profiler
[ "Charris Gabaldon", "Pratik Barge", "Savannah L. Cuozzo", "Irina Novikova", "Hwang Lee", "Lior Cohen", "Eugeniy E. Mikhailov" ]
quant-ph
[ "quant-ph" ]
AIP/123-QED ]Quantum fluctuations spatial mode profiler Physics Department, William & Mary, Williamsburg, Virginia, USA Hearne Institute for Theoretical Physics and Department of Physics & Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA Physics Department, William & Mary, Williamsburg, Virginia, USA Physics Department, William & Mary, Williamsburg, Virginia, USA Hearne Institute for Theoretical Physics and Department of Physics & Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA Department of Electrical, Computer and Energy Engineering, University of Colorado Boulder, Boulder, 80309, CO, USA Physics Department, William & Mary, Williamsburg, Virginia, USA The spatial mode is an essential component of an electromagnetic field description, yet it is challenging to characterize it for optical fields with low average photon number, such as in a squeezed vacuum. We present a method for reconstruction of the spatial modes of such fields based on the homodyne measurements of their quadrature noise variance performed with a set of structured masks. We show theoretically that under certain conditions we can recover individual spatial mode distributions by using the weighted sum of the basis masks, where weights are determined using measured variance values and phases. We apply this approach to analyze the spatial structure of a squeezed vacuum field with various amount of excess thermal noise generated in Rb vapor. [ Eugeniy E. Mikhailov July 31, 2023 ======================== § INTRODUCTION Transverse spatial distribution is an important element of the description of any classical or quantum electromagnetic field. For many applications it is essential to restrict light propagation within a single, well-defined spatial mode. However, the multimode nature of light can be desirable in fields such as optical information multiplexing <cit.> or imaging <cit.>. In either case the ability to identify and characterize the spatial mode composition of an electromagnetic field becomes a helpful tool. Several solutions have been recently proposed for classical optical fields, in which specially designed dispersive elements help spatially separate various modes (often in Laguerre-Gauss or Hermit-Gauss basis) into uniquely positioned spots <cit.>. The situation becomes significantly more challenging when the multimode optical field consists primarily of squeezed vacuum quantum fluctuations, since there is no accompanying strong classical field to tune to a selected mode. In this case identification of individual modes becomes akin looking for a black cat in a dark room. Traditional quantum noise detection requires a strong local oscillator (LO) to amplify weak fluctuations to the detectable level. However, this method relies on perfect spatial overlap of the LO and the unknown quantum probe <cit.>, and thus requires a priori knowledge of the quantum field's transverse distribution, or the perfect shape of the LO need to be found via the set of optimization measurements. The situation becomes more complicated if the quantum noise is spatially multimode, such as a mixture of, e.g., squeezed vacuum and thermal light. In some cases, e.g., if the squeezed modes are not overlapping, it is possible to obtain the information about their number, shapes and squeezing parameters by reducing the size of the LO mode <cit.> or sampling nearby pixels correlations <cit.> as it was demonstrated for twin-beam squeezing. 
Multimode quantum field is a useful resource for quantum imaging, as the information about spatial transmission masks can be obtained by shaping the LO <cit.> or analyzing noise correlation for each camera pixel <cit.>. However, these measurements rely on the relative modification of the quantum probe noise, and may not be useful for the diagnostic of the original multimode probe itself. The Bloch-Messiah reduction <cit.> offers a promising method to extract information about squeezing modes of a multimode optical field. It was shown to recover the set of quantum eigenmodes of the frequency comb <cit.> and parametric amplifiers via the diagonalization of the measurement basis. However, this is a data intensive procedure - the required number of measured covariances is proportional to the square of the measurement basis elements. Here we propose a protocol for characterizing and reconstructing the spatial profiles of single- and two mode quantum fluctuations with no prior information. While not as general as the Bloch-Messiah reconstruction, our method is significantly simpler since the required number of measurements scales linearly with the number of basis elements (spatial pixels). In particular, we reconstruct a transverse distribution of a single squeezed vacuum mode, and then expand the formalism to describe the mixture of the squeezed and thermal modes. Our method is based on single pixel imaging techniques <cit.> adapted to the quantum domain and combining it with homodyne detection <cit.>. Full wavefront information about the phase and amplitude in each point is extracted from the quantum quadrature variance measurements. In our experimental reconstruction we use a quadrature squeezed vacuum source based on the PSR nonlinearity in Rb atoms <cit.>, and trace the modification of the output quantum state from mostly single-mode squeezed vacuum to an admixture of squeezed vacuum and excess thermal noise as the temperature of Rb vapor increases <cit.>. However, our method is general and can be adopted for wide range of squeezed light sources and wavelengths. The general idea of our quantum noise mode profiler is inspired by classical single-pixel imaging <cit.> combined with the homodyne detection <cit.>. The principle difference is that we detect the quadrature noise variance, rather than average light power, of the optical field after a set of spatial transmission masks, as illustrated in Fig. <ref>.(a). For each squeezed mode the mask changes its quantum noise by reducing their quadrature fluctuations as well as their squeezing angle. We trace these changes for each mask H_m using the homodyne detector. By analyzing the variance as a function of the LO phase, we can find minimum and maximum variance V^±_m as well as the relative phase θ_m [Fig. <ref>(b,c)] and use them to calculate the weights to reconstruct the original signal spatial profile using the masks [Fig. <ref>(d)]. Notably, the same procedure will work if the masks are placed on the LO, rather than the signal field. § QUADRATURE VARIANCE CALCULATIONS FOR MULTIMODE QUANTUM FLUCTUATIONS In this section we analytically calculate the quadrature variance of a Gaussian optical quantum state with multiple spatial modes overlapped with a LO in a balanced homodyne measurement setup and validate the mode reconstruction method from the measured noise values. To calculate the homodyne detection output in the multimode case, we need a powerful formalism that can efficiently handle the multimode complexity. 
Fortunately, we can model the signal quantum field as N-mode Gaussian states – continuous variable (CV) states with Gaussian Wigner function <cit.>: W(x) = exp[-1/2(x̂-⟨x̂⟩)^T V^-1(x̂-⟨x̂⟩)]/(2π)^N √(det V). These states are completely determined by their first two moments, the mean vector x̂ = (q̂_1,p̂_1,...q̂_N,p̂_N)^T and covariance matrix V_jk = 1/2⟨{ (x̂_j-⟨x̂⟩_j),(x̂_k-⟨x̂⟩_k) }⟩, where {*,*} denotes anticommutator, q̂_k = 1/√(2)(â^†_k+â_k) and p̂_k=1/√(2)i(â^†_k-â_k) are the quadrature operators associated with the kth mode defined via standard creation (â^†_k) and annihilation (â_k) operators. Diagonal elements of the covariance matrix represent the quadrature variance of the field modes. For example, a signal field consisting of two squeezed vacuum modes with different spatial profiles is represented by a 4×4 covariance matrix V^(in) = ([ v_1 0; 0 v_2 ]) with k ∈{1,2} and v_k = [ e^r_kcos^2ϕ_k+ e^-r_ksin^2ϕ_k [e^-r_k-e^r_k]cosϕ_ksinϕ_k; [e^-r_k-e^r_k]cosϕ_ksinϕ_k e^r_ksin^2ϕ_k+ e^-r_kcos^2ϕ_k, ] where r_k and ϕ_k are the squeezing parameter and squeezing angle for each mode. Next, we need to describe the transformation of the quantum state after the mask and predict the output of the homodyne detector. We can model these optical elements using two symplectic matrices. Matrix B models a mask as a beam splitter with transmission T, and matrix R represents the single mode phase rotation θ: B = [ √(T) 0 √(1-T) 0; 0 √(T) 0 √(1-T); √(1-T) 0 -√(T) 0; 0 √(1-T) 0 -√(T) ], R = [ cosθ sinθ 0 0; -sinθ cosθ 0 0; 0 0 1 0; 0 0 0 1 ]. The first diagonal matrix element of the final covariance matrix <cit.> provides the value of the the output noise quadrature at the output of the homodyne detection V(θ): V(θ)=V^(out)_11=R_1jB_jkV^(in)_klB^†_lmR^†_m1=R_1jR_1mB_jkB_mlV^(in)_kl , where we connect the initial and final covariance matrices by applying the transformation with matrix multiplication. Taking into account the positions of the nonzero elements of the matrices involved, indices j and m can be only 1 and 2 and consequently indices k and l can be 1 and 3 or 2 and 4. This simplifies the matrix product to only 8 nonzero terms: V(θ)=∑_j,m,k,l R_1jR_1m T δ_jkδ_mlV^(in)_kl + ∑_j,m,k,lR_1jR_1m(1-T)δ_j+2,kδ_m+2,lV^(in)_kl The beam splitter matrix B accounts for the transformation of each of the squeezed modes by the mask. The transformation (matrix) coefficients are exactly the overlap between the input (u_k(x,y)) and output (H(x,y)) modes which is defined by the integral: O_k = ∫ u_k(x,y) H(x,y) dx dy. In this case of two modes, T=| O_1|^2 , 1-T=| O_2|^2, and the final expression for the two single mode squeezing signal variance after the mask becomes: V(θ)= 1 + ∑_k = 1^N=2 | O_k|^2 [e^r_kcos^2(θ-ϕ_k) + e^-r_ksin^2(θ-ϕ_k)-1]. This result can be easily generalized to N modes, by applying N-way beam splitter transformation and absorbing the phases, induced by the transformation, into the squeezing angles. Next, we can further extend the treatment into the squeezed thermal states. The covariance matrix is changed by adding additional factors 2n̅_ th,k to all diagonal terms, where n̅_ th,k is the average thermal photon number in the kth mode. Combining all this into Eq. <ref> we get: V(θ)= 1 + ∑_k = 1^N | O_k|^2 [(e^r_k+2n̅_ th,k)cos^2(θ-ϕ_k) + + (e^-r_k+2n̅_ th,k)sin^2(θ-ϕ_k)-1]. 
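As a concrete numerical check of this algebra, the short sketch below builds the 4x4 covariance matrix of two single-mode squeezed vacua, propagates it through the beam-splitter ("mask") matrix B and the LO phase rotation R, and compares the resulting (R B V^(in) B^T R^T)_11 element with the closed-form V(θ) above. The squeezing parameters, angles, and mask transmission are arbitrary illustrative values, and the sense of the phase rotation is fixed so that the LO phase enters as (θ - ϕ), matching the closed-form expression; the opposite convention simply flips the sign of ϕ.

import numpy as np

def sq_block(r, phi):
    """2x2 covariance block of a squeezed vacuum mode (vacuum variance = 1)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[np.exp(r)*c*c + np.exp(-r)*s*s, (np.exp(-r) - np.exp(r))*c*s],
                     [(np.exp(-r) - np.exp(r))*c*s,   np.exp(r)*s*s + np.exp(-r)*c*c]])

def bs_matrix(T):
    """Symplectic beam-splitter ("mask") matrix with transmission T."""
    t, rfl = np.sqrt(T), np.sqrt(1.0 - T)
    return np.array([[t, 0, rfl, 0],
                     [0, t, 0, rfl],
                     [rfl, 0, -t, 0],
                     [0, rfl, 0, -t]])

def rot_matrix(theta):
    """LO phase rotation of the first mode; sign chosen so theta enters as (theta - phi)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

# Toy parameters (chosen arbitrarily for illustration)
r1, phi1 = 0.8, 0.0
r2, phi2 = 0.4, 0.6
T_mask   = 0.7                      # |O_1|^2; the second mode carries 1 - T_mask

Vin = np.zeros((4, 4))
Vin[:2, :2] = sq_block(r1, phi1)
Vin[2:, 2:] = sq_block(r2, phi2)

def V_numeric(theta):
    M = rot_matrix(theta) @ bs_matrix(T_mask)
    return (M @ Vin @ M.T)[0, 0]

def V_closed(theta):
    out = 1.0
    for w, r, phi in [(T_mask, r1, phi1), (1.0 - T_mask, r2, phi2)]:
        out += w * (np.exp(r) * np.cos(theta - phi)**2
                    + np.exp(-r) * np.sin(theta - phi)**2 - 1.0)
    return out

for th in np.linspace(0, np.pi, 7):
    assert abs(V_numeric(th) - V_closed(th)) < 1e-10
print("matrix propagation matches the closed-form V(theta) at all test angles")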
It is easy to see that the results in Eqs.<ref> and <ref> can be written in the same general form: V(θ) = 1 + ∑_k | O_k|^2 ( V^+_k cos^2(θ̃_k) + V^-_ksin^2(θ̃_k) -1 ), θ̃_k = θ - ϕ_k - Arg O_k, if we identify V_k^+=e^r_k+2n̅_ th,k and V_k^-=e^-r_k+2n̅_ th,k. § UNKNOWN SPATIAL MODE RECONSTRUCTION VIA QUADRATURE NOISE MEASUREMENTS Now we are ready to discuss the reconstruction of the unknown signal mode profile by measuring its quadrature variance after a complete set of transmission masks H_m(x,y). In general, the signal may consist of multiple spatial modes, each described by u_k(x,y). For the purpose of this discussion, we will assume that the quantum fluctuations of each modes are defined by its maximum and minimum quadrature noise V^±_k (normalized to the vacuum state noise), and θ_k is the squeezing angle with respect to the local oscillator, that we assume to be a single-mode coherent field with the spatial distribution u_LO(x,y). To gain information about the spatial profile of the input field we modify the signal field by passing it through various mask H_m(x,y) and measuring the corresponding quadrature variance V_m(θ): V_m(θ) = 1 + ∑_k | O_km|^2 ( V^+_k cos^2(θ - θ_k) + V^-_ksin^2(θ - θ_k) -1 ) , where O_km = ∫_A_d u_LO^*(x,y) u_k(x,y) H_m(x,y) dx dy . Note that such a mask can instead be introduced into the LO path as it would not change above overlap parameter definition except the mask would appear as complex conjugated. In most situations, we do not have information about either spatial distribution or noise statistics of either of the participating modes, and need to extract them from the measurements. This may not be possible under general conditions, since the contributions of all modes can be combined into one simple functional dependence: V_m(θ) = V^+_m cos^2(θ - θ_m) + V^-_msin^2(θ - θ_m) where V^+_m and V^-_m are the maximum and minimum quadratures detected for the m_th mask respectively and θ_m is some global mask dependent phase shift. While these parameters are relatively simple to extract from experimental data (see Fig. <ref>), the system of measurements is under-constrained, and we generally do not have enough information to independently extract V_k^±, O_km, θ_k, and θ_km. Nevertheless, below we consider several important cases, for which we can obtain the quantum mode profiles. §.§ Reconstruction of a spatial mode for a single-mode squeezed vacuum Let's assume that the input state consists of a squeezed vacuum field in a single unknown spatial mode. In this case, Eq. <ref> simplifies to V_m(θ) = 1 + | O_sq_m|^2 ( V^+ cos^2(θ - θ_m) + V^- sin^2(θ - θ_m) -1 ) here we dropped the mode index k=1. It is easy to see that minimum and maximum values of the measured quadrature variance are equal to V_m^± =| O_sq_m|^2 (V^±-1)+1, and therefore we can extract the value of overlap parameter as O_sq_m ∝ e^i θ_m√(V^+_m -V^-_m) where we omitted the factor 1/√(V^+ - V^-) since it is a common normalization factor for any mask H_m. We can use well established single pixel camera methods for the intensity<cit.> or field<cit.> spatial distribution reconstruction modified to recover the squeezed field multiplied by the LO field profile, which we call the shaped squeezed field: U_ sq(x,y) = u_LO^*(x,y) u_ sq(x,y) which is the main interest of this manuscript. We can see that projection of the shaped squeezed field to a mask is given by O_sq_m = ∑_p H_m(p) U_ sq(p) , where O_sq_m is the weight of the mth mask in the reconstruction of the shaped squeezed field. 
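As a small illustration of this last step, the snippet below converts a set of per-mask homodyne results (V^+_m, V^-_m, θ_m) into the complex mask weights O_sq_m ∝ e^{iθ_m} sqrt(V^+_m - V^-_m). The numbers are synthetic stand-ins for measured noise levels, the common 1/sqrt(V^+ - V^-) normalization is dropped as in the text, and the residual ±1 sign ambiguity is left to the complementary-mask procedure discussed next.

import numpy as np

# Synthetic per-mask homodyne results (illustrative numbers only):
# maximum/minimum quadrature noise normalized to shot noise, and phase.
V_plus  = np.array([3.10, 1.90, 2.40, 1.20])
V_minus = np.array([0.70, 0.95, 0.85, 0.99])
theta_m = np.array([0.00, 0.35, -0.20, 0.80])   # radians, relative to a blank mask

# Complex mask weights, up to the global 1/sqrt(V+ - V-) normalization and
# a per-mask +-1 sign that is fixed later with complementary-mask data.
O_sq = np.exp(1j * theta_m) * np.sqrt(V_plus - V_minus)

for m, o in enumerate(O_sq):
    print(f"mask {m}: |O| = {abs(o):.3f}, arg(O) = {np.angle(o):+.3f} rad")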
This projection can be written in matrix notation as O⃗_sq^T = 𝐇 U⃗_sq^T, where we move from the continuous two-dimensional xy representation (Eq. <ref>) to the pixel basis (p) and unfold the 2D space into a single column tracking pixel location. To have a fully defined system, we need as many independent mask measurements as there are sampled pixels. The rest is just linear algebra. The shaped squeezed field can be calculated from the measurements as U⃗_sq^T = 𝐇^-1 O⃗_sq^T. Here the rows of the matrix 𝐇 consist of the pixel representations of the masks. If 𝐇^T = 𝐇^-1, as is the case for Hadamard masks, the above relation simplifies to U⃗_sq^T = 𝐇^T O⃗_sq^T = ∑_m H^T_m O_sq_m. One potential obstacle is that the mask overlaps in Eq. <ref> are determined only up to a ±1 factor. This ambiguity is resolved by measuring a complementary mask shape 1-H_m, with the overlap with the unity mask serving as the reference (O_r). A mask overlap added to that of its complementary mask must equal the reference overlap for any mask, which constrains the sign: a simple comparison of the possible permutations of ±1 multipliers for the mask and its complement yields the correct sign. Overall, this provides a method to obtain the shaped squeezed field u_LO^*(x,y) u_sq(x,y), up to an overall numerical normalization factor, for a single-squeezed-mode state. §.§ Mode decomposition reconstruction for thermal and squeezed vacuum modes Now we consider an input state that is a combination of one squeezed mode (sq subindex) and one thermal mode (th subindex). In this case we can use Eq. <ref> to calculate the expected quadrature variance: V_m(θ) = 1 + | O_th_m|^2 (V_th -1) + | O_sq_m|^2 ( V^+ cos^2(θ - θ_m) + V^- sin^2(θ - θ_m) -1 ). Note that the variance of the thermal state (V_th) does not depend on the quadrature angle, and thus its contribution is phase-independent. This equation obeys the general form of Eq. <ref>. Thus we can easily extract V_m^± and θ_m; however, there is not enough information to find O_sq_m, O_th_m, V^±, and V_th from just three observables. A large thermal mode shifts the observed quantum noise upward and dominates it. Equation <ref> shows, however, that to obtain the squeezed mode overlap we only need to track the noise contrast (the difference between the maximum and minimum noise), as shown in Eq. <ref>, and this remains true even in the presence of a strong thermal mode. To reconstruct the shaped squeezed mode U_sq, we can therefore use exactly the same formalism as in the single-squeezed-mode case above. Moreover, we assume that the thermal mode is much noisier than shot noise, i.e., V_th ≫ 1, and consequently that the thermal mode variance is much larger than the squeezed quadrature variance, i.e., V_th ≫ V^-. With this assumption, | O_th_m|^2 ≈ V^-_m, where we again neglect the common normalization factor 1/V_th. We can then reconstruct the intensity overlap of the local oscillator with the thermal mode, i.e., the shaped thermal field intensity | u^*_LO(p) u_th(p) |^2 = ∑_m H_m(p) | O_th_m|^2. Here we use the fact that the variance of the thermal mode is proportional to its intensity, and that this relationship does not depend on the loss of the system. This allows us to generalize the single-pixel detector intensity formalism <cit.>. § EXPERIMENTAL REALIZATION The experimental apparatus used to illustrate our method is depicted in Fig. <ref>. The pump laser beam has an input power of 7.3 mW at the entrance of the Rb cell and a radius of 60 μm at the focus (at the center of the cell).
We use a strong linearly polarized pump tuned to the 5S_1/2 F=2→ 5P_1/2 transition of the ^87Rb atoms to generate squeezed vacuum field in the orthogonal polarization via polarization self-rotation (PSR) effect <cit.>. The output squeezed vacuum field is the input state to the quantum mode spatial profiler, as previous research indicated that this field may contain several squeezed or thermal modes <cit.>. For the measurements we reuse the pump field as a LO for the homodyning balanced photodiode detector (BPD) which measures quadrature fluctuations in the squeezed field. We use an interferometer consisting of two polarizing beam splitters (PBS) and two mirrors (one of which is mounted on a PZT transducer) to introduce the controllable phase shift (θ) between the LO and squeezed field. We use a phase-only liquid crystal spatial light modulator (SLM, model Meadowlark Optics PDM512-0785). We take advantage of the polarization dependence of the SLM to impose spatial masks only on the squeezed field, without affecting the local oscillator. This arrangement is crucial to reduce the effect of the temporal common phase flicker due to the liquid crystal driving circuit. Since both optical fields propagate and bounce off the SLM together, they see the SLM phase flicker as a common phase which cancels out in the measurement. To introduce a field amplitude mask, we apply a blazing diffraction grating pattern with different modulation depth <cit.> and select its zeroth order. This way we can controllably apply “on” or “off” patterns of the Hadamard mask basis set to shape the squeezed field. Technically, we need masks with 1 and -1 amplitudes for the Hadamard patterns. As -1 intensities are physically not feasible, we use 1 and 0 patterns and their complementary, following a well established technique for single pixel camera detectors <cit.>. After the SLM, the unchanged LO and masked squeezing field enter the homodyning BPD, and we record the squeezed field quadrature variance (noise level) with a spectrum analyzer. We measure noise level as a function of the LO phase for every mask (V_m(θ)), see Fig. <ref>, and extract maximum noise levels V_m^+, minimum noise levels V_m^-, and the corresponding phase shift θ_m for every mask, as shown in Eq. <ref>. A blank mask with no modifications to the input squeezed beam is used to define a reference phase with the LO. From this measurement using Eqs. <ref> and <ref>, we are able to reconstruct the mask overlap for squeezed (O_sq_m) and thermal (O_th_m) fields. Once we know this, we reconstruct the shaped squeezed and thermal fields using Eqs. <ref> and <ref>. § EXPERIMENTAL MODE RECONSTRUCTIONS PSR squeezing makes a potent subject for the mode decomposition analysis, as many previous experiments demonstrated that it is far from pure, and it is plagued by excess noise <cit.> that increases with temperature of Rb vapor. The spatial mode analysis can shine light on the nature of the excess noise. In particular, we assume that the optical field coming out of the Rb cell consists of a single-mode squeezed vacuum and some thermal noise mode. Previous measurements suggest that shapes of these two modes do not match each other. 
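The measurement-and-reconstruction chain just described can be summarized in the following sketch. It generates a 2D Hadamard mask basis, simulates per-mask homodyne scans for an assumed shaped squeezed field (a stand-in for the recorded spectrum-analyzer data), extracts V^+_m, V^-_m, and θ_m, resolves the ±1 sign ambiguity with complementary 0/1 masks and the unity-mask reference, and inverts the Hadamard transform. The 8x8 resolution, the mode shapes, and the reuse of the quoted -2.0 dB / 5.7 dB noise levels as inputs are illustrative assumptions, not the actual experimental analysis.

import numpy as np

# ----------------------------------------------------- synthetic "truth"
n = 8                                    # image is n x n pixels, N = n*n masks
N = n * n
y, x = np.mgrid[0:n, 0:n]
u_lo = np.exp(-((x - 3.5)**2 + (y - 3.5)**2) / 8.0)            # LO profile (Gaussian)
u_sq = np.exp(-((x - 3.0)**2 + (y - 4.0)**2) / 6.0) \
       * np.exp(1j * 0.4 * (x - y))                            # squeezed mode with phase tilt
u_lo = u_lo / np.linalg.norm(u_lo)
u_sq = u_sq / np.linalg.norm(u_sq)
U_true = (np.conj(u_lo) * u_sq).ravel()                        # shaped squeezed field
V_plus_in, V_minus_in = 10**(5.7 / 10), 10**(-2.0 / 10)        # toy anti-/squeezing levels

# ----------------------------------------------------- Hadamard mask basis (Sylvester)
H = np.array([[1]])
while H.shape[0] < N:
    H = np.block([[H, H], [H, -H]])                            # rows are +-1 masks

def measure(mask):
    """Simulate one homodyne scan: return (V_max, V_min, phase mod pi)."""
    O = mask @ U_true
    w = abs(O)**2
    return 1 + w * (V_plus_in - 1), 1 + w * (V_minus_in - 1), np.angle(O) % np.pi

def overlap_mod_pi(mask):
    """Complex overlap inferred from a scan, with an unresolved +-1 sign."""
    Vp, Vm, th = measure(mask)
    return np.sqrt(max(Vp - Vm, 0.0)) * np.exp(1j * th)

A, Ac = (1 + H) / 2, (1 - H) / 2                               # 0/1 masks and complements
O_ref = overlap_mod_pi(np.ones(N))                             # unity-mask reference

O_hadamard = np.empty(N, dtype=complex)
for m in range(N):
    a, b = overlap_mod_pi(A[m]), overlap_mod_pi(Ac[m])
    # resolve the +-1 ambiguity: mask + complement must add up to the reference
    s1, s2 = min([(p1, p2) for p1 in (1, -1) for p2 in (1, -1)],
                 key=lambda s: abs(s[0] * a + s[1] * b - O_ref))
    O_hadamard[m] = s1 * a - s2 * b                            # H_m = A_m - A_m^complement

# ----------------------------------------------------- Hadamard inversion
U_rec = (H.T @ O_hadamard) / N                                 # H^T = N * H^{-1}
phase = np.vdot(U_rec, U_true)
U_rec = U_rec * phase / abs(phase)                             # remove unobservable global sign (error metric only)
scale = np.linalg.norm(U_true) / np.linalg.norm(U_rec)
err = np.linalg.norm(U_rec * scale - U_true) / np.linalg.norm(U_true)
print(f"relative reconstruction error: {err:.2e}")

The same measured V^-_m values would feed the thermal-intensity reconstruction described in the previous section, which is what ultimately allows the squeezed and thermal mode contributions to be separated.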
To distinguish between them we run the mode decomposition analysis for two different Rb cell temperatures: T=65^oC, for which the maximum PSR squeezing is detected, and we suspect relatively small contribution from the thermal noise as this low temperature regime is close to the single squeezed mode <cit.>, and at T=80^oC, for which the excess noise dominates due to the significant addition of the thermal mode. For a direct comparison, see Fig. <ref>a,c where squeezing reconstruction has larger noise values and Fig <ref>a,c where thermal reconstruction has larger noise values compared to the squeezed amplitude. Fig. <ref> and Fig. <ref> present the 32x32 pixel reconstructions of the squeezed vacuum output that follows the analysis described in Sec.<ref>. Each figure has three distinct columns. The first column shows the amplitude and phase of the overlap between the squeezed mode and the LO, reconstructed using Eq. <ref>. The second column shows the thermal mode shaped intensity reconstructed with Eq. <ref>. Note the thermal state by definition has no phase dependence. This is used as an implicit assumption during reconstruction. Finally, the last column shows the classical reconstruction <cit.> using a small leakage of the classical LO field into the squeezing polarization due to the limited extinction ratio of the polarizing beam displacer). The lower temperature corresponds to a lower atomic density and weaker nonlinear effect which is in charge of squeezing and output mode structure<cit.>. The reconstruction at 65^∘C temperature (Fig. <ref>) shows a clear fundamental Gaussian beam shape in both classical (Fig. <ref>d,e) and quantum (Fig.<ref>a,b) reconstructions. This is expected, since the squeezing is generated in the mode very similar to the LO which was used as a pump for the squeezer <cit.>. At 65^∘C we observe -2.0 dB of squeezing (noise suppression relative to the shot noise level) directly out of the Rb cell. Due to some absorption in optical elements such as polarizers and less than 100% reflection off the SLM, this amount of squeezing is reduced to -0.5 dB when the squeezing propagates through the imaging optics (see Fig. <ref>). We also detect about 5.7 dB of antisqueezing at the detector after passing through the imaging optics, hinting about the thermal noise presence. While we cannot predict the shape of the thermal mode, we must assume that it occupies similar space as the squeezed vacuum as we observe its negative effect on observed squeezing noise <cit.>. This prediction is supported by the measured thermal mode profiles. To increase atomic density we raise the Rb cell temperature to 80^∘C. At this high temperature, we no longer have any squeezing (measured V_m^- exceeds the shot noise level) as the minimum noise is 2.7 dB above shot noise (due to increased contribution of the thermal mode) and the maximum noise is 11.5 dB above the shot noise after passing through the imaging optics. This noise increase is expected with higher temperatures. When compared to the low temperature reconstruction Fig. <ref>, we see a spatial mode change in both classical and quantum reconstructions (see Fig. <ref>). In the classical fields overlap reconstruction, an additional “ring” appears (Fig.<ref>e), likely due to self-defocusing of the laser field in hot atomic vapor. The quantum reconstructions (Fig.<ref>a,b) also show modification of the original Gaussian, even though they suffer from some digital “boxiness” that is highly dependent on post-processing phase choices. 
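For orientation, the quoted noise levels can be translated into rough model parameters using the convention of the theory section, V^± = e^{±r} + 2n̄_th, which gives r = arcsinh[(V^+ - V^-)/2] and n̄_th = (V^- - e^{-r})/2. The snippet below applies this inversion to the detected minimum/maximum noise at the two temperatures; it assumes a single squeezed and a single thermal mode with perfect overlap and ignores optical losses, so the resulting numbers are only indicative.

import numpy as np

def invert_noise(min_dB, max_dB):
    """Infer (r, n_th) from min/max quadrature noise relative to shot noise,
    assuming V+- = exp(+-r) + 2*n_th with fully overlapped modes (no loss)."""
    Vm, Vp = 10**(min_dB / 10), 10**(max_dB / 10)
    r = np.arcsinh((Vp - Vm) / 2.0)        # since V+ - V- = 2*sinh(r)
    n_th = (Vm - np.exp(-r)) / 2.0
    return r, n_th

for label, lo, hi in [("65 C, after imaging optics", -0.5, 5.7),
                      ("80 C, after imaging optics",  2.7, 11.5)]:
    r, n_th = invert_noise(lo, hi)
    print(f"{label}: r ~ {r:.2f}, n_th ~ {n_th:.2f}")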
However, even the imperfectly reconstructed thermal mode shape (Fig. <ref>c), which is phase-independent, is very distinct from the classical shapes, as two "lobes" appear. One can notice a similar two-lobe structure even in the low-temperature thermal mode reconstruction (Fig. <ref>c), albeit much less pronounced. The magnitude of the reconstructed fields is proportional to the input squeezing and thermal variances (recall that we did not normalize by √(V^+-V^-) and V_th in Eqs. <ref> and <ref>). Thus we can see that at higher atomic densities a noisier (higher input variance) field is generated. We note that higher-resolution images are possible, since we were mainly limited by the acquisition time for each mask and by the liquid crystal settling speed of the SLM (Meadowlark PDM512), which was the bottleneck of our setup. It takes about 45 minutes to collect a 32x32 pixel reconstruction. § CONCLUSION We demonstrated a method to reconstruct the spatial profile of an optical field consisting of several quantum noise modes with different transverse profiles. The proposed formalism is general, but we specifically considered the case of a single-mode squeezed vacuum field, alone or with some contribution of a thermal mode. We applied this analysis to the squeezed vacuum generated in Rb vapor via the PSR effect, and observed signs of thermal noise emerging at higher temperatures, as expected from previous experimental results. Potentially, when measurements extract enough information about the covariance matrix, a back transformation can be applied and the initial covariance matrix can be exactly reconstructed. We can verify the reconstruction fidelity when the process finds a diagonal covariance matrix. The developed profiler technique has potential use in many quantum communication and precision measurement applications, where exact mode matching with an unknown quantum mode is necessary for high-fidelity quantum state detection. § AUTHOR DECLARATIONS §.§ Conflict of Interest The authors have no conflicts to disclose. § DATA AVAILABILITY The experimental data that support the findings of this study are available from the corresponding author upon reasonable request. § FUNDING Air Force Office of Scientific Research (FA9550-19-1-0066).
§ REFERENCES
G. Gibson, J. Courtial, M. Padgett, M. Vasnetsov, V. Pas'ko, S. Barnett, and S. Franke-Arnold, "Free-space information transfer using light beams carrying orbital angular momentum," Opt. Express 12, 5448–5456 (2004).
A. Christ, C. Lupo, and C. Silberhorn, "Exponentially enhanced quantum communication rate by multiplexing continuous-variable teleportation," New J. Phys. 14, 083007 (2012).
V. Boyer, A. M. Marino, and P. D. Lett, "Generation of spatially broadband twin beams for quantum imaging," Phys. Rev. Lett. 100, 143601 (2008).
G. Brida, M. Genovese, A. Meda, and I. R. Berchera, "Experimental quantum imaging exploiting multimode spatial correlation of twin beams," Phys. Rev. A 83, 033811 (2011).
L. Zhang, V. Boyer, and M. O. Scully, "Quadrature squeezing of 10^4 spatial modes via manipulation of diffraction," Phys. Rev. A 105, 023725 (2022).
G. C. G. Berkhout, M. P. J. Lavery, J. Courtial, M. W. Beijersbergen, and M. J. Padgett, "Efficient sorting of orbital angular momentum states of light," Phys. Rev. Lett. 105, 153601 (2010).
Y. Zhou, J. Zhao, Z. Shi, S. M. H. Rafsanjani, M. Mirhosseini, Z. Zhu, A. E. Willner, and R. W. Boyd, "Hermite-Gaussian mode sorter," Opt. Lett. 43, 5263–5266 (2018).
D. Fu, Y. Zhou, R. Qi, S. Oliver, Y. Wang, S. M. H. Rafsanjani, J. Zhao, M. Mirhosseini, Z. Shi, P. Zhang, and R. W. Boyd, "Realization of a scalable Laguerre-Gaussian mode sorter based on a robust radial mode sorter," Opt. Express 26, 33057–33065 (2018).
N. K. Fontaine, R. Ryf, H. Chen, D. T. Neilson, K. Kim, and J. Carpenter, "Laguerre-Gaussian mode sorter," Nat. Commun. 10, 1865 (2019).
C. Gerry and P. Knight, Introductory Quantum Optics (Cambridge University Press, 2005), pp. 152–153.
R. S. Bennink and R. W. Boyd, "Improved measurement of multimode squeezed light via an eigenmode approach," Phys. Rev. A 66, 053815 (2002).
V. Boyer, A. M. Marino, R. C. Pooser, and P. D. Lett, "Entangled images from four-wave mixing," Science 321, 544–547 (2008).
C. S. Embrey, M. T. Turnbull, P. G. Petrov, and V. Boyer, "Observation of localized multi-spatial-mode quadrature squeezing," Phys. Rev. X 5, 031004 (2015).
A. Kumar, H. Nunley, and A. M. Marino, "Comparison of coherence-area measurement techniques for bright entangled twin beams," Phys. Rev. A 98, 043853 (2018).
A. Kumar and A. M. Marino, "Spatial squeezing in bright twin beams generated with four-wave mixing: Constraints on characterization with an electron-multiplying charge-coupled-device camera," Phys. Rev. A 100, 063828 (2019).
A. Marino, J. Clark, Q. Glorieux, and P. Lett, "Extracting spatial information from noise measurements of multi-spatial-mode quantum states," Eur. Phys. J. D 66, 288 (2012).
J. B. Clark, Z. Zhou, Q. Glorieux, A. M. Marino, and P. D. Lett, "Imaging using quantum noise properties of light," Opt. Express 20, 17050–17058 (2012).
P. J. Barge, Z. Niu, S. L. Cuozzo, E. E. Mikhailov, I. Novikova, H. Lee, and L. Cohen, "Weak thermal state quadrature-noise shadow imaging," Opt. Express 30, 29401–29408 (2022).
S. L. Cuozzo, P. J. Barge, N. Prajapati, N. Bhusal, H. Lee, L. Cohen, I. Novikova, and E. E. Mikhailov, "Low-light shadow imaging using quadrature-noise detection with a camera," Adv. Quantum Technol. 5, 2100147 (2022).
C. Bloch and A. Messiah, "The canonical form of an antisymmetric tensor and its application to the theory of superconductivity" (1962).
S. L. Braunstein, "Squeezing as an irreducible resource," Phys. Rev. A 71, 055801 (2005).
D. Horoshko, L. La Volpe, F. Arzani, N. Treps, C. Fabre, and M. Kolobov, "Bloch-Messiah reduction for twin beams of light," Phys. Rev. A 100, 013837 (2019).
R. M. D. Araújo, J. Roslund, Y. Cai, G. Ferrini, C. Fabre, and N. Treps, "Full characterization of a highly multimode entangled state embedded in an optical frequency comb using pulse shaping," Phys. Rev. A 89, 053828 (2014).
Y. Cai, J. Roslund, G. Ferrini, F. Arzani, X. Xu, C. Fabre, and N. Treps, "Multimode entanglement in reconfigurable graph states using optical frequency combs," Nat. Commun. 8, 15645 (2017).
P. Clemente, V. Durán, E. Tajahuerce, P. Andrés, V. Climent, and J. Lancis, "Compressive holography with a single-pixel detector," Opt. Lett. 38, 2524–2527 (2013).
P. Sidorenko and O. Cohen, "Single-shot ptychography," Optica 3, 9 (2016).
G. M. Gibson, S. D. Johnson, and M. J. Padgett, "Single-pixel imaging 12 years on: a review," Opt. Express 28, 28190 (2020).
M. Li, L. Bian, G. Zheng, A. Maiden, Y. Liu, Y. Li, J. Suo, Q. Dai, and J. Zhang, "Single-pixel ptychography," Opt. Lett. 46, 1624 (2021).
B. Sephton, I. Nape, C. Moodley, J. Francis, and A. Forbes, "Revealing the embedded phase in single-pixel quantum ghost imaging," Optica 10, 286 (2023).
S. L. Cuozzo, C. Gabaldon, P. J. Barge, Z. Niu, H. Lee, L. Cohen, I. Novikova, and E. E. Mikhailov, "Wave-front reconstruction via single-pixel homodyne imaging," Opt. Express 30, 37938 (2022).
A. B. Matsko, I. Novikova, G. R. Welch, D. Budker, D. F. Kimball, and S. M. Rochester, "Vacuum squeezing in atomic media via self-rotation," Phys. Rev. A 66, 043815 (2002).
J. Ries, B. Brezger, and A. I. Lvovsky, "Experimental vacuum squeezing in rubidium vapor via self-rotation," Phys. Rev. A 68, 025801 (2003).
E. E. Mikhailov and I. Novikova, "Low-frequency vacuum squeezing via polarization self-rotation in Rb vapor," Opt. Lett. 33, 1213–1215 (2008).
M. T. L. Hsu, G. Hetet, A. Peng, C. C. Harb, H.-A. Bachor, M. T. Johnsson, J. J. Hope, P. K. Lam, A. Dantan, J. Cviklinski, A. Bramati, and M. Pinard, "Effect of atomic noise on optical squeezing via polarization self-rotation in a thermal vapor cell," Phys. Rev. A 73, 023806 (2006).
E. E. Mikhailov, A. Lezama, T. W. Noel, and I. Novikova, "Vacuum squeezing via polarization self-rotation and excess noise in hot Rb vapors," J. Mod. Opt. 56, 1985–1992 (2009).
M. Zhang, R. N. Lanning, Z. Xiao, J. P. Dowling, I. Novikova, and E. E. Mikhailov, "Spatial multimode structure of atom-generated squeezed light," Phys. Rev. A 93, 013853 (2016).
R. N. Lanning, Z. Xiao, M. Zhang, I. Novikova, E. E. Mikhailov, and J. P. Dowling, "Gaussian-beam-propagation theory for nonlinear optics involving an analytical treatment of orbital-angular-momentum transfer," Phys. Rev. A 96, 013830 (2017).
C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, "Gaussian quantum information," Rev. Mod. Phys. 84, 621 (2012).
Note: the action of any Gaussian process can be represented by a symplectic matrix S via x̄ → S x̄ and V → S V S^T; these transformations connect the moments of the state before and after the process and, since the moments completely describe a Gaussian state, they completely describe the output state.
E. Bolduc, N. Bent, E. Santamato, E. Karimi, and R. W. Boyd, "Exact solution to simultaneous intensity and phase encryption with a single phase-only hologram," Opt. Lett. 38, 3546–3549 (2013).
M. Zhang, M. A. Guidry, R. N. Lanning, Z. Xiao, J. P. Dowling, I. Novikova, and E. E. Mikhailov, "Multipass configuration for improved squeezed vacuum generation in hot Rb vapor," Phys. Rev. A 96, 013835 (2017).
T. Horrom, G. Romanov, I. Novikova, and E. E. Mikhailov, "All-atomic generation and noise-quadrature filtering of squeezed vacuum in hot Rb vapor," J. Mod. Opt. 60, 43–49 (2013).
T. Horrom, I. Novikova, and E. E. Mikhailov, "All-atomic source of squeezed vacuum with full pulse-shape control," J. Phys. B 45, 124015 (2012).
http://arxiv.org/abs/2306.02595v1
20230605045841
Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization
[ "Yimeng Chen", "Tianyang Hu", "Fengwei Zhou", "Zhenguo Li", "Zhiming Ma" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization. Yimeng Chen (Academy of Mathematics and Systems Science, Chinese Academy of Sciences; University of Chinese Academy of Sciences), Tianyang Hu (Huawei Noah's Ark Lab), Fengwei Zhou (Huawei Noah's Ark Lab), Zhenguo Li (Huawei Noah's Ark Lab), Zhiming Ma (Academy of Mathematics and Systems Science, Chinese Academy of Sciences; University of Chinese Academy of Sciences). Correspondence to: Tianyang Hu <[email protected]>. The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models. Effectively utilizing these resources to obtain models with robust out-of-distribution generalization capabilities for downstream tasks has become a crucial area of research. Previous research has primarily focused on identifying the most powerful models within the model zoo, neglecting to fully leverage the diverse inductive biases contained within. This paper argues that the knowledge contained in weaker models is valuable and presents a method for leveraging the diversity within the model zoo to improve out-of-distribution generalization capabilities. Specifically, we investigate the behaviors of various pretrained models across different domains of downstream tasks by characterizing the variations in their encoded representations in terms of two dimensions: diversity shift and correlation shift. This characterization enables us to propose a new algorithm for integrating diverse pretrained models, not limited to the strongest models, in order to achieve enhanced out-of-distribution generalization performance. Our proposed method demonstrates state-of-the-art empirical results on a variety of datasets, thus validating the benefits of utilizing diverse knowledge. § INTRODUCTION Although remarkable success has been achieved on multiple benchmarks, machine learning models encounter failures in their real-world applications <cit.>. A central cause of such failures has been recognized as the vulnerability to distribution shifts of the test data <cit.>. This can occur when test data are collected under new conditions such as different weather <cit.>, locations <cit.>, or lighting conditions <cit.>, resulting in a distribution that differs from the training set. To address this challenge, the task of domain generalization (DG) has gained significant attention, where models are trained on multiple source domains in order to improve their generalizability to unseen domains <cit.>. Multiple DG algorithms have been proposed from various perspectives. However, this problem is still far from being resolved. For example, Ye et al. () have identified two distinct categories of data distribution shifts, namely diversity shift and correlation shift, and empirically observed that the majority of existing algorithms are only able to surpass simple empirical risk minimization (ERM) in at most one of the categories. Exploiting pretrained models (PTMs) has proven to be one of the most promising directions for addressing the challenge of DG tasks <cit.>. Research has demonstrated that pretraining can provide a significant improvement in performance for DG tasks <cit.>. The growing PTM hubs further bring in great opportunities. With the thriving of pretraining technologies, we now have a huge number of published PTMs. For example, Hugging Face Hub () contains over 80K models that vary in data sources, architectures, and pretraining frameworks. Such a zoo of PTMs thus enjoys both high transfer ability and diversity. 
By selecting optimal PTMs for given DG datasets from a zoo of PTMs, Dong et al. () boosted the state-of-the-art DG performance on some benchmarks for over 14%. While utilizing PTMs has proven to be a promising approach for domain generalization, it remains unclear how to effectively leverage the diverse inductive biases present in different PTMs. Ensemble methods of PTMs have been explored <cit.>, however, these methods typically only consider the top-performing models based on performance ranking scores. For example, Dong et al. () proposed a feature selection method on the concatenated features of the top-3 ranked PTMs. However, without incorporating diversity, such ensembles can perform worse than single models. Although some previous studies have examined certain characteristics of different PTMs <cit.>, they are not specified for DG tasks but focus on the in-distribution behavior of the models. This makes it unclear how to effectively utilize these analyses for tackling DG tasks. To address this challenge, it is crucial to first investigate the compatibility of different PTMs on specific DG tasks and to understand their inductive biases as thoroughly as possible. To achieve this, we propose to profile the shift behaviors of each PTM when conditioned on a given DG task, and then to design an ensemble algorithm that can effectively utilize the profiled shift types. Specifically, similar to the definition presented in <cit.>, we interpret the behaviors of PTMs across different domains of downstream tasks by characterizing the variation in their encoded representations from two dimensions, namely feature diversity shift and feature correlation shift. Through this design, we empirically demonstrate that the differences in shift patterns not only exist among datasets but also among different PTMs. Such profiling provides guidance for utilizing the inductive bias of poorly performed models which have typical shift patterns on one of the dimensions. As these models capture features that induce a specific kind of distribution shift, we can design ensemble algorithms that prevent the classifier from encountering similar failures, thus improving the out-of-distribution (OOD) generalization ability. To accomplish this, we introduce two key components in our ensemble algorithm: the sample reweight module and the independence penalization module. The sample reweight module utilizes the output of a correlation shift-dominated model to balance the weights of sub-populations, while the independence penalization module requires the main classifier's output to be independent of features that encounter significant diversity shifts among domains. These ensemble procedures are applied during the training process, introducing no additional computational cost for inference. We empirically verify the value of such model zoology on image classification benchmarks, with a model zoo that consists of 35 PTMs varying in architecture, pretraining algorithm, and datasets. The results of our empirical analysis demonstrate the effectiveness of our approach in leveraging poor models to enhance performance, as our new algorithm outperforms top model ensembles. We show that the selected models are different across different datasets, which indicates that our method is adaptive to the specific DG tasks. Our contributions can be summarized as follows. 
* We propose a novel methodology for profiling the behavior of pretrained models (PTMs) on a given domain generalization (DG) task by quantifying the distribution shift of the features from two dimensions, namely feature diversity shift and feature correlation shift. * We introduce a new ensemble algorithm that leverages the insights from the profiled shift types to effectively utilize the diverse inductive bias among different PTMs for DG tasks. * Through extensive experiments on image classification DG benchmarks, we demonstrate the effectiveness of our proposed approach, which outperforms top-performing PTM ensembles. This work provides a new perspective on how to effectively leverage the diverse inductive bias of PTMs for domain generalization tasks and highlights the importance of understanding the shift behaviors of models for such tasks. § RELATED WORKS Domain generalization. Numerous domain generalization algorithms have been proposed to alleviate the accuracy degradation caused by distribution shifts via exploiting training domain information  <cit.>. However, <cit.> empirically show that recent domain generalization algorithms show no improvement compared with ERM. More fine-grained analyses are further conducted <cit.>, where distribution shifts are decomposed into multiple categories. Ye et al. () empirically observed that the majority of the algorithms are only able to surpass the simple ERM in at most one kind of distribution shift. Wiles et al. () show that progress has been made over a standard ERM baseline. Though best methods are not consistent over different data shifts, pretraining and augmentations usually offer large gains. PTMs for domain generalization. Methods leveraging pretraining models have shown promising improvements in domain generalization performance <cit.>. Among them, ensemble methods combined with PTMs show further advantages. Weight averaging methods combine weights of PTMs of the same architecture over different runs <cit.> or tasks <cit.>. Arpit et al. () ensemble the predictions of moving average models. Recent methods <cit.> further consider the ensemble of models with different architectures to exploit the growing large PTM hubs. Specifically, Li et al. () ensemble predictions of multiple different PTMs via instance-specific attention weights. ZooD <cit.> releases the inference cost by only concatenating the representations of top models selected from a diverse model zoo and further conducts Bayesian feature selection. However, as shown in <cit.>, such an ensemble does not always outperform the single model. The diversity in the model zoo has not been fully understood and exploited, which is the focus of this paper. Understanding PTMs. The paradigm of PTM reusing triggers the need for understanding the behavior of a PTM on a given downstream task. Recently, studies on the difference in PTM features have been proposed <cit.>, which focus on the in-distribution behavior of models. Gontijo-Lopes et al. () suggest that models under different pretraining techniques learn diverse features. They propose that the correct predictions of high-accuracy models do not dominate those of low-accuracy models, and model ensembles with diverse training methodologies yield the best downstream performance. Idrissi et al. () introduced ImageNet-X, which is a set of human annotations pinpointing failure types for the ImageNet <cit.> dataset. ImageNet-X labels distinguishing object factors (e.g. pose, color) for each image in the validation set and a random subset. 
They found that most models when trained, fine-tuned, or evaluated on ImageNet, have the same biases. However, this paper shows different observations on the DG datasets, which will be further discussed in Section <ref>. § MODEL EXPLORATION To effectively leverage diversity within a model zoo, we need to understand the difference between PTMs conditioned on each specific DG task. To accomplish this, we propose analyzing and describing the changes in PTM feature distributions across downstream domains. §.§ Feature Diversity and Correlation Shifts Consider a dataset 𝒟 that contains samples collected under multiple domains ℰ, i.e., 𝒟 = {D_e}_e∈ℰ. D_e={x_i^e, y_i^e}_i=1^n^e contains instances of random variables (X, Y) that are i.i.d. sampled from the probability distribution ℙ^e(𝒳×𝒴). Consider a PTM that can be viewed as a feature encoder ϕ: 𝒳→𝒵_ϕ. To understand the behavior of such an encoder between different domains, we are in fact concerned with the difference between the distributions of (ϕ(X), Y) on different ℙ^e, ∀ e ∈ℰ. As ℙ^e(ϕ(X), Y) = ℙ^e(Y|ϕ(X))ℙ^e(ϕ(X)), the variation of ℙ^e(ϕ(X), Y) can be decomposed into the shift of ℙ^e(ϕ(X)) and the shift of ℙ^e(Y|ϕ(X)), namely the feature diversity shift and the feature correlation shift. In this paper, we use the following two metrics for measuring the diversity shift and correlation shift of ϕ: 𝐱↦𝐳 between a pair of domains e, e', respectively: F_div(ϕ, e, e') = 1/2∫_𝒮 |p_e(𝐳) - p_e'(𝐳)| d𝐳, F_cor(ϕ, e, e') = 1/2∫_𝒯p̃_e, e'(𝐳) ∑_y ∈𝒴 |p_e(y|𝐳) - p_e'(y|𝐳)| d𝐳, where p̃_e,e' is an geometric average of p_e and p_e'. 𝒮 and 𝒯 are partitions of the image set Z_ϕ of ϕ defined as follows: 𝒮(ϕ, e, e') := {𝐳∈𝒵_ϕ | p_e(𝐳) · p_e'(𝐳) = 0 }, 𝒯(ϕ, e, e') := {𝐳∈𝒵_ϕ | p_e(𝐳) · p_e'(𝐳) ≠ 0 }. Intuitively, F_div describes the proportion of values of features ϕ(𝐱) not shared between two domains. F_cor measures how the correlation between the features and the target label changes between domains. Such definitions are similar to that of diversity shift and correlation shift of datasets in OOD-Bench <cit.>. Note that the two metrics in this paper are defined for general feature encoders, not a specific encoder Z_2 which encodes the latent spurious variable assumed in the data generating process as in <cit.>. By specific to that encoder, <cit.> view the two metrics as a characteristic of the dataset itself. In contrast, we focus on the difference between general encoders on a given dataset. That generality requires a new design for the estimation methods of the two metrics than that in <cit.>. We further introduce the practical estimation method we proposed in Section <ref>. Relation with OOD performance. For diversity shift, the model's decision on data from the set 𝒮 depends on the classification layer's extrapolation behavior, which is hard to infer with in-distribution data. For correlation shift, it directly causes the change of prediction precision and results in the gap between in-distribution and out-of-distribution performance. As a result, we would prefer a representation with both low diversity and correlation shifts so that the in-distribution training controls the out-of-distribution error. Note that by splitting the data into 𝒮 and 𝒯, we leave out the part that is affected by the classification layer's extrapolation behavior in the correlation shift estimation and the in-domain density shift in the diversity shift estimation. This is the main difference from the scores designed in ZooD. 
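To make these definitions concrete, the following is a minimal NumPy sketch of how the diversity-shift metric can be estimated empirically, following the route detailed in the next subsection and in the appendix: approximate each domain's feature distribution by a Gaussian and count the samples whose Mahalanobis distance to the other domain exceeds a threshold set at the 1% tail of a held-out split. All function and variable names below are ours and are not taken from any released code.

```python
import numpy as np

def fit_gaussian(feats):
    """Fit the mean and precision of a domain's feature matrix (n x d)."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])  # small ridge for stability
    return mu, np.linalg.inv(cov)

def mahalanobis_sq(feats, mu, prec):
    """Squared Mahalanobis distance of each row of `feats` to N(mu, cov)."""
    diff = feats - mu
    return np.einsum('nd,de,ne->n', diff, prec, diff)

def diversity_shift(feats_e, feats_ep, val_e, val_ep, q=0.99):
    """Empirical F_div between domains e and e'.

    `feats_*` are training features per domain; `val_*` are held-out splits used
    only to set the distance thresholds (the 1% in-domain tail)."""
    mu_e, prec_e = fit_gaussian(feats_e)
    mu_ep, prec_ep = fit_gaussian(feats_ep)
    eps_ep = np.quantile(mahalanobis_sq(val_ep, mu_ep, prec_ep), q)  # threshold for domain e'
    eps_e = np.quantile(mahalanobis_sq(val_e, mu_e, prec_e), q)      # threshold for domain e
    # Fraction of e-samples that look "impossible" under e', and vice versa.
    frac_e = np.mean(mahalanobis_sq(feats_e, mu_ep, prec_ep) > eps_ep)
    frac_ep = np.mean(mahalanobis_sq(feats_ep, mu_e, prec_e) > eps_e)
    return 0.5 * (frac_e + frac_ep)
```

The correlation-shift metric is estimated on the complementary (overlapping) samples and additionally requires the conditional probabilities p_e(y|ϕ(𝐱)); its estimation is described in the next subsection.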
§.§ Practical Estimation In this section, we show how the two metrics can be computed practically for general latent features of an arbitrary PTM. Diversity shift. Denote 𝒮_e(e', ϕ) := {𝐳∈𝒵_ϕ| p_e(𝐳) > 0, p_e'(𝐳) = 0 }, 𝒮_e'(e, ϕ) := {𝐳∈𝒵_ϕ| p_e(𝐳) = 0, p_e'(𝐳) > 0 }, F_div(ϕ, e, e') can be written as F_div(ϕ, e, e') = 1/2 ( ℙ^e[𝒮_e(e', ϕ)] + ℙ^e'[𝒮_e'(e, ϕ)]). We design the following empirical estimation of ℙ^e[𝒮_e(e', ϕ)]: ℙ̂^e[𝒮̂_̂ê(e', ϕ)] := ℙ̂^e ({𝐱∈ D_e | p̂_e'(𝐳) < ϵ_e', 𝐳 = ϕ(𝐱) }). Intuitively, we estimate the no-overlap set 𝒮_e(e', ϕ) using the estimated probability of the instance in the estimated distribution p̂_e'. When the probability is lower than a given small threshold ϵ_e', it is considered as in the set 𝒮_e(e', ϕ). The threshold ϵ_e' is estimated by ℙ̂^e' ({𝐱∈ V_e' | p̂_e'(𝐳) < ϵ_e', 𝐳 = ϕ(𝐱) }) = 0.01. We approximate p_e with a Gaussian distribution 𝒩(μ_e, Σ_e), and estimate the parameters with empirical statistics on D_e. In the same way we can get the estimation of ℙ^e'[𝒮_e'(e, ϕ)]. The empirical diversity metric is then the average of the two estimations. Correlation shift. For each pair of domain e, e'. We have the empirical set 𝒯̂(ϕ, e, e') := (D_e ∖Ŝ_e(e', ϕ)) ∪ (D_e'∖Ŝ_e'(e, ϕ)). Denote p_e, e' = 1/2(p_e + p_e') and D̂_cor = 1/2∑_𝐱∈𝒯̂p̂_e, e'(𝐱) ∑_y ∈𝒴 |p̂_e(y|ϕ(𝐱)) - p̂_e'(y|ϕ(𝐱))|. As D_e, D_e' are independently sampled, p̂_e, e'(𝐱) can be estimated by the empirical distribution, i.e., p̂_e, e'(𝐱) = 1 / |D_e ∪ D_e'|. To estimate p̂_e(y|ϕ(𝐱)), we first get a primary estimation p̃_e(y|ϕ(𝐱)) with the following equation, where the coefficient matrices (𝐌_0, 𝐌_1, …,𝐌_|𝒴|) are estimated by minimizing the empirical evidence as in LogME <cit.>, i.e., p̃_e(y|ϕ(𝐱)) := m(𝐌_0ϕ(𝐱), 𝐌_1ϕ(𝐱),…,𝐌_|𝒴|ϕ(𝐱)), where m denotes the normalization operator. We then calibrate p̃_e(y|ϕ(𝐱)) with the empirical accuracy estimated on 𝒯̂(ϕ, e, e') to get the final estimation p̂_e(y|ϕ(𝐱)). More details are provided in Appendix <ref>. §.§ Observations In this section, we present the results of our empirical analysis on the distribution shifts of PTMs for different DG datasets. We quantify these shifts using the metrics previously described and discuss the various patterns observed. We conduct experiments on five domain generalization benchmarks: PACS <cit.>, VLCS <cit.>, Office-Home <cit.>, TerraIncognita <cit.>, DomainNet <cit.>. According to <cit.>, PACS, OfficeHome, and TerraIncognita all only encounter diversity shifts, while DomainNet shows both diversity and correlation shifts. We adopt the model zoo constructed in <cit.>, which consists of 35 PTMs with diverse architectures, pre-training methods, and pre-training datasets. The two shift scores for each model are the average of the two metrics in Section <ref> computed on each pair of domains in the dataset. More details are provided in Appendix <ref>. The primary findings in this section are as follows. * Within a specific DG dataset, the shift patterns of PTMs exhibit substantial diversity. * The architectural diversity contributes to distinct shift patterns, and their interrelationships tend to maintain consistency across datasets. * The influence of pretraining frameworks on shift behavior is noteworthy. Particularly, self-supervised learning leads to relatively higher feature diversity shifts. * An increase in the size of the pretraining data results in a decrease in the feature correlation shift. We introduce those findings in detail in the following paragraphs. Different shift patterns of PTMs on the datasets. 
As shown in <cit.>, different datasets exhibit different trends of shifts. A natural question is how the distribution shift of data interacts with the shift in the feature space of a PTM. The observations in this section show that the shift patterns of PTMs can have a great variety within a given DG dataset. Specifically, we compute the average shift metric scores between domain pairs on each dataset. The results are shown in Figure <ref>. On Terra Incognita, the diversity shift of models varies from 0.21 to 0.89. Notably, some PTMs encounter significant correlation shifts on Terra Incognita, which is different from the dataset correlation shift shown in <cit.>. We further compare the results within the following 3 groups of models to show the effect of architectures, training frameworks, and datasets on shift behavior. The details of the 3 groups are introduced in Appendix <ref>. Architectures. We compare models with different architectures but pre-trained with the same framework on the same dataset. As shown in Figure <ref>, when comparing PTMs pretrained under the ERM framework on ImageNet-1K <cit.>, we found that the variation of architectures resulted in a wide range of shift patterns. It can be observed that across different datasets, ResNet-152 generally exhibits a larger diversity shift compared to ResNet-50, and a smaller correlation shift. Additionally, after fine-tuning, ResNet-152 achieves higher OOD accuracy than ResNet-50. These findings suggest an interesting observation that while ResNet-152 captures domain-specific features, they do not result in a geometric skew <cit.>. Pretraining frameworks. To show the effect of pretraining frameworks, we compare models with a fixed architecture but trained with different optimization objectives on the same dataset. Figure <ref> shows the results comparing ResNet-50s pretrained on ImageNet under different frameworks, i.e., ERM, self-supervised learning (SSL), and adversarial training (AT) <cit.>. We can find models pretrained using SSL methods exhibit overall higher diversity shifts. This is not unexpected, as SSL methods tend to learn features that maximally preserve the original information of the raw input, including the domain-specific part. For example, generative-based SSL such as the Masked autoencoder <cit.> learns to reconstruct images with only a small fraction of the pixels. Additionally, contrastive learning methods have been observed to suffer from the negative transfer phenomenon <cit.>, where the learned features perform poorly on downstream tasks. Furthermore, the use of cosine similarity in contrastive learning has been noted to result in overly complex feature maps <cit.>, which can negatively impact out-of-distribution generalization. Among SSL methods, PIRL <cit.> and InsDis <cit.> usually have the most significant diversity shifts and worse OOD performance on these datasets <cit.>. Datasets. To demonstrate the impact of dataset size on the distribution shifts of PTMs, we compare the performance of Swin transformers <cit.> pretrained on ImageNet-1K and both ImageNet-1K and ImageNet-22K <cit.>, as shown in Figure <ref>. It indicates that the use of larger pretraining data results in a significant decrease in correlation shift, which may be attributed to the increased complexity of the supervised pretraining tasks. § MODEL ZOO EXPLOITATION In this section, we demonstrate how the characteristic of diversity in models can be employed to enhance the domain generalization performance of strong models. 
In the previous section, we established that models exhibit two distinct types of shift patterns. Our observations indicate that some PTMs are dominated by one type of shift, for example, PIRL on TerraIncognita. This insight inspires the design of an ensemble algorithm that addresses the two dimensions of feature shifts. By leveraging two auxiliary models that are dominated by the two shifts respectively, we design corresponding algorithms to resolve the specific shifts. §.§ Diversity Ensemble Method To prevent potential failures caused by the diversity shift, we utilize the auxiliary model which encodes features that encounter significant diversity shifts. We propose to require the prediction of the main model to be independent of those features, thus mitigating the effect of diversity shift on the predictor. To enforce this independence constraint, we adopt a differentiable independence measure, the Hilbert-Schmidt independence criterion (HSIC) <cit.>. The idea of using HSIC is inspired by the algorithm proposed in <cit.>, where HSIC is used for penalizing the dependency between the predictions of the main model and multiple biased models. Formally, denote Z_l = l_m ∘ f_M(X), where l_m : 𝒵_M →𝒵_l is the classifier on top of the main model f_M: 𝒳→𝒵_M. Denote Z_d = f_d(X), where f_d : 𝒳→𝒵_d is the diversity auxiliary model. Our target is then to constrain the dependency between Z_l and Z_d. Denote k as a kernel function on 𝒵_d ×𝒵_d, and l as a kernel function on 𝒵_l ×𝒵_l. The HSIC statistic between the main model f_M and the auxiliary model f_d is defined as follows: HSIC(f_M, f_d) := 𝔼[k(Z_d, Z_d^') l(Z_l, Z_l^')] + 𝔼[k(Z_d, Z_d^')] 𝔼[l(Z_l, Z_l^')] - 2 𝔼[𝔼_Z_d^'[k(Z_d, Z_d^')] 𝔼_Z_l^'[l(Z_l, Z_l^')]]. Instead of the unbiased estimator in <cit.>, we use the biased empirical estimate HSIC_b <cit.>: HSIC_b(f_M, f_d) := 1/m^2 trace(𝐊 𝐇 𝐋 𝐇), where the sample size is m, 𝐊 denotes the m × m matrix with entries k_ij := k(f_d(x_i), f_d(x_j)), 𝐋 denotes the m × m matrix with entries l_ij := l(l_m ∘ f_M(x_i), l_m ∘ f_M(x_j)), and 𝐇 = 𝐈 - 1/m11^T, where 1 is an m × 1 vector of ones. The final training objective of the main model is ℒ(f_M) := min_f_M𝔼_X, Y ∼ℙ_𝒟 [ℒ_c (Y, f_M(X)) + λHSIC_d(f_M, f_d)]. In our implementation, we use the Gaussian kernels l(z, z') = exp(-γ_1 ‖ z - z' ‖^2) and k(z, z') = exp(-γ_2 ‖ z - z' ‖^2). To mitigate the effect of the dimension, we rescale γ_1 and γ_2 by dividing by the dimension of the representation z in the calculation. Following methods in the invariant learning literature <cit.>, we introduce an additional hyperparameter N_warm-up which controls the number of warm-up steps before the HSIC penalty is added to the loss. §.§ Correlation Ensemble Method To prevent potential failures caused by the correlation shift, we adopt the auxiliary model which encodes features that encounter significant correlation shifts. In this module, we reweight training instances to weaken the correlation between the features and the target labels. By doing so, we prevent the predictor from skewing toward this unstable correlation across domains. Specifically, denote the auxiliary model as f_c and its uncertainty output for instance 𝐱 as 𝐩_c(𝐱). We follow the classical strategy, proven effective in the debiasing literature <cit.>, of reweighting the instance loss with w_c(𝐱, y) = p(y)/p_c(𝐱)_y, where p_c(𝐱)_y is the y-th component of 𝐩_c(𝐱). During training, the weights in each batch are smoothed with a hyperparameter T and normalized <cit.>. The loss on a batch ℬ is then ℒ_ℬ(f_M) := 1/|ℬ|∑_(𝐱, y) ∈ℬ m(p(y)/p_c(𝐱)_y · T) ℒ_c (y, f_M(𝐱)), where m denotes the normalization operation over samples in ℬ. We introduce an additional hyperparameter N_anneal which controls the number of annealing steps during which T is infinitely large, i.e., before the adjusted weights are attached to the samples.
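As a reading aid, here is a PyTorch-style sketch of the two modules above: the biased HSIC penalty with Gaussian kernels (including the 1/dim rescaling of the bandwidths) and the reweighted, temperature-smoothed batch loss. It is a simplified illustration under our own naming: the warm-up and annealing schedules (N_warm-up, N_anneal) are omitted, the normalization m is read here as a softmax over the batch so that the weights become uniform as T grows (our reading, not stated explicitly in the text), and the default λ, γ_1, γ_2 are the values reported later in the paper. None of this should be taken as the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(z, gamma):
    # K_ij = exp(-gamma * ||z_i - z_j||^2 / dim); the 1/dim factor is the
    # bandwidth rescaling mentioned in the text.
    sq_dists = torch.cdist(z, z) ** 2
    return torch.exp(-gamma * sq_dists / z.shape[1])

def hsic_biased(z_pred, z_aux, gamma_l, gamma_k):
    # Biased estimate (1/m^2) tr(K H L H) of HSIC between the main model's
    # outputs and the diversity-auxiliary features.
    m = z_pred.shape[0]
    L = gaussian_kernel(z_pred, gamma_l)
    K = gaussian_kernel(z_aux, gamma_k)
    H = torch.eye(m, device=z_pred.device) - torch.ones(m, m, device=z_pred.device) / m
    return torch.trace(K @ H @ L @ H) / m ** 2

def ensemble_loss(logits, labels, z_div, probs_cor, class_prior,
                  lam=100.0, gamma_l=0.5, gamma_k=0.25, T=1.0):
    # logits: main-model outputs; z_div: frozen features of the diversity auxiliary;
    # probs_cor: softmax outputs of the correlation auxiliary; class_prior: p(y).
    # Correlation module: per-sample weights p(y) / p_c(x)_y, smoothed by T and
    # normalized over the batch.
    p_y = class_prior[labels]
    p_c = probs_cor[torch.arange(labels.shape[0]), labels]
    w = torch.softmax(p_y / (p_c * T), dim=0)
    ce = F.cross_entropy(logits, labels, reduction='none')
    loss = (w * ce).sum()
    # Diversity module: HSIC penalty tying the predictions to the auxiliary features.
    return loss + lam * hsic_biased(logits, z_div, gamma_l, gamma_k)
```

In training, as described above, the HSIC penalty would only be switched on after N_warm-up steps and the adjusted weights only after N_anneal steps.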
§ EXPERIMENTS We conduct experiments on domain generalization benchmarks to evaluate the effectiveness of our proposed zoo-exploiting method. Our results demonstrate that it consistently outperforms single top models and improves the performance of top model ensembles, highlighting the benefits of exploiting model diversity. Additionally, we analyze the correlation between OOD accuracy and the feature diversity and correlation shifts of the fine-tuned classifiers. §.§ Experiment Settings Datasets. We conduct experiments on five domain generalization benchmarks: PACS <cit.>, VLCS <cit.>, OfficeHome <cit.>, TerraIncognita <cit.>, and DomainNet <cit.>. During training on each dataset, one of the domains is chosen as the target domain and the remaining are the training domains, where 20% of the samples are used for validation and model selection. The final test accuracy on the dataset is the mean of the test results on each target domain. Baselines. We compare the proposed algorithm with previous SOTA OOD methods and three versions of ZooD, including 1) Single: fine-tune the top-1 model ranked by ZooD; 2) Ensemble: fine-tune an ensemble of the top-K models; 3) F. Selection: fine-tune an ensemble of the top-K models with feature selection, which is the expected result using ZooD. Our algorithm also has three versions: 1) Single+Rew: fine-tune the top-1 model ranked by ZooD with the reweight auxiliary; 2) Single+HSIC: fine-tune the top-1 model with the HSIC auxiliary; 3) Single+Both: fine-tune the top-1 model with both kinds of auxiliary. Configurations. We follow the setting of ZooD to construct a model zoo consisting of 35 PTMs. As discussed in Section <ref>, these models vary in architectures, pretraining methods, and datasets. For the auxiliary models, we select models that are extreme on one shift metric. For the main model, we use the Top-1 model ranked by ZooD. The detailed statistics of the selected auxiliary models and the main models are shown in Table <ref>. We use a 3-layer MLP as the prediction head on top of the main model and fine-tune it on the downstream tasks. Following ZooD, we adopt the leave-one-domain-out cross-validation setup in DomainBed for hyper-parameter selection and run 3 trials. More details on the experimental setup are in Appendix <ref>. §.§ Experiment Results Table <ref> presents the main results of our proposed methods on the five datasets of the DomainBed benchmark. The results indicate that the incorporation of the independence penalization module and the combination of the reweight and independence penalization modules consistently improve the performance of the single top model. On average, the combination of both methods (Single+Both) results in an approximate 6% improvement in accuracy. Notably, on the PACS, VLCS, and TerraIncognita datasets, our proposed methods even outperform the F. Selection method. This observation highlights the potential of utilizing model inductive bias to leverage weak models in boosting performance, rather than relying solely on strong models. On the OfficeHome and DomainNet datasets, the proposed methods do not show significant improvements over top-3 ensembles (the Ensemble version of ZooD). 
To further investigate this, we also conducted experiments using our methods on top-3 ensembles. The results, presented in Table <ref>, reveal that compared to the F. Selection method, the incorporation of the independence penalization module can significantly enhance the overall accuracy. It is worth noting that, unlike the independence penalization module, the improvements brought by the reweight module are only significant on the VLCS and TerraIncognita datasets. For the PACS dataset, this may be attributed to the fact that the F_cor of the main model is already non-significant, as reported in Table <ref>. For the OfficeHome and DomainNet datasets, this may be due to the limited effectiveness of the reweighting strategy when the number of classes is large (65 and 345). Previous literature has only validated its success on tasks with a number of classes lower than 10 <cit.>. To further interpret the results, we analyze the shift pattern of the main predictor. Table <ref> shows the scores comparison of the last layer features (logits) of the main predictor. The results are obtained using the following hyperparameter set: λ=100, N_warm-up=500, γ_1=0.5, γ_2=0.25, T=1, N_anneal=2000. As expected, compared to the results obtained using ERM, HSIC, and Rew. lead to a decrease in F_div and F_cor, respectively. The results obtained using both modules show a compromise between the two modules. It is worth noting that the use of HSIC on the VLCS dataset leads to a significant decrease in F_cor, which can explain the result in Table <ref> where incorporating the reweight module in Two does not further improve the results of HSIC. § CONCLUSION In this work, we have presented a novel approach for utilizing the diverse knowledge present in a model zoo for domain generalization tasks. The main takeaway findings of this study are two-fold. Firstly, it emphasizes that even the most powerful models have the potential for further enhancements in downstream DG tasks. Secondly, it illustrates that the enhancements do not solely come from powerful models, but rather from a combination of models with diverse characteristics, a weak model can also contribute to the enhancement of an already strong model. This highlights the importance of maintaining a diverse zoo of pretrained models for the community. It is worth emphasizing that our proposed profiling method is general and can be applied to other tasks and domains, making it an interesting avenue for further research. Overall, this work provides a new perspective on how to better utilize the diverse knowledge in a model zoo and opens up new possibilities for improving performance on out-of-distribution tasks. icml2023 § SHIFT METRICS §.§ Practical Estimation In this section, we show how the two metrics can be computed practically for general latent features of an arbitrary PTM. Notations. Consider a dataset 𝒟 that contains samples collected under multiple domains ℰ, i.e., 𝒟 = {D_e}_e∈ℰ. D_e={x_i^e, y_i^e}_i=1^n^e contains instances of random variables (X, Y) that are i.i.d. sampled from the probability distribution ℙ^e(𝒳×𝒴). A PTM is denoted as a feature encoder ϕ: 𝒳→𝒵_ϕ. Suppose the dimension of ϕ(x) is d. The feature matrix on the domain D_e is denoted as Φ_e := ( ϕ(x^e_1), ϕ(x^e_i2), …, ϕ(x^e_n^e) )^⊤∈ℝ^n^e × d. Diversity shift. Denote 𝒮_e(e', ϕ) := {𝐳∈𝒵_ϕ| p_e(𝐳) > 0, p_e'(𝐳) = 0 }, 𝒮_e'(e, ϕ) := {𝐳∈𝒵_ϕ| p_e(𝐳) = 0, p_e'(𝐳) > 0 }, F_div(ϕ, e, e') can be written as F_div(ϕ, e, e') = 1/2 ( ℙ^e[𝒮_e(e', ϕ)] + ℙ^e'[𝒮_e'(e, ϕ)]). 
We design the following empirical estimation of ℙ^e[𝒮_e(e', ϕ)]: ℙ̂^e[𝒮̂_̂ê(e', ϕ)] := ℙ̂^e ({𝐱∈ D_e | p̂_e'(𝐳) < ϵ_e', 𝐳 = ϕ(𝐱) }). Intuitively, we estimate the no-overlap set 𝒮_e(e', ϕ) using the estimated probability of the instance in the estimated distribution p̂_e'. When the probability is lower than a given small threshold ϵ_e', it is considered as in the set 𝒮_e(e', ϕ). The threshold ϵ_e' is estimated by ℙ̂^e' ({𝐱∈ V_e' | p̂_e'(𝐳) < ϵ_e', 𝐳 = ϕ(𝐱) }) = 0.01. For each e ∈ℰ, we approximate p_e with a Gaussian distribution 𝒩(μ_e, Σ_e), and estimate the parameters with empirical statistics on D_e. Specifically, μ̂_e = 1/n^eΦ_e^⊤1_n^e Σ̂_e = 1/n^e (Φ_e - 1_n^eμ̂_ϕ^⊤ )^⊤ (Φ_e - 1_n^eμ̂_ϕ^⊤ ), Given the estimated distribution 𝒩(μ̂_e, Σ̂_e), the probability density at a given point 𝐳∈𝒵_ϕ is computed as p̂_e(𝐳) = p̂_e(𝐳|μ̂_e, Σ̂_e) = √(1/(2π)^d |Σ̂_e| )exp(-1/2(𝐳-μ̂_e)^⊤Σ̂_e^-1 (𝐳-μ̂_e) ). Denote C_e := (2π)^-d/2 |Σ̂_e|^1/2, d̂_e(𝐳) := (𝐳-μ̂_e)^⊤Σ̂_e^-1 (𝐳-μ̂_e), we have p̂_e (𝐳) = C_e exp(-1/2d̂_e(𝐳)). As C_e is constant for any 𝐳, and the exponential function is monotonic, we can empirically estimate ℙ^e[𝒮_e(e', ϕ)] using d̂_e' instead as follows: ℙ̂^e[𝒮̂_̂ê(e', ϕ)] := ℙ̂^e ({𝐱∈ D_e | d̂_e'(𝐳) > ϵ_e', 𝐳 = ϕ(𝐱) }), where ϵ_e' satisfies ℙ̂^e' ({𝐱∈ V_e' | d̂_e'(𝐳) > ϵ_e', 𝐳 = ϕ(𝐱) }) = 0.01. Note that it connects to the common practice in OOD detection methods where the Mahalanobis distance is estimated <cit.>. The estimation of ℙ^e'[𝒮_e'(e, ϕ)] is defined in the same way. The empirical diversity metric is then the average of the two estimations, i.e., F̂_div(ϕ, e, e') = 1/2( ℙ̂^e[𝒮̂_e(e', ϕ)] + ℙ̂^e'[𝒮̂_e'(e, ϕ)] ). Correlation shift. For each pair of domain e, e'. We have the empirical set 𝒯̂(ϕ, e, e') := (D_e ∖Ŝ_e(e', ϕ)) ∪ (D_e'∖Ŝ_e'(e, ϕ)). Denote p_e, e' = 1/2(p_e + p_e') and D̂_cor = 1/2∑_𝐱∈𝒯̂p̂_e, e'(𝐱) ∑_y ∈𝒴 |p̂_e(y|ϕ(𝐱)) - p̂_e'(y|ϕ(𝐱))|. As D_e, D_e' are independently sampled, p̂_e, e'(𝐱) can be estimated by the empirical distribution, i.e., p̂_e, e'(𝐱) = 1 / |D_e ∪ D_e'|. To estimate p̂_e(y|ϕ(𝐱)), we first get a primary estimation p̃_e(y|ϕ(𝐱)) with the following equation: p̃_e(𝐲|ϕ(𝐱)) := m(𝐌_0ϕ(𝐱), 𝐌_1ϕ(𝐱),…,𝐌_|𝒴|ϕ(𝐱)), where m denotes the normalization operator. The coefficient matrices (𝐌_0, 𝐌_1, …,𝐌_|𝒴|) are estimated by minimizing the empirical evidence on D_e as in LogME <cit.>. Specifically, denote K:=|𝒴|, 𝐲∈ℝ^K is the one-hot label vector . Denote y_i as the i-th component. We adopt the following linear model assumption: y_i = 𝐰_i^⊤ϕ(x) + ϵ, 𝐰_i ∈ℝ^d, ϵ∈ℝ, where ϵ is the Gaussian noise variable with variance β^-1. As we assume that the prior distribution of weights 𝐰_i is an isotropic Gaussian distribution with zero mean and parameterized by α, i.e. 𝐰_i ∼𝒩(0, α^-1𝕀_d), and the conditional distribution of y_i given ϕ(x) is y_i |ϕ(x), 𝐰_i ∼𝒩(𝐰_i^⊤ϕ(x), β^-1), then according to the definition of evidence, p(y_i | ϕ(x), α, β) = ∫_𝐰_i ∈ℝ^d p(𝐰_i|α) p(y_i | ϕ(x), 𝐰_i, β) d𝐰_i. Denote Φ_e ∈ℝ^n_e × d as the feature matrix of all training samples in the environment e, and 𝐲_i^e ∈ℝ^n_e as the label vector composed by y_i. Denote A = α I + βΦ_e^⊤Φ_e, m = β A^-1Φ_e^⊤𝐲_i^e, we have the following log-likelihood: ℒ(α, β) =log p(𝐲_i^e | Φ_e, α, β) =n_e/2logβ+d/2logα-n_e/2log 2 π -β/2Φ_e m-𝐲_i^e_2^2-α/2 m^⊤ m-1/2log |A|. Solve (α^*, β^*) = max_α, βℒ(α, β) by using the same iterative approach as in  <cit.>, we can get an estimate of 𝐰_i: 𝐰̂_i = β^* (α^* I + β^* Φ_e^⊤Φ_e)^-1Φ_e^⊤𝐲_i^e. Substituting the above estimate into the formula <ref>, we get the estimate p̃_e(y|ϕ(𝐱)). 
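The fixed-point maximization of the evidence ℒ(α, β) above is standard; for concreteness, here is a NumPy sketch of one way to carry it out (MacKay-style updates on the eigenvalues of Φ_e^⊤Φ_e, in the spirit of the cited LogME scheme), returning the weight estimate ŵ_i of the formula above. The convergence criterion and all names are ours.

```python
import numpy as np

def evidence_weights(Phi, y, tol=1e-4, max_iter=100):
    # Phi: (n, d) feature matrix of one domain; y: (n,) array with the i-th
    # component of the one-hot labels (0/1 values).
    # Returns the posterior mean m = beta * (alpha I + beta Phi^T Phi)^{-1} Phi^T y
    # at the (alpha, beta) maximizing the evidence, via fixed-point updates.
    n, d = Phi.shape
    evals, evecs = np.linalg.eigh(Phi.T @ Phi)
    evals = np.maximum(evals, 0.0)          # guard against tiny negative eigenvalues
    proj = evecs.T @ (Phi.T @ y)            # Phi^T y expressed in the eigenbasis
    alpha, beta = 1.0, 1.0
    for _ in range(max_iter):
        m = evecs @ (proj * beta / (alpha + beta * evals))
        gamma = np.sum(beta * evals / (alpha + beta * evals))   # effective dimension
        alpha_new = gamma / (m @ m + 1e-12)
        beta_new = (n - gamma) / (np.sum((Phi @ m - y) ** 2) + 1e-12)
        converged = (abs(alpha_new - alpha) < tol * alpha
                     and abs(beta_new - beta) < tol * beta)
        alpha, beta = alpha_new, beta_new
        if converged:
            break
    return evecs @ (proj * beta / (alpha + beta * evals))
```

Running this once per label component gives the coefficient matrices whose normalized scores form the primary estimate p̃_e(y|ϕ(𝐱)).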
Alternatively, we can also consider directly using square loss for classifying the features and estimating the conditional probability <cit.>. We then calibrate p̃_e(y|ϕ(𝐱)) with the empirical accuracy estimated on 𝒯̂(ϕ, e, e') to get the final estimation p̂_e(y|ϕ(𝐱)). Specifically, denote ℬ_0, ℬ_1, …, ℬ_b as b average sized blocks in [0, 1]. We define ℬ_i(ϕ, e, y) = {(𝐱,y) ∈ D_e ∩T̂(ϕ, e, e') | p̃_e(y|ϕ(𝐱)) ∈ℬ_i }. We then estimate p̂_e(y|ϕ(𝐱)) for 𝐱∈ℬ_i(ϕ, e, y) as follows: p̂_e(y|ϕ(𝐱)) := |{(𝐱, y_x) ∈ℬ_i(ϕ, e, y)| y_x = y }|/|ℬ_i(ϕ, e, y)|. The final D̂_cor is then computed with the estimated p̂_e(y|ϕ(𝐱)) and p̂_e'(y|ϕ(𝐱)). §.§ The Model Zoo We follow the model zoo setting of ZooD <cit.> which consists of 35 PTMs having diverse architectures, pre-training methods, and pre-training datasets. A summary of the PTMs can be found in Table <ref>. Dong et al. () divide the models into three groups. In the main paper, we also introduce 3 subsets of models, with results shown in Figure <ref>, <ref>, and <ref>, respectively. Figure <ref> contains results for models of 10 different architectures (CNNs) trained on ImageNet-1K with ERM. The architectures are as follows: ResNet-50, ResNet-152 <cit.>, ResNeXt-50 <cit.>, DenseNet-169, DenseNet-201 <cit.>, Inception v1 <cit.>, Inception v3 <cit.>, MobileNet v2 <cit.>, EfficientNet-B2, EfficientNet-B4 <cit.>. Figure <ref> contains 10 ResNet-50s trained via following pre-training methods: Adversarial Training <cit.>, BYOL <cit.>, MoCo-v2 <cit.>, InsDis <cit.>, PIRL <cit.>, DeepCluster-v2 <cit.>, PCL-v2 <cit.>, SeLa-v2 <cit.>, SwAV <cit.>. Figure <ref> shows the results of 2 different versions of Swin-B <cit.> pre-trained on ImageNet-1K or on both ImageNet-1K and ImageNet-22K <cit.>. § EXPERIMENTS §.§ Experiment Details Datasets. Details of the five datasets in our experiments are introduced as follows. PACS <cit.>: This dataset contains a total of 9,991 images, drawn from four distinct domains (art, cartoons, photos, sketches), and encapsulates seven different classes. VLCS <cit.>: This compilation features 10,729 images from four domains (Caltech101, LabelMe, SUN09, VOC2007), comprising five distinct classes. Office-Home <cit.>: This dataset includes images from four domains (art, clipart, product, real), primarily illustrating common objects in office and home environments. It is composed of a total of 15,588 images distributed across 65 classes. TerraIncognita <cit.>: This dataset encompasses photographs of wildlife captured by camera traps at four different locations. It contains a total of 24,788 images across 10 classes. DomainNet <cit.>: Recognized as one of the most challenging DG datasets, it comprises 586,575 images from six diverse domains (clipart, infographics, painting, quickdraw, real, sketch), spanning 345 classes. Main and auxiliary models. For the main model, we use the Top-1 model ranked by ZooD. For the auxiliary model, we select models that are extreme at one shift metric on that dataset. Table <ref> shows some detailed statistics of selected auxiliary models and the main models. finetuned denotes the averaged OOD accuracy of the linear classifier on the top of the corresponding PTM when finetuned on the dataset. rank (ZooD) denotes the rank of the PTM according to the ZooD evaluation metric. According to the results in Table <ref>, the finetuned performance of the selected auxiliary models are mostly weak. We use a 3 layers MLP as the prediction head on the top of the main model and fine-tune it on the downstream tasks. 
The dimension of the first hidden layer is half of that of the output of the main model. The dimension of the second layer is set to 256 for all the main models except for ResNext-101, for which it is set to 512. The last layer is linear, with output dimension equal to the number of classes. For the reweight auxiliary model, we use a linear layer on top of it and fine-tune it as the classifier. The reweight auxiliary classifier is trained under the following hyperparameter setting: learning rate=1× 10^-5, batch size=16, dropout=0, weight decay=0, steps=1000. For DomainNet, the training steps are increased to 5000. Hyperparameters. Following ZooD, we adopt the leave-one-domain-out cross-validation setup in DomainBed for hyper-parameter selection and run 3 trials. We list all hyperparameters, their default values, and the search range for each hyperparameter in our grid search sweeps, in Table <ref>. All models are optimized using Adam <cit.>. Note that the hyperparameters γ_1 and γ_2 are intrinsic to each PTM and define the bandwidth of the Gaussian kernel in HSIC. They are manually set to ensure the penalty term's initial value falls within the proper range [0, 1], which is influenced by the PTM's feature scale and range. Consequently, the value of γ_1 varies with the choice of the main model. For CLIP-ViT and ResNext-101, γ_1 is set to 0.1, while for Swin-B, it is set to 0.5. Empirically, we observe that results are not very sensitive to small deviations (0.25) from these chosen values. §.§ Detailed Results We include some detailed results of the experiments on the DomainBed benchmark. Table <ref> shows the classification accuracy on Terra-Incognita. Table <ref> shows the classification accuracy on VLCS. It shows that our proposed scheme outperforms ZooD on each target domain in this dataset.
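Finally, the calibration step described in the shift-metrics appendix above (binning the primary probabilities and replacing each score by the in-bin empirical accuracy) amounts to per-class histogram binning; a short sketch, with the bin count and names chosen by us:

```python
import numpy as np

def bin_calibrate(p_tilde, labels, target_class, n_bins=10):
    # p_tilde: primary estimates p~_e(y|phi(x)) for class y = target_class on the
    # overlap set; labels: true labels. Each score is replaced by the fraction of
    # samples in its probability bin whose true label equals target_class.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bins = np.clip(np.digitize(p_tilde, edges) - 1, 0, n_bins - 1)
    calibrated = np.zeros_like(p_tilde)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            calibrated[mask] = np.mean(labels[mask] == target_class)
    return calibrated
```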
http://arxiv.org/abs/2306.08425v1
20230614103742
Fine structures inside the PreLie operad revisited
[ "Vladimir Dotsenko" ]
math.KT
[ "math.KT", "math.CT" ]
Institut de Recherche Mathématique Avancée, UMR 7501, Université de Strasbourg et CNRS, 7 rue René-Descartes, 67000 Strasbourg CEDEX, France [email protected] We prove the conjecture of Chapoton from 2010 stating that the pre-Lie operad, as a Lie algebra in the symmetric monoidal category of linear species, is freely generated by the free operad on the species of cyclic Lie elements. Fine structures inside the PreLie operad revisited Vladimir Dotsenko ================================================== Recall that a pre-Lie algebra is a vector space with a binary operation a,b↦ a◃ b satisfying the identity (a_1◃ a_2)◃ a_3 - a_1◃ (a_2◃ a_3) = (a_1◃ a_3)◃ a_2- a_1◃ (a_3◃ a_2). This is a remarkable algebraic structure appearing in many different areas of mathematics, including category theory, combinatorics, deformation theory, differential geometry, and mathematical physics, to name a few. It seems to have first appeared independently in the work of Gerstenhaber <cit.> and Vinberg <cit.> in 1960s. In particular, in each pre-Lie algebra the “commutator” [a,b]=a◃ b-b◃ a satisfies the Jacobi identity, and hence defines a Lie algebra structure on the same vector space. A fact that could have been discovered by Cayley <cit.> in 1857, but had to wait till the work of Chapoton and Livernet <cit.> in 2000, states that free pre-Lie algebras can be described using rooted trees. For the case of the free pre-Lie algebra on one generator, the corresponding commutator algebra, the Lie algebra of rooted trees, plays an important role in the celebrated construction of Connes and Kreimer <cit.>. A result going back to the work of Foissy <cit.> in 2001 states that the Lie algebra of rooted trees is a free Lie algebra; in fact, it is easy to generalize that proof to the case of the commutator algebra of any free pre-Lie algebra. In 2007, Chapoton <cit.> and Bergeron and Livernet <cit.>, the same result is established in a more functorial way: the pre-Lie operad is free as a Lie algebra in the symmetric monoidal category of linear species. In 2010, Bergeron and Loday proved <cit.> that in any pre-Lie algebra, the symmetrized product a∙ b=a◃ b+b◃ a does not satisfy any identity other than commutativity, which is a striking contrast with the symmetrized product in associative algebras <cit.>. Soon after their work was circulated, Chapoton proposed <cit.> a beautiful conjecture substantially generalizing this result. Specifically, he conjectured that the species of generators that freely generates the pre-Lie operad as a Lie algebra is a free operad on infinitely many generators, and he constructed a conjectural set of generators of this operad, which he identified with the species of cyclic Lie elements (whose components are known as the Whitehouse modules <cit.>); the reader not familiar with cyclic Lie elements should think of them as universal values of invariant bilinear forms on Lie algebras, see <cit.>. In this paper, a simple proof of Chapoton's conjecture is given. All vector spaces and chain complexes are defined over a ground field $̨ of zero characteristic. For necessary information about operads, we refer the reader to <cit.>. We have an isomorphism of algebras ≅∘(). More precisely, if we denote by the subspecies of spanned by all elements of the form ℓ∙ℓ', where ℓ,ℓ' are Lie monomials, the following statements hold: * we have an isomorphism ≅; * the suboperad generated by is free; * the -subalgebra of generated by is free and coincides with . Let us begin with two simple observations. 
First, there is a surjective map ∘()↠. Indeed, each tree in the free operad generated by -∙- and [-,-] can be written as a linear combination of compositions γ(S;T_1,…,T_p) of a tree tensor S all whose vertices are labelled by [-,-] with tree tensors T_1,…, T_p each of which is either the trivial tree or has the labelled by -∙-. Furthermore, each such tree tensor T_i can be written as an iterated composition of tree tensors of the same kind for which only the root is labelled by -∙-, and all other vertices are labelled by [-,-]. In the quotient operad , the tree tensors whose vertices are all labelled by [-,-] become Lie monomials, and the statement follows. Second, the surjective map we constructed defines a decreasing filtration F^∙ as follows. A tree tensor with vertices labelled by -∙- and [-,-] belongs to F^p if it is of the form γ(S;T_1,…,T_p) of a tree tensor S all whose vertices are labelled by [-,-] with tree tensors T_1,…, T_p each of which is either the trivial tree or has the labelled by -∙-, so that S∈(k), the total number of labels -∙- in the trees T_i is equal to m, and k+m≥ p. This is a filtration by subspecies, but in fact it is compatible with some extra structures: by a direct inspection, we have [F^p,F^q]∈ F^p+q, and F^p∘_i[-,-]∈ F^p. In other words, it is a filtration by -bimodules. From our first observation, it follows that there is a surjective map ∘(_F)↠_F. To make a key advance in the proof, we shall do a small calculation. Recall that the symmetrised pre-Lie product -∙- and the bracket [-,-] in the operad satisfy the following relation: (a_1∙ a_2)∙ a_3 - a_1∙ (a_2 ∙ a_3) - a_1∙ [a_2, a_3] - [a_1, a_2]∙ a_3 - 2[a_1, a_3]∙ a_2+ [a_1, a_2∙ a_3] + [a_1∙ a_2, a_3] + [[a_1, a_3], a_2] = 0 , see <cit.>. Denoting this relation r, and computing 1/3(r-2r.(23)), we obtain the relation [a_1, a_2]∙ a_3-a_1∙ [a_2, a_3]= 1/3(-(a_1∙ a_2)∙ a_3- a_1∙ (a_2 ∙ a_3)+2(a_1∙ a_3)∙ a_2- [a_1∙ a_2, a_3] . . + [a_1,a_2∙ a_3] +2 [a_1∙ a_3, a_2] + [a_1,[a_2, a_3]] + [[a_1, a_2], a_3]). Note that the elements on the left, being compositions of the unit element of with elements of the -∙- weight one, belong to F^2, and all elements on the right, being either compositions of the unit element of with elements of the -∙- weight two, or compositions of Lie monomials of arity two with elements of the -∙- weight one, or, finally, Lie monomials of arity three, belong to F^3. This means that in the associated graded -bimodule, the cyclic Lie axiom <cit.> is satisfied, if we pretend that -∙- is a symmetric bilinear form, and not a binary operation. Note that is clearly a right -module, and since F^∙ is a filtration of -bimodules, there is a surjective map of right -modules ↠_F. Consequently, there is a surjective map of -bimodules ∘()↠_F. However, a calculation that was already done in <cit.> shows that these species are of the same dimension in each arity, so this surjection must be an isomorphism. To construct that surjection, we used several different surjections, each of which must be an isomorphism. First, the surjection ↠_F is an isomorphism, and since we work over a field of characteristic zero, this means that we have a species isomorphism ≅. Next, the surjection ∘()↠ which clearly is a composite of two surjections ∘()↠∘↠ , is an isomorphism, and this can only happen if the suboperad generated by is free and the -subalgebra of generated by is free and coincides with , which concludes the proof. 
In our previous work <cit.>, an alternative proof of the result of Bergeron and Loday <cit.> on freeness of the suboperad of generated by -∙- was suggested. However, that proof contains a gap that cannot be fixed by any moderate adjustment: in fact, it is possible to show that there is no quadratic Gröbner basis or convergent rewriting system of the operad with the indicated leading terms. The proof above uses the same presentation of in a conceptually different way, and, in particular, furnishes a new proof of the result of Bergeron and Loday. Let us conclude by outlining an argument on how one can show that the specific embedding ofintodefined by Chapoton in <cit.> generates a free suboperad ofwhich in turn generatesas a Lie algebra. For that, it is useful to revisit the argument of Bergeron and Livernet <cit.> for freeness ofas a Lie algebra. Their argument uses, as an intermediate step, a choice of an order of vertex labels of trees, which essentially means consideringas a shuffle operad, some time before shuffle operads were defined in <cit.>. Bergeron and Livernet show that the shuffle pre-Lie operad is free as a left module over the shuffle Lie operad, and use that to furnish a proof of their main result. They also show that the ordered species of generators of that left module has a structure of a nonsymmetric operad. In fact, it appears that, if one modifies their definitions a little bit, one can find a species of generators that has a structure of a shuffle operad, and, moreover, to show that the latter shuffle operad is free. Specifically, one should identify the shuffle Lie operad with the linear span of decreasing trees (those whose vertex labels decrease on all paths away from the root); then an argument identical to <cit.> shows that the ordered speciesof trees whose root label is smaller than the labels of vertices directly connected to the root generates the shuffle pre-Lie operad as a left module over the shuffle Lie operad. Furthermore, it is easy to see thatis stable under all shuffle compositions and so is a shuffle suboperad of. Finally, if we consider the ordered speciesof all trees whose root is labelled by the next-to-maximal element with a decreasing subtree grafted at it, it appears thatfreely generatesas a shuffle operad. To relate this to Chapoton's embedding ofinto, one can show that, on the level of shuffle operads, Chapoton's formula produces, modulo elements in deeper terms of our filtrationF^∙, all elements of the ordered species. * To examine the embedding of Chapoton in the way proposed in the above paragraph, the following observation is useful. Chapoton first constructs an embedding of into the Hadamard product of species ×, remarking that the image is contained in the Hadamard product ×. Note that ≅, so we are in fact dealing with the Hadamard product of anticyclic operads ×. To construct a map into that product from , it is enough to observe that the canonical map →^!×≅×, which exists for general operad theory reasons <cit.>, is compatible with the cyclic structures on both sides (this can be done either by a direct calculation using the explicit formulas for the anticyclic structures of and from <cit.>, or by noting that the theory of Manin products for operads <cit.> is compatible with cyclic structures), yielding a map →×≅×↪× that is the same as the one constructed by Chapoton <cit.>. 
* It is worth noting that the combinatorics we use in the above paragraph here is also featured in the very prominent way in <cit.>; in particular, the second statement of Corollary of <cit.> corresponds to the grading on arising from the weight grading of the free operad (), confirming the suggestion made by Chapoton in <cit.>. § FUNDING This research was supported by Institut Universitaire de France, by the University of Strasbourg Institute for Advanced Study through the Fellowship USIAS-2021-061 within the French national program “Investment for the future” (IdEx-Unistra), and by the French national research agency project ANR-20-CE40-0016. § ACKNOWLEDGEMENTS I am grateful to Frédéric Chapoton for useful discussions dating back to 2010 and for comments on the first draft of this article, and to Paul Laubie whose ongoing research project made me return to this question. plain Note that the cyclic Lie operadis, in particular, a right-module, so the free operad()generated by the species of the cyclic Lie elements is a right-module as well. A linear species is a contravariant functor from the groupoid of finite sets (the category whose objects are finite sets and whose morphisms are bijections) to the category of vector spaces. A linear speciesis said to be reduced if(∅)=0. The Cauchy product of two linear species_1and_2is defined by the formula (_1·_2)(I):=⊕_I=I_1⊔ I_2_1(I_1)⊗_2(I_2). This product makes linear species into a symmetric monoidal category with the unit which vanishes on nonempty finite sets and is given by$̨ on the empty set. The composition product of linear species is compactly expressed via the Cauchy product as _1∘_2:=⊕_n≥ 0_1({1,…,n})⊗_S̨_n_2^· n, that is, if one unwraps the definitions, (_1∘_2)(I) =⊕_n≥ 0_1({1,…,n})⊗_S̨_n(⊕_I=I_1⊔⋯⊔ I_n_2(I_1)⊗⋯⊗_2(I_n)). This product makes linear species into a monoidal category that is not at all symmetric; its unit vanishes on a finite set I unless |I|=1, and is given by ą on I={a}. Monoids in that category are called symmetric operads. An ordered linear species is a contravariant functor from the groupoid of finite ordered sets (the category whose objects are finite totally ordered sets and whose morphisms are order preserving bijections) to the category of vector spaces. An ordered linear species is said to be reduced if (∅)=0. The shuffle Cauchy product of two ordered linear species _1 and _2 is defined by the same formula as in the symmetric case: (_1·__2)(I):=⊕_I=I_1⊔ I_2_1(I_1)⊗_2(I_2). The divided powers of a reduced ordered linear species are defined by the formula ^(n)(I):=⊕_I=I_1⊔⋯⊔ I_n, I_1,…, I_n∅, min(I_1)<⋯<min(I_n)(I_1)⊗⋯⊗(I_n). Using that notion, the shuffle composition product of two reduced ordered linear species _1 and _2 is defined by the formula _1∘__2:=⊕_n≥ 1_1({1,…,n})⊗_2^(n), that is, if one unwraps the definitions, (_1∘__2)(I)=⊕_n≥ 1_1({1,…,n})⊗(⊕_I=I_1⊔⋯⊔ I_n, I_1,…, I_n∅, min(I_1)<⋯<min(I_n)_2(I_1)⊗⋯⊗_2(I_n)). This product makes ordered linear species into a monoidal category with the unit defined in the same way as for linear species. Monoids in that category are called shuffle operads. Note that there is a forgetful functor ↦^u from all linear species to ordered linear species; it is defined by the formula ^u(I):=(I^u), where I is a finite totally ordered set and I^u is the underlying finite set (with the total order ignored). The reason to consider ordered linear species, shuffle algebras and shuffle operads is explained by the following proposition. 
For any two linear species _1 and _2, we have ordered linear species isomorphisms (_1·_2)^f≅_1^f·__2^f, (_1∘_2)^f≅_1^f∘__2^f. In particular, applying the forgetful functor to a twisted associative algebra (a monoid for the Cauchy product) produces a shuffle algebra, and applying a forgetful functor to a reduced symmetric operad gives a shuffle operad. The forgetful functor sends modules over symmetric operads to modules over shuffle operads, ideals to ideals, free symmetric operads to free shuffle operads, etc. § BETWEEN SPECIES AND ORDERED SPECIES To achieve that goal, we recall that according to <cit.>, the underlying linear species of the operad is the linear species of rooted trees , and the operad structure is defined in a simple combinatorial way: if T_1∈(I), and T_2∈(J), then for i∈ I, the element T_1∘_i T_2 is given by T_1∘_i T_2=∑_fin(T_1,i)→vert(T_2) T_1∘_i^f T_2 . Here in(T_1,i) is the set of incoming edges of the vertex i in T_1 and vert(T_2) is the set of all vertices of T_2; the tree T_1∘_i^f T_2 is obtained by replacing the vertex i of the tree T_1 by the tree T_2, and grafting the subtree corresponding to the incoming edge e of i at the vertex f(e) of T_2. For example, we have @M=3pt@R=5pt@C=5pt *+[o][F-]1@-[dr] *+[o][F-]3@-[dl] *+[o][F-]2 ∘_2 @M=3pt@R=5pt@C=5pt *+[o][F-]a@-[d] +[o][F-]c= @M=3pt@R=5pt@C=5pt *+[o][F-]1@-[dr] *+[o][F-]a@-[d] *+[o][F-]3@-[dl] *+[o][F-]c + @M=3pt@R=5pt@C=5pt *+[o][F-]1@-[dr] *+[o][F-]3@-[dl] *+[o][F-]a@-[d] *+[o][F-]c + @M=3pt@R=5pt@C=5pt *+[o][F-]3@-[d] +[o][F-]1@-[dr] *+[o][F-]a@-[d] *+[o][F-]c + @M=3pt@R=5pt@C=5pt *+[o][F-]1@-[d] +[o][F-]a@-[d] *+[o][F-]3@-[dl] +[o][F-]c . Bergeron and Livernet <cit.> studied the shuffle operad ^u. If we change the order of vertices to the opposite one, their results imply that ^u is a free left ^u-module, with the ordered linear species of generators for which (I) is spanned by all rooted trees with vertices labelled by I for which the root label is larger than all labels of vertices that are directly connected to the root. Making the same change of order, we define, for a tree T∈^u(I), the maximal increasing connected subtree MI(T) is the subtree consisting of all vertices v for which the labels on the unique paths from the root to v increase from the root to v. We also define the weight d(T) as the number of vertices in MI(T) minus one. The weight defines a filtration on ^u: the weight of any element appearing in the formula for the shuffle composition of several elements does not exceed the sum of weights of composed elements. This is very close to the proof of <cit.>. Indeed, it is enough to consider elementary shuffle compositions (corresponding to the “pointed shuffles” of <cit.>) which are maps ∘_i (I)⊗(J)→(K) where I⊔ J=K⊔{i}, and {x∈ I x<i}={x∈ K x<i}. Let T_1∈(I), T_2∈(J), and let us examine the term in the composition of T_1 and T_2 corresponding to fin(T_1,i)→vert(T_2)=J. First, if i is not a vertex of MI(T_1), then it is clear that MI(T_1∘_i^f T_2)=MI(T_1). Suppose that i is a vertex of MI(T_1). To obtain the tree T_1∘_i^f T_2, we should replace the vertex i by the tree T_2 whose all vertices are greater than i by definition of an elementary shuffle composition, and reconnect the incoming edges of the vertex i in T_1 according to the function f. If e is an incoming edge of i for which the label of the other edge is smaller than i, connecting it to a vertex of T_2 will not create an increasing path. 
If e is an incoming edge of i for which the label of its other endpoint is greater than i, connecting it to a vertex of T_2 may create an increasing path, depending on the vertex we connect it to (it has to be a vertex of MI(T_2) whose label is less than the label of the vertex in question). Thus, if the tree MI(T_1) has d_1 vertices and the tree MI(T_2) has d_2 vertices, the tree MI(T_1∘_i^f T_2) has at most d_1+d_2-1 vertices (the union of all of these except i). This shows that d(T_1∘_i^f T_2)≤ d(T_1)+d(T_2), as required. As a consequence, the trees of weight zero, that is, the trees that span the species , form a shuffle suboperad of ^u. Let us prove the main result of this section. Increasing trees freely generate as a shuffle operad.
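To illustrate the notions of maximal increasing connected subtree and weight used above, here is a short, purely illustrative Python sketch (it is not part of the original argument); trees are encoded as parent maps, and all helper names are our own.

# Minimal illustrative sketch (not from the paper): compute the maximal
# increasing connected subtree MI(T) and the weight d(T) of a labelled rooted
# tree, encoded as a parent map {vertex: parent} with parent None for the root.

def maximal_increasing_subtree(parent):
    root = next(v for v, p in parent.items() if p is None)
    children = {v: [] for v in parent}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    # keep exactly the vertices whose labels increase along the path from the root
    mi, stack = {root}, [root]
    while stack:
        v = stack.pop()
        for c in children[v]:
            if c > v:      # the path root -> ... -> v -> c is still increasing
                mi.add(c)
                stack.append(c)
    return mi

def weight(parent):
    # d(T) = (number of vertices of MI(T)) - 1
    return len(maximal_increasing_subtree(parent)) - 1

# The tree with root 2 and leaves 1 and 3: only the vertex 3 extends an
# increasing path from the root, so MI(T) = {2, 3} and d(T) = 1.
T = {2: None, 1: 2, 3: 2}
assert maximal_increasing_subtree(T) == {2, 3}
assert weight(T) == 1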
http://arxiv.org/abs/2306.08981v1
20230615092007
Overcoming the Limitations of Localization Uncertainty: Efficient & Exact Non-Linear Post-Processing and Calibration
[ "Moussa Kassem Sbeyti", "Michelle Karg", "Christian Wirth", "Azarm Nowzad", "Sahin Albayrak" ]
cs.CV
[ "cs.CV" ]
Overcoming the Limitations of Localization Uncertainty M. Kassem Sbeyti et al. Continental AG, Germany {moussa.kassem.sbeyti, michelle.karg, christian.2.wirth, azarm.nowzad}@continental-corporation.com DAI-Labor, Technische Universität Berlin, Germany [email protected] Overcoming the Limitations of Localization Uncertainty: Efficient & Exact Non-Linear Post-Processing and Calibration Moussa Kassem Sbeyti1,2() Michelle Karg1 Christian Wirth1 Azarm Nowzad1 Sahin Albayrak2 July 31, 2023 ===================================================================================================================== Robustly and accurately localizing objects in real-world environments can be challenging due to noisy data, hardware limitations, and the inherent randomness of physical systems. To account for these factors, existing works estimate the aleatoric uncertainty of object detectors by modeling their localization output as a Gaussian distribution 𝒩(μ, σ^2), and training with loss attenuation. We identify three aspects that are unaddressed in the state of the art, but warrant further exploration: (1) the efficient and mathematically sound propagation of 𝒩(μ, σ^2) through non-linear post-processing, (2) the calibration of the predicted uncertainty, and (3) its interpretation. We overcome these limitations by: (1) implementing loss attenuation in EfficientDet, and proposing two deterministic methods for the exact and fast propagation of the output distribution, (2) demonstrating on the KITTI and BDD100K datasets that the predicted uncertainty is miscalibrated, and adapting two calibration methods to the localization task, and (3) investigating the correlation between aleatoric uncertainty and task-relevant error sources. Our contributions are: (1) up to five times faster propagation while increasing localization performance by up to 1%, (2) up to fifteen times smaller expected calibration error, and (3) the predicted uncertainty is found to correlate with occlusion, object distance, detection accuracy, and image quality. § INTRODUCTION Object detectors in safety-critical systems face multiple challenges, including limited sensor resolution, difficult weather conditions, and ambiguous situations <cit.>. These challenges decrease performance regardless of the training frequency, as they induce an inevitable uncertainty called aleatoric uncertainty <cit.>. Therefore, existing works explicitly integrated aleatoric uncertainty into object detectors via loss attenuation <cit.> for varying applications, such as enhancing safety, robustness, and performance <cit.>. This paper prioritizes localization due to the absence of confidence information from the localization head in object detectors, when compared to the scores provided by the classification head. EfficientDet <cit.>, a one-stage anchor-based detector, demonstrates state-of-the-art performance in terms of both accuracy and speed on various benchmark datasets, making it an ideal use-case for this paper. An anchor-based detector predicts anchor-relative offsets, which are subjected to non-linear transformations during post-processing to compute the final object coordinates. These offsets are modeled as distributions to account for uncertainty, which raises the crucial question: How is the output distribution, including the uncertainty, propagated through non-linear functions? Le et al. 
<cit.> is the only work known to us, that considers the propagation of the anchor-relative offsets through non-linearities, and addresses it via sampling from the estimated distribution. However, sampling has the downside of either a high computation time for a large sample size or a reduced accuracy for a small sample size. We therefore develop two novel, fast and exact approaches. The first method is based on normalizing flows, with the main advantage of a universal applicability to many non-linear, arbitrarily complex functions and output distributions. The second method is tailored towards a normal output distribution 𝒩(μ,,σ^2), transformed by an exponential function. It utilizes the properties of the log-normal distribution, and its main advantage is an efficient usage of computational resources. Once the uncertainty is propagated, the focus shifts to assessing its quality: Is the predicted localization uncertainty well-calibrated? Other research on localization uncertainty estimation in object detection typically overlooks its calibration <cit.>. Hence, we introduce different approaches to calibrate it, inspired by calibration for general classification and regression tasks. We select two established methods: calibrating via an auxiliary model, e.g. isotonic regression <cit.>, and factor scaling <cit.>. We extend the first method to coordinate- and class-specific calibration. For the second calibration method, we establish and evaluate various loss functions during the optimization phase of the scaling factor, which directly adjusts the predicted uncertainty by considering its proximity to the residuals. Both methods are further improved by incorporating the object size, where each object's uncertainty is normalized by its width and height, resulting in a balanced calibration of objects of all sizes and aspect ratios. Furthermore, we provide a data selection process for calibration, which allocates all predictions to their ground-truth based on proximity, in contrast to, e.g. thresholding detections based on the classification score. After the localization uncertainty is estimated, propagated and calibrated, its interpretability is required to define potential applications (see <ref>): What correlations exist between the data and the uncertainty? Related works discover that aleatoric uncertainty correlates with occlusion <cit.> and object distance due to sparsity in point clouds <cit.>, but not with detection accuracy <cit.>. We investigate the latter and discover the contrary. We verify and show to which extent aleatoric uncertainty correlates with occlusion and detection performance, and extend the analysis to the object area in an image, i.e. object distance, and the quality of the image cropped around each detection. In summary, the contributions of our work are: * Development of two novel, exact and fast methods for uncertainty propagation through non-linear functions, enabling accurate uncertainty estimation without additional drawbacks. * Development and extension of two calibration methods and a data selection approach for accurate calibration in the context of object localization. * A comprehensive experimental overview of the quality and correlation between aleatoric uncertainty and traceable metrics, which further advances the understanding of aleatoric uncertainty. § BACKGROUND AND RELATED WORK This section presents a concise overview of existing works on aleatoric uncertainty estimation, decoding in object detectors, and calibration for regression. Loss Attenuation. 
A widely adopted approach for estimating aleatoric uncertainty is the sampling-free loss attenuation, which assumes that observation noise is dependent on the input <cit.>. By extending the network output to include both the mean μ and variance σ^2, i.e. modeling it as a Gaussian distribution, and training the network on the negative log-likelihood (NLL), the uncertainty can be learned as a function of the data. Choi et al. <cit.>, Kraus and Dietmayer <cit.> and Feng, Rosenbaum and Dietmayer <cit.> show that loss attenuation enhances the performance of 2D and 3D object detectors. They find that the estimated uncertainty correlates with occlusion <cit.> and object distance based on LiDAR data <cit.>, but it does not correlate with detection accuracy, measured via the intersection over union (IoU) <cit.>. The focus of these works is primarily performance enhancement of object detectors, as they place less emphasis on the reliability and interpretability of the uncertainty estimates. Anchor-Relative Localization. Choi et al. <cit.> and Kraus and Dietmayer <cit.> implement loss attenuation in YOLOv3 <cit.>. Anchor-based object detectors such as YOLOv3 <cit.>, single-shot detector (SSD) <cit.>, and EfficientDet <cit.> divide their final feature maps into a grid. Whereby each grid cell contains a pre-defined set of static bounding boxes known as anchors. During training, the detector learns the offsets for the center, width and height between the pre-defined anchors and the ground truth. In the post-processing, the predicted offsets are decoded based on their corresponding anchors, usually via non-linear functions, such as exponential and sigmoid <cit.>. This transforms them into bounding box coordinates, which are then scaled to the original image size. As introduced in <ref>, Le et al. <cit.> is the only work that considers the non-linearity in the decoding process. They implement loss attenuation in SSD <cit.>. To decode the anchor-relative coordinates along their corresponding variances, they draw samples from the predicted multivariate normal distribution 𝒩(μ, σ^2), decode the samples, then calculate the mean and variance of the decoded values. Other works do not explicitly address the non-linearity in the decoding process, i.a. decode the predicted variance by reversing the encoding equation of the mean <cit.>. Therefore, there is currently no deterministic and exact method available for decoding the values of both μ and σ^2. Regression Uncertainty Calibration. Calibration is crucial after estimating and propagating the uncertainty. Approximate Bayesian approaches such as loss attenuation produce miscalibrated uncertainties <cit.>. Laves et al. <cit.> and Feng et al. <cit.> argue that minimizing the NLL should result in the estimation of σ^2 matching the squared error. However, they and Phan et al. <cit.> find that the prediction of σ^2 is in reality biased, since it is predicted relative to the estimated mean. Kuleshov, Fenner and Ermon <cit.> propose a calibration method, which is guaranteed to calibrate the regression uncertainty given sufficient data. Calibration via an (1) auxiliary model implies training a model, e.g. isotonic regression, on top of a network so that its predicted distribution is calibrated. Its main disadvantage is that it is not suitable for fitting heavy-tailed distributions, and is prone to over-fitting <cit.>. Laves et al. <cit.> propose (2) factor scaling, another approach which consists of scaling the predicted uncertainty using a single scalar value s. 
The latter is optimized using gradient descent with respect to the NLL on the validation dataset. Method (2) is more suitable for embedded applications and requires less data than (1), but it has less calibration potential since one value is applied equally to all the uncertainties. Phan et al. <cit.> adapt method (1) for the localization of single objects, and show that it results in more reliable uncertainty estimates. Part of their future work and Kraus et al.'s <cit.> is to extend it to multiple-object detection, as addressed in this work. § METHOD This section presents our approach to loss attenuation in EfficientDet <cit.> and outlines its decoding process. Furthermore, it introduces our uncertainty propagation methods, and explains our extensions for uncertainty calibration in localization tasks. The proposed methods are model agnostic, i.e. they are identically applicable to any other object detector. §.§ Uncertainty Estimation The loss attenuation introduced by Kendall and Gal <cit.> is defined as follows: ℒ_NN=1/2N∑_i=1^N‖𝐲_i^*-𝐟(𝐱_i)‖^2/σ(𝐱_i)^2+logσ(𝐱_i)^2 with N samples, ground truth 𝐲^*, variance σ(𝐱)^2 and output 𝐟(𝐱) for input 𝐱. The output of the localization head in anchor-based object detectors consists of four variables: the anchor-relative object center coordinates (𝐱̂, ŷ), width ŵ, and height ĥ. For the estimation of the aleatoric uncertainty, the four variables are modeled via a multivariate Gaussian distribution 𝒩(μ, σ^2) with a diagonal covariance approximation. Hence, we extend <ref> for object detection: ℒ_NN=1/8N_pos∑_i=1^N∑_j=1^4((y^*_ij-μ̂_j(𝐱_i))^2/σ̂_j(𝐱_i)^2 +logσ̂_j(𝐱_i)^2)⊙ m_i with N_pos as the number of anchors with assigned ground truth in each batch of input images, and the mask 𝐦 consisting of foreground ground truth boxes 𝐦=[𝐲^*≠0]. These features are specific to the EfficientDet baseline loss. §.§ Uncertainty Propagation The default decoding process of the localization output in EfficientDet is similar to that of other anchor-based object detectors such as SSD <cit.> and YOLO <cit.>. The final coordinates (𝐲, 𝐱, 𝐡 and 𝐰) are computed via two post-processing steps. The first step consists of transforming the anchor-relative center coordinates 𝐱̂, ŷ, width ŵ and height ĥ based on the center coordinates 𝐱_𝐚, 𝐲_𝐚, width 𝐰_𝐚, and height 𝐡_𝐚 of the corresponding anchor: 𝐲 = ŷ⊙𝐡_𝐚 + 𝐲_𝐚 𝐡 = exp(ĥ)⊙𝐡_𝐚 𝐱 = 𝐱̂⊙𝐰_𝐚 + 𝐱_𝐚 𝐰 = exp(ŵ)⊙𝐰_𝐚 <ref> is calculated for each prediction in the five feature maps, resulting in A_cell·(I_H· I_W/128^2+I_H· I_W/64^2+I_H· I_W/32^2+I_H· I_W/16^2+I_H· I_W/8^2) iterations, with A_cell as the number of anchors per grid cell, I_H as the height of the input image and I_W as its width. The decoding process yields coordinates that are relative to the scaled input image rather than the corresponding anchors. As a result, the second step consists of linearly rescaling the decoded coordinates to the original image size. Sampling is the only approach in existing works that enables the transformation of a distribution via a non-linear function such as the exponential in <ref>. However, it either increases computation time or reduces accuracy. We therefore present two novel, exact and fast methods for decoding, via (1) normalizing flows and via (2) properties of the log-normal distribution. (1) Decoding via Normalizing Flows. As explained by Kobyzev, Prince and Brubaker <cit.>, a normalizing flow is a transformation of a probability distribution via a sequence of invertible and differentiable mappings.
The density of a sample in the transformed distribution can be evaluated by computing the original density of the inverse-transformed sample, multiplied by the absolute values of the determinants of the Jacobians for each transformation: p_𝐘(𝐲) =p_𝐙(𝐟(𝐲))|det𝐃𝐟(𝐲)|=p_𝐙(𝐟(𝐲))|det𝐃𝐠(𝐟(𝐲))|^-1 where 𝐙∈ℝ^𝔻 is a random variable with a known and tractable probability density function p_𝐙 : ℝ^𝔻→ℝ, 𝐠 is an invertible function, 𝐟 is the inverse of 𝐠, 𝐘 = 𝐠(𝐙) is a random variable, 𝐃𝐟(𝐲)=∂𝐟/∂𝐲 is the Jacobian of 𝐟 and 𝐃𝐠(𝐳) =∂𝐠/∂𝐳 of 𝐠. The determinant of the Jacobian of 𝐟 captures the scaling and stretching of the space during the transformation, which ensures that the transformed distribution has the same area as the original distribution and is a valid probability density function that integrates to one. In other words, the original density p_𝐙 is pushed forward by the function 𝐠, while the inverse function 𝐟 pushes the data distribution in the opposite normalizing direction, hence the name normalizing flow. <ref> can be reformulated into four chains of transformations on normal distributions. Let 𝐠_1(𝐲),𝐠_2(𝐲) be invertible functions; the transformation of the distributions corresponding to the width ŵ and height ĥ is written as: 𝐠_1(𝐲) = exp(𝐲) 𝐠_2(𝐲) = 𝐜⊙𝐲 𝐡 = 𝐠_2 ∘𝐠_1(ĥ) with 𝐜=𝐡_𝐚 𝐰 = 𝐠_2 ∘𝐠_1(ŵ) with 𝐜=𝐰_𝐚 Each of the transformations in <ref> is implemented with the help of bijectors, which represent differentiable and injective functions. The final coordinates and variances in the scaled image are then directly calculated from the transformed distribution. This method can also be applied for uncertainty propagation in other anchor-based object detectors such as YOLOv3 <cit.>, by including a sigmoid function in the chain of transformations in <ref>. (2) Decoding via Properties of the Log-Normal Distribution. The calculation of the Jacobi matrix and inverse functions is computationally expensive. We therefore introduce a different method that directly calculates the transformed mean and variance for the specific case of a normal distribution and exponential or sigmoid transformation. If 𝐙 follows a normal distribution with mean μ and variance σ^2, then 𝐘=exp(𝐙) follows a log-normal distribution. The density function, mean and standard deviation of a log-normal distribution are calculated as follows <cit.>: f(y;μ,σ^2) = 1/yσ√(2π)exp(-[log(y)-μ]^2/2σ^2) Mean(𝐘) = exp(μ)√(exp(σ^2))=exp(μ+σ^2/2) SD(𝐘)= exp(μ)√(exp(σ^2)(exp(σ^2)-1)) Combining <ref> with <ref> results in the transformed mean and variance for the width and height, as shown in <ref>. Due to the preservation of linearity for Gaussian distributions, <ref> remains unchanged for the mean of the center coordinates. For the variance, the equations undergo modification in compliance with the applicable transformation rules. Log-Normal during Training. <ref> and Mean(𝐘) in <ref> show that a factor σ^2/2 is added to the mean of the width and height during the decoding. This always results in an enlargement of the bounding boxes (σ^2>0, exp(σ^2)>1). However, the model fits the offsets during training based solely on the mean, with no regard to the uncertainty (see <ref>). We propose incorporating the same factor during training, thereby accounting for the exponential transformation in the decoding equations of μ_𝐡 and μ_𝐰. This results in y^*_ij-[μ̂_̂ĵ(𝐱_i) + σ̂_̂ĵ(𝐱_i)^2/2]^2 for j=3,4 in <ref>. 
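To make the closed-form decoding above concrete, here is a minimal numerical sketch (our illustration, not the authors' implementation; variable names and values are placeholders) that decodes a Gaussian anchor-relative height through the exponential using the log-normal moments and cross-checks the result against the sampling baseline.

# Minimal sketch (not the authors' implementation): closed-form decoding of a
# Gaussian anchor-relative height N(mu, sigma^2) through h = exp(h_hat) * h_a,
# cross-checked against the sampling-based estimate used in prior work.
import numpy as np

def decode_height(mu, sigma, h_anchor):
    # exact log-normal moments: Mean(Y) = exp(mu + sigma^2/2), and
    # SD(Y) = exp(mu) * sqrt(exp(sigma^2) * (exp(sigma^2) - 1))
    mean = h_anchor * np.exp(mu + 0.5 * sigma ** 2)
    std = h_anchor * np.exp(mu) * np.sqrt(np.exp(sigma ** 2) * (np.exp(sigma ** 2) - 1.0))
    return mean, std

def decode_height_sampling(mu, sigma, h_anchor, n_samples=1000, seed=0):
    # approximate alternative: draw samples, transform them, take empirical moments
    rng = np.random.default_rng(seed)
    h = h_anchor * np.exp(rng.normal(mu, sigma, size=n_samples))
    return h.mean(), h.std()

mu, sigma, h_anchor = 0.2, 0.1, 32.0                 # illustrative values only
print(decode_height(mu, sigma, h_anchor))            # exact
print(decode_height_sampling(mu, sigma, h_anchor))   # noisy approximation

The center coordinates, in contrast, undergo only a linear map, so their Gaussian mean and variance are scaled and shifted directly.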
§.§ Uncertainty Calibration The main idea behind post-hoc calibration on the validation set is to map the uncertainty to the residuals via a model 𝐫 or a scaling factor s. Extensions to Calibration by a Model. Since the model predicts a multivariate Gaussian distribution with a diagonal covariance matrix (see <ref>), all four coordinates are predicted independently. Furthermore, the performance of the object detector varies from one class to another due to heavy class imbalance, potentially leading to a bias towards one class during calibration while neglecting the others. Therefore, we extend calibration via an auxiliary model <cit.> from calibrating all four uncertainties simultaneously with one isotonic regression model 𝐫 to the calibration of the uncertainty for each coordinate c with a separate model 𝐫_c for c∈[1,4], each ground truth class k with 𝐫_k for k∈[1,n_classes], and each coordinate c plus each ground truth class k with 𝐫_c,k. For an input 𝐱, a ground truth 𝐲 and predicted output 𝐩= 𝐫(𝐱), an isotonic regression model minimizes ∑_i=1^N w_i(y_i-p_i)^2 on N predictions <cit.>, with 𝐰≥0 as the observation weights and p_i≤ p_j for all (i,j) ∈𝔼, where 𝔼={(i,j): x_i≤ x_j}. Extensions to Calibration by a Factor. Laves et al. <cit.> optimize the factor s by minimizing the NLL with gradient descent. However, the log-likelihood objective is highly sensitive towards outliers and mislabeled variables, which is particularly relevant for real-world datasets <cit.>. Since their method only adjusts the predicted uncertainty σ in 𝒩(μ,(s·σ)^2), we propose to directly optimize the scaling factor s based on a distance metric between the predicted uncertainty and the observed residuals, similar to the isotonic regression optimization goal. Therefore, two different loss functions are introduced, the root-mean-square uncertainty error (RMSUE) and the mean absolute uncertainty error (MAUE): RMSUE(s)=√(1/N∑_i=1^N(Δ_i-s·σ_i)^2) MAUE(s)=1/N∑_i=1^N|Δ_i-s·σ_i| with N detections, σ as the predicted uncertainty, and Δ = |𝐲^*-μ| as the residual. Relative Uncertainty. Existing methods are not attuned to localization tasks as they do not account for varying aspect ratios and sizes of bounding boxes. We introduce relative calibration, which consists of calibrating σ and Δ after normalization with the width and height of their corresponding object. This prevents the uncertainty of large objects from negatively influencing the calibration of the uncertainty of smaller objects. Contextualizing the uncertainty with respect to its object also helps mitigate the effect of missing depth information in 2D images, which is crucial for the comprehension of a detector's confidence in real-world detections. Proximity-based Data Sorting. Post-hoc calibration is performed on the validation set. The output of non-maximum suppression (NMS) in object detectors typically involves selecting the top n detections based on their classification score using a manually specified threshold, resulting in the exclusion of certain detections. The excluded detections, in turn, could correspond to actual ground truths, and their removal can therefore impede the calibration of the localization uncertainty. EfficientDet employs soft-NMS, which entails the adjustment and subsequent sorting of its output based on the classification score. Nevertheless, a higher score does not necessarily imply a more accurate detection.
We propose resorting the NMS output based on the nearest-neighbor to the ground truth via a distance metric, such as mean squared error (MSE), hence retaining and correctly allocating all samples in the validation set. § EXPERIMENTS The datasets used in this work are common in autonomous driving research: KITTI <cit.> (all 7 classes, 20% split for validation), and BDD100K <cit.> (all 10 classes, 12.5% official split). The baseline is EfficientDet-D0 <cit.> pre-trained on COCO <cit.> and fine-tuned on the two datasets respectively for 500 epochs with 8 batches and an input image resolution of 1024x512 pixels. The default hyperparameters for EfficientDet-D0 are maintained. To prevent the classification results from affecting the localization output, we use ground truth classes for the per-class calibration and reorder the detections based on MSE (the distance measure used during training, see <ref>) for both calibration and evaluation. §.§ Decoding Methods To showcase the effectiveness of the presented methods, eight metrics are selected. For localization: Average Precision (AP), root-mean-square error (RMSE), mean intersection over union (mIoU) and average time: model exporting time (ET) in seconds (s) and inference time (IT) in milliseconds (ms) per image. For uncertainty: RMSUE, expected calibration error (ECE) <cit.>, negative log-likelihood (NLL) and sharpness (Sharp). Sharpness is the average of the variance, i.e. it relates to the concentration of the predictive distribution <cit.>. Each model is trained three times. The results of sampling and IT are averaged over three trials on the validation set. ET is calculated as the average of three exporting iterations. Time measurements are performed on one GPU (RTX 3090). We compare our normalizing flows (N-FLOW) and log-normal (L-NORM) approaches to the baseline without uncertainty, and to the sampling method (SAMP) with 30, 100 and 1000 samples, inspired by Le et al. <cit.>. We also add false decoding (FALSEDEC), where both μ and σ are decoded via <ref>, as an ablation study to analyze the effect of correct propagation and including the uncertainty in the decoding process of the mean (see <ref>). The N-FLOW method is implemented using the library TensorFlow Probability <cit.>. Baseline vs Uncertainty. Predicting the localization aleatoric uncertainty increases the original 3,876,321 parameters by only 2,327 (0.06%). It reduces the required inference time per image, due to the Tensor Cores in the GPU utilizing the extension of the model output to eight values (mean and variance) <cit.>. The exporting time varies by decoding function. Direct calculation functions (Baseline, FALSEDEC, L-NORM) are faster than distribution-based (N-FLOW, SAMPL) functions, due to lower complexity of operations in the graph. Estimating the uncertainty improves the baseline AP and mIoU by 0.5% on KITTI. On BDD100K, it reduces the AP by 0.3%, but improves both the mIoU and RMSE, as seen in <ref>. Therefore, on both datasets, the localization performance increases. The COCO-style AP is affected by the classification performance, since it is calculated per class and detections are sorted based on their classification score to determine the cumulative true and false positives. This is amplified in the case of BDD100K, due to the larger number of images and their lower fidelity, and by extension, the overall decrease in performance and higher misclassification rate (see <ref>) in comparison to KITTI. Our Methods vs Sampling. 
The only difference between the N-FLOW and the L-NORM approaches is the processing time, due to different mathematical complexity (see <ref>). The main advantage of the N-FLOW approach is the flexibility in changing the distribution or the transformations without manually recalculating the posterior distribution. The latter is especially beneficial, when the transformations render the posterior distribution intractable. <ref> shows that incorrectly propagating the mean and variance (FALSEDEC) reduces performance and the precision of the uncertainty. Compared to our methods, sampling shows on both datasets either a strong reduction in performance (up to 3% AP and mIoU) or a longer inference time per image (up to 5 times slower). However, sampling with 30 samples does offer slightly sharper uncertainties on KITTI, which results in a lower NLL. The opposite is true for BDD100K. This can be retraced to the overestimation of the uncertainty by the model. Therefore, any reduction in the uncertainty leads to an enhancement of its precision. Sampling with a mere 30 samples can result in substantial deviation in both directions, hence the fluctuation between the datasets. Based on the results in <ref>, we select the L-NORM decoding method for the calibration evaluation. §.§ Calibration Evaluation Calibration improves reliability and interpretability of predicted uncertainties by reducing misalignment between the error distribution and the standard Gaussian distribution. This is highly relevant for safety-critical applications, where uncertainty should reflect the true outcome likelihood. Uncertainty Behavior. We notice that EfficientDet predicts a lower σ on the validation set, despite the higher NLL and RMSE compared to the training set, in accordance with Laves et al. <cit.>. We also found that σ^2 is predicted higher than the MSE, hence being miscalibrated. Reasons therefor can be found in the optimization of multiple losses and in uneven data distribution. For both datasets, the model overestimates the uncertainty, with the interval μ±σ containing 99% of the true values instead of the expected 68.27%. Calibration Methods. For factor scaling (FS), gradient descent is applied for 100 optimization epochs with a learning rate of 0.1 on the validation dataset. Optimizing the factor s based on MAUE and RMSUE (see <ref> and <ref>) results in a lower ECE and sharper uncertainties, but a higher NLL (see <ref>). We discover a trade-off between the ECE and NLL, since optimizing s based on the NLL instead results in a higher ECE. For the auxiliary isotonic regression (IR) model, we compare its extensions to per-coordinate (PCo) and per-class (CL) calibration. An illustrative example is featured in <ref>. <ref> shows that per-coordinate calibration outperforms the calibration on all coordinates as expected, since all four normal distributions are assumed to be independent. Per-class calibration further reduces the ECE, RMSUE and NLL, since both datasets contain heavily unbalanced classes with different aspect ratios and localization accuracy. IR outperforms FS for both datasets, because the size of the calibration dataset is large enough for the auxiliary model to train on, as also observed by Feng et al. <cit.>. Relative calibration results in further improvement for IR in both NLL and ECE. Our hypothesis in <ref> is that relative calibration mitigates bias towards larger objects. 
We empirically demonstrate that it effectively achieves this objective by conducting a comparative analysis on small, medium and large objects based on their area as defined by the COCO API <cit.>. Our findings reveal that relative calibration causes a more substantial reduction in ECE on small objects with a 6-fold further decrease when compared to absolute calibration, whereas it is 2-fold on medium objects and 3-fold on large objects. Accordingly, relative isotonic regression per-coordinate and per-class (Rel. IR PCo CL) is selected for further investigations. §.§ Uncertainty Correlation We investigate the correlation between the localization aleatoric uncertainty and performance, object area, i.e. distance in the real world, occlusion level and the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) <cit.>. In the following, σ_obj=1/4∑^4_i=1σ_i is the average of all four uncertainties per object. Uncertainty vs Real-World Metrics. We assume that the distance of an object in the real world is connected to its area in an image in pixels^2 (px^2). For both datasets, <ref> demonstrates that the smaller the object in the image, or the farther away it is, the higher its aleatoric uncertainty. As mentioned in <ref>, aleatoric uncertainty correlates with occlusion. <ref> visualizes the results based on the annotations for occlusion in both datasets. KITTI has three occlusion levels: 0 is fully visible, 1 is occluded less than 50% and 2 is occluded more than 50%. BDD100K has only two: 0 is visible and 1 is occluded. The correlation is present in both datasets, but less for BDD100K. We trace this back to the model predicting double the uncertainty on average for the traffic light and sign classes, as compared to other classes. While 56568 instances of these classes were labeled as visible, only 5040 were labeled as occluded (8%). This, combined with the high uncertainty, negatively impacts the correlation. However, when excluding these two classes, the average uncertainty of visible objects is 34% lower than occluded objects pre-calibration and 40% lower post-calibration. Uncertainty vs Image Quality. The assumption that aleatoric uncertainty correlates with inherent noise in the data is investigated based on the BRISQUE score. For every detection, the score is calculated on the standardized crop around its bounding box in the corresponding image. Standardizing crops involves mean subtraction and division by the standard deviation of pixel values. As <ref> shows, the BRISQUE score positively correlates with the uncertainty, indicating a higher uncertainty for lower image quality. Uncertainty vs Detection Performance. The comparison with IoU and RMSE for both datasets in <ref> shows a correlation with localization accuracy. Calibration via Rel. IR PCo CL strengthens the correlation with all metrics, as presented in <ref>. The calibrated aleatoric uncertainty can be used for thresholding between misdetections (IoU <= threshold) and correct detections (IoU > threshold) for both datasets (see <ref>), since the uncertainty of misdetections is on average higher than the uncertainty of correct detections. This extends to classification, where the uncertainty of false positives of each class is on average also higher than the uncertainty of true positives. Therefore, the localization aleatoric uncertainty allows for the detection of the model prediction errors. 
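As an illustration of how this observation could be operationalized (a sketch under our own assumptions, not the procedure used in the paper), the following Python snippet searches for an uncertainty cutoff that separates misdetections from correct detections.

# Illustration only (not the paper's procedure): choose an uncertainty cutoff
# that separates misdetections (IoU <= iou_thr) from correct detections
# (IoU > iou_thr), exploiting that misdetections carry higher uncertainty on average.
import numpy as np

def uncertainty_cutoff(sigma_obj, iou, iou_thr=0.5):
    # sigma_obj: per-detection mean calibrated uncertainty; iou: IoU with ground truth
    is_mis = iou <= iou_thr
    best_cut, best_bacc = None, -1.0
    for cut in np.unique(sigma_obj):
        flagged = sigma_obj > cut                          # flagged as misdetection
        tpr = flagged[is_mis].mean() if is_mis.any() else 0.0
        tnr = (~flagged[~is_mis]).mean() if (~is_mis).any() else 0.0
        bacc = 0.5 * (tpr + tnr)                           # balanced accuracy
        if bacc > best_bacc:
            best_cut, best_bacc = cut, bacc
    return best_cut, best_bacc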
§ CONCLUSION We provide an object detection pipeline with reliable and interpretable localization uncertainty, by covering the estimation, propagation, calibration, and explanation of aleatoric uncertainty. Our methods enhance the safety and reliability of object detectors without introducing drawbacks. We propose two approaches to propagation, which allow an exact and fast propagation of distributions, along with the corresponding uncertainty, through non-linear functions such as exponential, sigmoid and softmax. We demonstrate the efficacy of our techniques through their implementation in the post-processing of EfficientDet as a use-case. Our propagation methods improve the localization performance of the baseline detector on both the KITTI and BDD100K datasets, and decrease the inference time. They generalize to any model with a tractable output distribution requiring its transformation via invertible and differentiable functions. They particularly alleviate the disadvantages of sampling, namely either low accuracy and reproducibility or high computation time. Furthermore, we extend regression calibration to localization, by considering the relativity of the uncertainty to its bounding box, as well as per-class and per-coordinate calibration with different optimization functions. We also investigate the data selection process for calibration and propose an approach for the allocation of predictions to their corresponding ground truth, which alleviates the disadvantages of manual thresholding. We find a correlation between aleatoric uncertainty and detection accuracy, image quality, object occlusion, and object distance in the real world. We hope the methods and results presented in this paper will encourage wider adoption of uncertainty estimation in different industrial and safety-critical applications, e.g. for safer decision making via more reliable detection monitoring, and more efficient use of labeled data in active learning.
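As a concrete complement to the factor-scaling calibration described in the method section, the following Python sketch (our illustration, not the authors' code) fits the scaling factor s that minimizes the RMSUE on validation data; we note, as our own observation, that this particular objective admits a closed-form least-squares solution, so gradient descent is not strictly needed.

# Illustration only (not the authors' code): factor-scaling calibration that
# minimizes RMSUE(s) = sqrt(mean((delta - s * sigma)^2)) on validation data.
# Minimizing this objective over s is ordinary least squares through the origin,
# so the optimum is available in closed form.
import numpy as np

def fit_scaling_factor(residuals, sigmas):
    # residuals: |y* - mu| per coordinate on the validation set
    # sigmas: the corresponding predicted (uncalibrated) uncertainties
    return float(np.sum(residuals * sigmas) / np.sum(sigmas ** 2))

def rmsue(residuals, sigmas, s=1.0):
    return float(np.sqrt(np.mean((residuals - s * sigmas) ** 2)))

# Usage: s = fit_scaling_factor(delta_val, sigma_val); at test time the
# calibrated uncertainty is s * sigma, exactly as in the factor-scaling scheme.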
http://arxiv.org/abs/2306.05461v1
20230608180002
Revisiting mass estimates of the Milky Way
[ "Yongjun Jiao", "Francois Hammer", "Haifeng Wang", "Jianling Wang", "Yanbin Yang" ]
astro-ph.GA
[ "astro-ph.GA" ]
Proceedings of IAU Symposium 379, P. Bonifacio, M.-R. Cioni, F. Hammer, M. Pawlowski, and S. Taibi, eds. ^1GEPI, Observatoire de Paris, Université PSL, CNRS, Place Jules Janssen, 92195 Meudon, France email: [email protected] ^2CREF, Centro Ricerche Enrico Fermi, Via Panisperna 89A, I-00184 Roma, Italy ^3CAS Key Laboratory of Optical Astronomy, National Astronomical Observatories, Beijing 100101, China We use the rotation curve from Gaia data release (DR) 3 to estimate the mass of the Milky Way. We consider an Einasto density profile to model the dark matter component. We extrapolate and obtain a dynamical mass M=2.75^+3.11_-0.48× 10^11 M_⊙ at 112 kpc. This lower-mass Milky Way is consistent with the significantly declining rotation curve, and can provide new insights into our Galaxy and halo inhabitants. Galaxy: kinematics and dynamics – Galaxy: structure – dark matter – methods: numerical Revisiting mass estimates of the Milky Way Yongjun Jiao^1, François Hammer^1, Haifeng Wang^2, Jianling Wang^1,3 and Yanbin Yang^1 July 31, 2023 ========================================================================================== § INTRODUCTION The rotation curve (RC) is an essential tool for estimating the enclosed mass at different radii. The Milky Way (MW) disc is in relative equilibrium because there has been no major merger since 9 to 10 Gyr ago <cit.>. Its dynamical mass can therefore be well established from its RC. Previous RC studies based on the second Gaia data release (DR2) provided a declining MW RC, with a slope of β = -(1.4± 0.1) km s^-1 kpc^-1 and -(1.7± 0.1) km s^-1 kpc^-1 for <cit.> and <cit.>, respectively. Such declining RCs have led to estimates of the total MW mass near or well below 10^12 M_⊙ <cit.>. Gaia DR3 <cit.> provides a larger sample of stars with new determinations of spectra, radial velocities, chemical abundances, etc. Recently, <cit.> have applied a statistical inversion method introduced by <cit.> to reduce the errors in the distance determination in the Gaia DR3 data-set and provided the MW rotation curve v_C(R) up to about 27.5 kpc. Their RC is in reasonably good agreement with the RC measurements based on Gaia DR2 <cit.> and DR3 <cit.>. However, <cit.> found a systematically higher RC based on Gaia DR3 when compared to <cit.> and <cit.>. <cit.> argue that these systematic differences are caused by different methods of measuring distances. In particular, <cit.> found a steeper slope of the RC, with β = -(2.3± 0.2) km s^-1 kpc^-1, i.e. a more negative value than in previous studies. A more rapidly declining RC provides a more stringent constraint, leading to a lower-mass MW. <cit.> have shown that a Navarro–Frenk–White (NFW) profile could not be reconciled with a low-mass MW and that an Einasto profile gives a larger mass range. Here we focus on the Einasto profile for the fitting of the MW RC. § METHOD The baryonic model of this study is the same model that we used in <cit.>, which was obtained from <cit.> and corresponds to a mass of ∼9× 10^10 M_⊙. Some studies argue that this model overestimates the mass of baryons towards the outer galactic radii <cit.>. We have also tested different baryonic models with smaller masses. Though this has some minor impact on the estimated dark matter (DM) mass, it does not affect the dynamical (total) mass. The choice of baryonic mass therefore does not change our main results.
We apply the χ^2 method to fit the RC and calculate its associated probability, for which we have tested an extremely large parameter space. The χ^2 is calculated by the sum at each disk radius R_i: χ^2=∑_i^N(v_mod,i-v_obs,i)^2/σ_i^2 where v_mod is the modeled circular velocity for the cumulative baryons + DM profiles, v_obs is the observed circular velocity and σ_stat is the statistical uncertainty of the measurement so that σ_stat,i=(σ^+_v_obs,i+σ^-_v_obs,i)/2 to which we have added the systematic uncertainty σ_sys,i to calculate σ_i. Hence the χ^2 probability can be expressed as : Prob( χ^2/2, N-ν/2) = γ( N-ν/2, χ^2/2)/Γ( N-ν/2) where N is the number of independent observed velocity points in the RC and ν is the number of degrees of freedom. § RESULT AND CONCLUSION <cit.> explained in detail that the NFW profile may lead to a possible methodological bias particularly against low MW masses. With the RC from Gaia DR3 <cit.>, we find that the Einasto profile provides better fitting results than the NFW model. In Figure <ref> we present the RC from <cit.>, 3 models (low mass fit, best fit and high mass fit) of Einasto profile and best-fit model of NFW profile. The reduced χ^2 of the best fit model is ∼ 1.5 (for comparison, the reduced χ^2 of best fit NFW profile is ∼ 3.7). <cit.> and <cit.> have also found a similar result. In short, the three-parameter Einasto profile provides a much better fit to the declining RC. In Fig. <ref>. we compare the results of Einasto profile using Gaia DR2 and DR3. We keep the same parameter space as in <cit.> but the estimated MW mass range is much narrower. The best-fit dynamical mass remains consistent, from 2.77× 10^11 M_⊙ to 2.75× 10^11 M_⊙, indicating that the Einasto profile gives a consistent estimated dynamical mass. The low mass fits of RC are mostly constrained by the data at large radii (>17 kpc). <cit.> find a consistent slope beginning at ∼ 13 kpc. The last five points seem contradictory to each other. For now we think that inconsistencies may be due to the lack of a strict systematic uncertainty analysis, like the neglected cross term in Jeans equation and the uncertainty of scale length (see also ). We will present this analysis in an upcoming work, but preliminary results show that the systematic does not lead to significant differences in our main results, including the choices of DM profile, estimated mass range and best fit model. The Magellanic Clouds (MCs), Globular clusters (GCs) and dwarf galaxies are also widely used as tracers to estimate the enclosed mass of the MW at large radii. However, these methods can easily lead to prior selection bias. <cit.> found that by excluding two GCs with large orbit energy, Crater and Pyxis, their estimated MW mass decreased from 5.73^+0.76_-0.58× 10^11 M_⊙ to 5.36^+0.81_-0.68× 10^11 M_⊙ with Einasto profile. In fact, as relatively old halo inhabitants <cit.>, GCs behave consistently with a large MW mass range. We have tested orbits of all GCs with the MW mass from 2 to 15 × 10^11 M_⊙, and find that almost all GCs are gravitationally bound. However, about half of the dwarf galaxies are not bound with a low mass MW (∼ 2× 10^11 M_⊙). If most dwarfs came late in the MW halo <cit.>, it is unrealistic to use them as tracers for estimating MW mass. For example, it is likely that the consideration of Leo I as a bound satellite would lead to a significant overestimate of the MW mass. 
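For illustration, the χ^2 fit described above can be sketched in a few lines of Python; the Einasto parameterization, the gravitational constant in kpc–M_⊙–km/s units, and the example data values below are our own assumptions, not the values used in this work.

# Minimal sketch (not the authors' pipeline): chi^2 fit of an Einasto dark-matter
# halo plus a fixed baryonic contribution to a binned rotation curve. The Einasto
# parameterization and the unit constant are standard conventions assumed here.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

G = 4.30e-6  # gravitational constant in kpc (km/s)^2 / Msun

def einasto_density(r, rho_s, r_s, alpha):
    return rho_s * np.exp(-2.0 / alpha * ((r / r_s) ** alpha - 1.0))

def v_dark(R, rho_s, r_s, alpha):
    # enclosed dark-matter mass by direct numerical integration of the density
    mass, _ = quad(lambda x: 4.0 * np.pi * x ** 2 * einasto_density(x, rho_s, r_s, alpha), 0.0, R)
    return np.sqrt(G * mass / R)

def chi2(params, R, v_obs, sigma, v_baryon):
    rho_s, r_s, alpha = params
    if rho_s <= 0 or r_s <= 0 or alpha <= 0:   # keep the simplex in the physical region
        return 1.0e30
    v_dm = np.array([v_dark(r, rho_s, r_s, alpha) for r in R])
    v_mod = np.sqrt(v_baryon ** 2 + v_dm ** 2)
    return np.sum((v_mod - v_obs) ** 2 / sigma ** 2)

# Made-up illustrative data; a real fit would use the binned Gaia DR3 curve
# and the adopted baryonic rotation curve.
R = np.array([10.0, 15.0, 20.0, 25.0])             # kpc
v_obs = np.array([220.0, 210.0, 195.0, 185.0])     # km/s
sigma = np.array([5.0, 6.0, 8.0, 10.0])            # km/s
v_baryon = np.array([180.0, 150.0, 130.0, 115.0])  # km/s
best = minimize(chi2, x0=[1.0e7, 10.0, 0.5], args=(R, v_obs, sigma, v_baryon),
                method="Nelder-Mead")
print(best.x, best.fun)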
On the other hand, orbital studies of other halo inhabitants are sensitive to the choice of MW mass and need more rigorous analysis. [de Salas et al.(2019)]desalas2019 de Salas, P. F., Malhan, K., Freese, K., et al. 2019, , 2019, 037. [Eilers et al.(2019)]eilers2019Eilers, A.-C., Hogg, D. W., Rix, H.-W., et al. 2019, , 871,120 [Gaia Collaboration et al.(2018)]gaiadr2 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, , 616, A1. [Gaia Collaboration et al.(2022)]gaiadr3 Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2022, arXiv:2208.00211. [Hammer et al.(2007)]hammer2007 Hammer, F., Puech, M., Chemin, L., et al. 2007, , 662, 322. [Hammer et al.(2023)]hammer2023 Hammer, F., Li, H., Mamon, G. A., et al. 2023, , 519, 5059. [Haywood et al.(2018)]haywood2018 Haywood, M., Di Matteo, P., Lehnert, M. D., et al. 2018, , 863, 113. [Jiao et al.(2021)]jiao2021 Jiao, Y., Hammer, F., Wang, J. L., et al. 2021, , 654, A25. [Karukes et al.(2020)]karukes2020 Karukes, E. V., Benito, M., Iocco, F., et al. 2020, , 2020, 033. [Lucy(1974)]lucy1974 Lucy, L. B. 1974, , 79, 745. [Mróz et al.(2019)]mroz2019 Mróz, P., Udalski, A., Skowron, D. M., et al. 2019, , 870, L10. [Ou et al.(2023)]ou2023 Ou, X., Eilers, A.-C., Necib, L., et al. 2023, arXiv:2303.12838. [Pouliasis et al.(2017)]pouliasis2017 Pouliasis, E., Di Matteo, P., & Haywood, M. 2017, , 598, A66. [Sylos Labini et al.(2023)]labini2023 Sylos Labini, F., Chrobáková, Ž., Capuzzo-Dolcetta, R., et al. 2023, , 945, 3. [Wang et al.(2022)]wang2022 Wang, J., Hammer, F., & Yang, Y. 2022, , 510, 2242. [Wang et al.(2023)]wang2023 Wang, H.-F., Chrobáková, Ž., López-Corredoira, M., et al. 2023, , 942, 12. [Zhou et al.(2023)]zhou2023 Zhou, Y., Li, X., Huang, Y., et al. 2023, , 946, 73. Manuel Bayer1. Do I understand correctly that for the modeling of the <cit.> data-set you just take into account the uncertainties for the circular velocity? 2. Did you just use the binned data of <cit.>? 3. What stars do <cit.> employ to trace the rotation curve? Yongjun Jiao1. We consider the neglected term in their Jeans equation, which is a cross-term made by the vertical density gradient of the product of the radial and vertical velocities and add an additional systematic uncertainty of 2% on the velocity scale due to the effect of changing the distance of the Sun to the Galactic center and the scale length, the proper motion of the latter, etc. 2. We directly used their binned data <cit.> with overestimated error bars (see above). Haifeng Wang3. <cit.> use different types of stars in combination with Gaia DR3 line-of-sight velocity, which is based on the Lucy method and mainly in the galactic anticenter direction. The random errors and systematics are acceptable at least for the moment.
http://arxiv.org/abs/2306.03969v2
20230606190430
ECQED: Emotion-Cause Quadruple Extraction in Dialogs
[ "Li Zheng", "Donghong Ji", "Fei Li", "Hao Fei", "Shengqiong Wu", "Jingye Li", "Bobo Li", "Chong Teng" ]
cs.CL
[ "cs.CL" ]
ECQED: Emotion-Cause Quadruple Extraction in Dialogs Li Zheng, Donghong Ji, Fei Li, Hao Fei, Shengqiong Wu, Jingye Li, Bobo Li and Chong Teng Li Zheng, Donghong Ji, Fei Li, Jingye Li, Bobo Li and Chong Teng are with the Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). Hao Fei and Shengqiong Wu are with the Sea-NExT Joint Lab, School of Computing, National University of Singapore, Singapore (e-mail: [email protected]; [email protected]). July 31, 2023 ========================================================================================== The existing emotion-cause pair extraction (ECPE) task, unfortunately, ignores extracting the emotion type and cause type, while this fine-grained meta-information can be practically useful in real-world applications, e.g., chatbots and empathic dialog generation. The current ECPE task is also limited to the scenario of a single text piece, neglecting dialog-level studies that have more realistic value. In this paper, we extend the ECPE task with a broader definition and scenario, presenting a new task, Emotion-Cause Quadruple Extraction in Dialogs (ECQED), which requires detecting emotion-cause utterance pairs together with their emotion and cause types. We present an ECQED model based on a structural and semantic heterogeneous graph as well as a parallel grid tagging scheme, which effectively incorporates the dialog context structure while solving the challenging overlapped-quadruple issue. Via experiments we show that introducing the fine-grained emotion and cause features markedly improves dialog generation. Our proposed ECQED system also shows clear superiority over baselines on both the emotion-cause quadruple and pair extraction tasks, while being highly efficient. emotion detection, dialog analysis, grid tagging, empathic response generation. § INTRODUCTION It is a long-standing consensus in the opinion mining area that emotion and cause are mutually indicative. The Emotion-Cause Pair Extraction (ECPE) task <cit.> has accordingly been proposed to extract emotion clauses and the corresponding cause clauses. By revealing the underlying emotional causes, ECPE shows great potential in realistic applications, such as mental health analysis <cit.> and public opinion monitoring <cit.>. Despite its promising success, the current ECPE research has two vital limitations, which unfortunately hinder its further development and adoption.
From the task scenario perspective, existing studies only consider the single text piece (e.g., news articles or short posts), while another widely-occurred and sophisticated scenario, dialog, has not been studied in depth. Although Emotion Recognition in Conversation (ERC) <cit.> has been previously proposed to identify the speaker emotions in conversational utterance, it suffers from the coarse-level analytics, i.e., without detecting the underlying causes of the emotions <cit.>. From the task definition perspective, ECPE task only coarse-grainedly recognize which clauses (or utterances) are emotions or causes, while ignoring the emotion and cause types, the meta-information that reflects the finer grained opinion status. Essentially, emotion type reveals what specifically the emotion is, and cause type depicts how the cause is aroused. Intuitively, without leveraging such details, the utility of emotion-cause pairs (from ECPE) for the downstream applications (e.g., chatbots and empathic dialog generation) can be much limited <cit.>. As exemplified in Figure <ref>(b), a dialog system manages to accurately generate a strong empathic response g_2 to the input utterance when the emotion and cause type label features are given. Otherwise, an under-emotional response g_1 will be generated without understanding the emotion and its underlying cause due to the absence of type information. Based on the above considerations, we investigate a new task, namely Emotion-Cause Quadruple Extraction in Dialogs (ECQED). Technically, given a dialog with utterances, ECQED sets the goal to extract each emotion-cause pair as well as their corresponding type labels, i.e., a quadruple (emotion utterance, cause utterance, emotion type, cause type), as seen in Figure <ref>(a). We build up the benchmark for ECQED task based on the existing RECCON dataset <cit.>, which includes 1,106 dialogs with human-annotated emotion- cause quadruples. Our ECQED task challenges in three aspects. First, a dialog contains not only dialog text content but also other information such as the speaker identity, the order of utterance and the response relations between utterances. Second, the overlapping phenomena of emotion or cause utterances are much more frequent than those in the news text. As shown in Figure <ref>(a), the cause utterance u_1 is paired with two emotion utterances u_1 and u_2. Third, ECQED task is inherently a multi-stage task, having more steps to complete quadruple extraction than those in ECPE task. Therefore, more errors will be propagated if simply building pipeline models. To this end, we first propose an end-to-end quadruple extraction model equipped with a Structural and Semantic Heterogeneous Graph (SSHG), which effectively captures the complex context information in the dialog. For the overlapping issue, we design an efficient parallel grid tagging module that extracts emotion-cause pairs for each kind of emotion simultaneously. In addition, instead of decomposing the quadruple extraction problem into multiple stages, our model adopts the joint extraction method to avoid error propagation. We evaluate our method on the RECCON dataset. Experimental results indicate that our model significantly outperforms those of 7 state-of-the-art baselines <cit.>, which are adapted from the ECPE task or the RECCON paper. Further ablation experiments demonstrate that each component of our framework is essential. Specifically, the F1 score increases by 9.21% after applying SSHG in our model. 
The experiment for only overlapped quadruple extraction show that the F1 score of our model is 6.35% higher than that of the best baseline. Moreover, emotion-cause quadruple information can facilitate the generation of empathic dialog, and the ROUGE scores are improved by average 5 points. To facilitate related research and reproduce our results, all our codes and metadata will be publicly available after publication. § RELATED WORK §.§ Emotion Recognition in Conversation Emotion recognition in conversation (ERC) has attracted increasing attention in recent years <cit.>. Different from traditional emotion recognition, emotion analysis in conversation is context-dependent and speaker-sensitive. It also needs to consider the speaker's state and dependency as well as a full understanding of the structure of the conversation. Mainstream ERC methods are divided into two categories: sequence-based method <cit.> and graph-based method <cit.>. The graph-based models better model the complicated conversational structures, comparing with the sequence-based methods. However, existing methods <cit.> focus on homogeneous graphs, while failing to utilize the vital interaction between speakers and utterances. How to effectively leverage the heterogeneous graphs to model the dialog context and capture the interactions between speakers and utterances is still under-studied yet. §.§ Emotion-Cause Pair Extraction Considering the fact that emotion and cause indicates each other, Xia and Ding <cit.> proposed the ECPE task based on the Emotional Cause Extraction (ECE) task <cit.>. The model is a two-step pipeline that first extracts emotion and cause clause, and then pairs them. Unfortunately, it leads to drawbacks of error propagation and high computational cost. Later, end-to-end ECPE methods have been widely investigated <cit.>. Wang et al. <cit.> proposed multimodal emotion-cause pair extraction in conversations, and established a two-step pipeline model. Because multimodal emotion analysis is not the research range in this study, we leave it in the future work. However, all these existing ECPE works leave out the significance of emotion and cause types, and the neglect of them would result in quite limited performance as well as the downstream applications. §.§ The RECCON Dataset To the best of our knowledge, the only work that is closely related to our ECQED task was performed by Poria et al. <cit.>. They built the RECCON dataset with human-annotated emotion-cause quadruples. However, only two tasks, “Causal Span Extraction” and “Causal Emotion Entailment” were studies in their paper. The former is to identify the causal span given a target emotion utterance, while the latter is to identify the causal utterance that is similar with the ECPE task <cit.>. By contrast, we in this paper extend their work based on their dataset by building up a strong model to jointly extract emotion-cause quadruples (i.e., <emotion utterance, cause utterance, emotion type, cause type>) and carrying out extensive experiments to show the effectiveness of our model and the necessity of our ECQED task. § METHODOLOGY In this paper, we propose an end-to-end quadruple extraction model to address this new ECQED task. The architecture of our model is illustrated in Figure <ref>, which consists of three components. First, we choose BERT <cit.>, the widely-used pre-trained language model, as encoder to yield contextualized utterance representations from input dialogs. 
Then, we design a SSHG to capture the structural and semantic information in dialogs and the interaction between utterances. Finally, we adopt a grid tagging module to jointly reason the relationships between all pairs of utterances and extract the quadruples in parallel. §.§ Task Definition Let U={u_1, u_2, ..., u_N} be a dialog, where N is the number of utterances. Each utterance u_i={w_i_1,…,w_i_M} is a sequence of words of length M. The goal of our task is to extract a set of quadruples in U: P = {...,(u_e_j,u_c_j,t_e_j,t_c_j),...} where j means the j-th quadruple, u_e_j, u_c_j, t_e_j, t_c_j denote the emotion utterance, cause utterance, emotion type and cause type, respectively. §.§ Utterance Encoding BERT <cit.> has achieved state-of-the-art performance across a variety of NLP tasks <cit.> due to its powerful text representation capacity. Thus, we take BERT as the underlying encoder to yield contextualized utterance representations. Concretely, for a given dialog U={u_1, u_2, ..., u_N}, where the i-th utterance u_i={w_i_1, w_i_2, …, w_i_M}, we insert a [CLS] token before each utterance and attach a [SEP] token to it, thus obtaining u_i = ([CLS], w_i_1 , w_i_2 ,..., w_i_M , [SEP]). Then we concatenate them together as the input of BERT to generate contextualized token representations, in which we take the representation of [CLS] token in each utterance u_i as its utterance representation. After that, we obtain all the utterance representations X = {x_1, x_2,.., x_N}. §.§ Structural and Semantic Heterogeneous Graph We design a structural and semantic heterogeneous graph to capture speaker, utterance and dialog features as well as the interaction between them. Formally, we employ three distinct types of nodes in the graph: * Utterance nodes summarize the information for each utterance and their context, which are obtained from the utterance encoder. * Dialog nodes generalize the overall information of the dialog, which are initialized by averaging the representation of all utterance embeddings. * Speaker nodes represent the personality features of the speaker in the dialog. These nodes are obtained from pre-defined embeddings. The graph also has four types of edges: * Dialog-Utterance edges connect the dialog nodes and utterance nodes, so that local and global context information can be exchanged. * Speaker-Utterance edges connect the spe-aker nodes and utterance nodes issued by the speaker, in order to model both the intra- and inter-speaker semantic interactions. * Utterance-Utterance edges connect the adjacent utterances in the chronological order. * Self-loop edges connect to the node itself, because emotion and cause utterances can be the same one in our task. Motivated by R-GCN <cit.>, we apply a 2-layer graph convolution network to aggregate each node feature from its neighbors. The graph convolution operation is formulated as: h^(l+1)_n = ReLU(∑_k∈κ∑_v∈ N_k(n)(W^(l)_kh^(l)_n + b^(l)_k)) where κ are different types of edges, N_k(n) denotes the neighbors of the node n connected with the k^th type of edge, W^(l)_k∈ R^d× d and b^(l)_k∈ R^d are the parameters of the l^th layer. §.§ Parallel Grid Tagging Prediction To extract quadruples, we first construct a grid for each emotion type and use four tags 𝕋={H, I, N, S} for each grid to represent the cause type for the emotion, which are explained as below: * Hybrid: the cause lies in the joint influence of the speaker himself and other speakers. * Inter-personal: the cause is in the utterance issued by another speaker. 
* No-context: the cause exists in the utterance itself. * Self-contagion: the cause is influenced by the speaker themselves. To further illustrate the process of grid tagging prediction, we present in Figure <ref> the grid-tagging module corresponding to the example in Figure <ref>. Owing to the nature of dialogs, a current utterance is only affected by preceding utterances or by itself. Therefore, the cause utterance appears no later than the emotion utterance, which means that we can use an upper triangular grid. For example, as seen in Figure <ref>, in the grid corresponding to the emotion “sadness”, the tag S indicates that u_4 and u_6 form an emotion-cause pair whose emotion type is “sadness” and whose cause type is “self-contagion”. Since the goal of our framework is to predict the relations (tags) between utterance pairs, it is crucial to generate a high-quality representation of the utterance-pair grid. We collect the representations V_ij of all utterance pairs (u_i,u_j) into a 3-dimensional tensor V∈ R^N× N× d_h. Inspired by Lee et al. <cit.>, we adopt Conditional Layer Normalization (CLN) to generate V_ij: V_ij= CLN(h_i, h_j) = γ_ij⊙ N(h_j) + λ_ij where N(h_j)=(h_j - μ)/σ, the gain γ_ij = W_α h_i + b_α and the bias λ_ij = W_β h_i + b_β are generated from h_i, and μ and σ are the mean and standard deviation of the elements of h_j. MLP Predictor. Based on the utterance-pair grid representation V, we design a multi-layer perceptron (MLP) to calculate the relation (tag) scores for the utterance pair (u_i,u_j):[For simplicity, we only show the process of calculating tag scores for one emotion grid.] y'_ij = MLP(V_ij) where y'_ij∈ R^|𝕋| are the scores of the four pre-defined tags in 𝕋. Prior work <cit.> has shown that an MLP predictor can be enhanced by combining it with a biaffine predictor, and we follow this practice. Biaffine Predictor. The inputs of the biaffine predictor are the outputs of the structural and semantic graph, i.e., H = {h_1, h_2, ..., h_N}∈ R^N× d_h. Given a pair of utterance representations h_i and h_j, we map them to an emotion-space representation e_i and a cause-space representation c_j with two MLPs, respectively. Then, a biaffine classifier <cit.> is used to calculate the relation scores for the utterance pair (u_i,u_j): y”_ij = e_i^TUc_j + W[e_i;c_j] + b where U, W and b are trainable parameters. The final relation probabilities y_ij for the utterance pair (u_i,u_j) are calculated by combining the scores from the biaffine and MLP predictors: y_ij = Softmax(y'_ij + y”_ij) §.§ Model Training The training loss is defined as the cross-entropy between the ground-truth tag distribution ŷ_ij and the predicted tag distribution y_ij for each utterance pair (u_i,u_j) in each emotion grid e: ℒ = -1/N^2∑_e∈ E∑_i=1^N∑_j=1^N∑_k=1^|𝕋|ŷ^(e)_ij_klog(y^(e)_ij_k) where E represents the set of emotion grids. § EXPERIMENTS §.§ Dataset, Implementation Details and Evaluation Metrics We evaluate our model on the RECCON dataset <cit.>. The statistics of the dataset are shown in Tables <ref> and <ref>. As seen, it consists of 1,106 dialogs and 11,104 utterances. Each dialog contains multiple emotion-cause quadruples; the average dialog length is 10 utterances, and the average number of emotion-cause quadruples per dialog is 8. The dataset split follows the same setting as the RECCON paper, with a train:val:test ratio of 7:1:2. In Figure <ref>, we show the statistics of the ratios of overlapped quadruples and cross-utterance quadruples.
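Before analyzing these statistics, the prediction head defined above can be summarized in a compact sketch (hypothetical PyTorch: layer sizes, the use of a single grid rather than one grid per emotion type, and all tensor shapes are simplifying assumptions, not the exact configuration used in our experiments):

```python
import torch
import torch.nn as nn

class GridTaggingHead(nn.Module):
    """Sketch of the pair-grid scorer: CLN grid representation + MLP and biaffine scores."""
    def __init__(self, d=768, n_tags=4):
        super().__init__()
        self.gamma = nn.Linear(d, d)   # CLN gain, generated from h_i
        self.lam = nn.Linear(d, d)     # CLN bias, generated from h_i
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, n_tags))
        self.emo = nn.Linear(d, d)     # emotion-space projection e_i
        self.cau = nn.Linear(d, d)     # cause-space projection c_j
        self.U = nn.Parameter(torch.zeros(d, n_tags, d))
        self.W = nn.Linear(2 * d, n_tags)

    def forward(self, H):              # H: (N, d) utterance representations from the SSHG
        N, d = H.shape
        hi = H.unsqueeze(1).expand(N, N, d)
        hj = H.unsqueeze(0).expand(N, N, d)
        # Conditional Layer Normalization: V_ij = gamma(h_i) * N(h_j) + lambda(h_i)
        hj_norm = (hj - hj.mean(-1, keepdim=True)) / (hj.std(-1, keepdim=True) + 1e-6)
        V = self.gamma(hi) * hj_norm + self.lam(hi)
        y_mlp = self.mlp(V)            # (N, N, n_tags) MLP scores
        # Biaffine scores: y''_ij = e_i^T U c_j + W[e_i; c_j] + b
        e, c = self.emo(H), self.cau(H)
        y_bi = torch.einsum("ia,atb,jb->ijt", e, self.U, c)
        y_bi = y_bi + self.W(torch.cat([e.unsqueeze(1).expand(N, N, d),
                                        c.unsqueeze(0).expand(N, N, d)], dim=-1))
        return torch.softmax(y_mlp + y_bi, dim=-1)   # tag distribution per utterance pair

head = GridTaggingHead()
probs = head(torch.randn(6, 768))      # a 6-utterance dialog -> (6, 6, 4) tag probabilities
# Training would apply cross-entropy against the gold H/I/N/S tags of the upper-triangular grid.
```

We now return to these statistics.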
As shown in Figure <ref>(a), we can observe that 93.3% of the dialogs contain overlapped quadruples, which is a much higher percentage than in the ECPE dataset <cit.>. As illustrated in Figure <ref>(b), a large proportion of the quadruples in our dataset are not adjacent (36% have a distance>1), and for 11% of the quadruples the distance is more than three utterances. Intuitively, the farther apart the emotion utterance and the cause utterance of a quadruple are, the more difficult they are to extract. Therefore, considering the high ratios of overlapped and cross-utterance quadruples, our task is more challenging than related tasks such as ECPE. In Table <ref>, we summarize the hyper-parameter settings of our model and experiments. During training, we choose the Adam <cit.> optimizer with a learning rate of 2e-5 for BERT <cit.> and 1e-5 for the other modules. We train for 50 epochs with a dropout rate of 0.2 and a batch size of 2. Moreover, the hidden size d_h is set to 768 to match BERT, and the number of GCN layers is set to 2. In terms of evaluation metrics, we follow previous ECPE works <cit.> and use precision, recall, and F1 score. All reported scores are averaged over five runs with different random seeds. §.§ Baseline Systems Since neither the RECCON paper nor other prior work has implemented a model for our ECQED task, we adapt several state-of-the-art ECPE models as baselines, since ECPE shares similarities with our task. Note that ECPE-2D, RANKCP and MLL were also used in the RECCON paper for their tasks. * ECPE-2D: Ding et al. <cit.> proposed a two-dimensional representation scheme to recognize emotion-cause pairs. * TransECPE: Fan et al. <cit.> proposed a transition-based model that transforms the task into a process of constructing a directed graph. * PairGCN: Chen et al. <cit.> designed a graph convolutional network to model three kinds of dependencies between local neighborhood candidate pairs. * RANKCP: Wei et al. <cit.> extracted emotion-cause pairs from a ranking perspective using a graph attention network. * MLL: Ding et al. <cit.> transformed the ECPE task into an emotion-pivoted cause extraction problem within a sliding window. * UTOS: Cheng et al. <cit.> reframed emotion-cause pair extraction as a unified sequence labeling problem. * RSN: Chen et al. <cit.> jointly extracted emotion clauses, cause clauses and emotion-cause pairs through multi-task learning. §.§ Quadruple Extraction Results The experimental results for emotion-cause quadruple extraction in terms of P, R and F1 are given in Table <ref>. The results show that our end-to-end method has clear advantages over the strong baselines on the quadruple extraction task. For example, our system surpasses the best baseline (MLL) by an absolute 4.57% in F1 score. Further analysis shows that this performance gain mainly comes from improved precision: compared with RANKCP, our precision is 4.92% higher, which indicates that our model can extract more correct quadruples. These results show that the ECQED task is solvable and that our approach is a feasible way to handle it. §.§ Pair Extraction Results In Table <ref> we compare our method with existing emotion-cause pair extraction methods to study the generality of our model. To ensure fairness, all models use BERT <cit.> as the encoder and are evaluated on the RECCON dataset.
The emotion-cause pair extraction results show that the F1 score of our model is 69.37%, which is significantly better than those of the other models. Furthermore, in terms of emotion extraction and cause extraction, our model also shows competitive performance, with the highest F1 scores of 91.32% and 85.48%, respectively. In summary, the above experimental results confirm that our model transfers well to these related subtasks, even though it was originally designed for emotion-cause quadruple extraction. §.§ Ablation Study We conduct ablation experiments to further evaluate the contribution of each component. As shown in Table <ref>, no variant can compete with the complete model, suggesting that every component is essential for our task. Specifically, the F1 score drops most heavily without the SSHG (-9.21%), which indicates that it has a significant effect on modeling the complex structural information of dialogs, such as the relationships between context and speakers. To verify the necessity and effectiveness of each node and edge of the heterogeneous graph, we remove each type of node and edge from the graph in turn. The sharp performance drops demonstrate that each node and edge type plays an important role in capturing the speaker, utterance and dialog features as well as the interactions between them. In addition, removing the MLP and biaffine predictors results in a distinct performance decline, which suggests that the cooperation of the two predictors enhances model performance. § ANALYSIS AND DISCUSSION §.§ Do Fine-grained Emotion and Cause Features Really Help Empathic Dialog Generation? Based on the premise that fine-grained emotion and cause features can make generated utterances more empathetic, we set up the following experiments to comprehensively explore the effect of the ECQED task on empathic dialog generation. First, we extract each pair of adjacent utterances (u_i, u_i+1) from the RECCON dataset where the former u_i is the cause utterance and the latter u_i+1 is the emotion utterance. We use these pairs for empathic dialog generation, where the cause utterance serves as the input and the emotion utterance serves as the ground truth for the generated response. We divide the experiments into 4 groups according to the features given to the dialog generation model. 1) CU: only the cause utterance (the former one) is used as input to the dialog generation model. 2) CU+Pair: the cause utterance is input, with a prompt stating that the next utterance is an emotion utterance but without giving the emotion type. 3) CU+ET: the cause utterance is input, with a prompt giving the emotion type of the following utterance. 4) CU+ET+CT: the cause utterance is input, with prompts giving both the cause type and the emotion type. We use GPT-2 <cit.> as our dialog generation model. To measure content quality, we use the ROUGE metric <cit.> to evaluate the fluency and relevance of the generated utterances. We fine-tune BERT <cit.> as an emotion classifier to predict the emotion label of each generated utterance, and use its accuracy as an evaluation metric to verify the empathy of the generated utterances. As shown in Table <ref>, generation performance is best when more fine-grained emotion and cause features are added, such as cause utterances, cause types and emotion types. For example, by incorporating both the cause types and emotion types, the ROUGE scores increase by 5 points on average, i.e., 4) vs. 2).
In addition, the dialog generation model enhanced with the CU+ET+CT features achieves the best accuracy, suggesting that it is able to generate more empathic utterances with correct emotion types. §.§ Special Comparison on Overlapped Quadruple Extraction To examine the impact of overlapped quadruples on model performance, we construct a subset (Overlap) containing only the overlapped quadruples of the test set. In Figure <ref>, we compare our model with two competitive baselines, RANKCP and MLL, on the Overlap and All sets. Our model consistently performs better than MLL and RANKCP at extracting quadruples, whether or not they are overlapped. Notably, our model outperforms MLL by 6.35% (63.56 vs. 57.21) in F1 score. Moreover, the performance gaps widen further when considering overlapped quadruple extraction. The overall results reveal that our method effectively addresses the problem of overlapped quadruples. §.§ Advantage of Joint Emotion-Cause Quadruple Extraction In this part, we verify the effectiveness of our end-to-end method by designing two pipeline methods to compare against. The first is a "Four-Step" scheme for quadruple extraction, whose framework is shown in Figure <ref>(a): 1) extract emotion utterances and cause utterances; 2) perform emotion-cause pairing and filtering; 3) classify the emotion types of the emotion-cause pairs; 4) classify the cause types of the emotion-cause pairs. In contrast, the second method consists of two steps, as illustrated in Figure <ref>(b): 1) extract the emotion-cause pairs; 2) classify the emotion types and cause types of the extracted emotion-cause pairs. As seen in Table <ref>, our end-to-end method achieves a 16.27% (63.68-47.41) higher F1 score than the four-step method, and an 8.29% (63.68-55.39) higher F1 score than the two-step method. This implies that our end-to-end method avoids error propagation. §.§ Efficiency Study To further demonstrate the advantage of our parallel grid tagging module, we design a model that uses a single grid to decode all kinds of quadruples and compare it with our multi-grid model. The one-grid model is identical to the multi-grid model except for the grid tagging layer. As there is only one grid, its tags are designed as combinations of emotion and cause types; for example, if the emotion type is "surprise" and the cause type is "hybrid", the tag is "SU-H". Besides performance, we also compare the efficiency of the two models by running them in the same hardware environment (an NVIDIA RTX 3090 GPU). As shown in Table <ref>, the F1 score for extracting all quadruples with the multi-grid method is 0.81% higher than that of the one-grid method. In particular, the advantage of the multi-grid method is more pronounced for overlapped quadruples, where its F1 score is 2.67% higher than that of the one-grid method. Apart from model performance, the inference speed of the multi-grid method is about 1.35 times that of the one-grid method, which demonstrates the efficiency of our parallel grid tagging module. §.§ Case Analysis We perform a case study on the ECQED task to better understand the capacity of our proposed model. Specifically, in Table <ref>, we show predictions for two instances from the test set. Our complete model correctly extracts all quadruples in these instances. By contrast, without the SSHG, the model incorrectly predicts the emotion or cause types.
One possible reason, revealed by our analysis, is the lack of the ability to reason jointly over the context and the speakers' states. Meanwhile, when parallel prediction is not employed, only one of two overlapped quadruples is extracted; for example, of (u_1,u_1, SU, N) and (u_2, u_1, SA, I) in the first example, only the latter is extracted. There is also a pairing error in each example, i.e., (u_5, u_4, SA, I) and (u_3, u_2, AG, I). In summary, the above analyses indicate that the SSHG plays a crucial role in modeling sophisticated dialog structures and semantics. Moreover, the parallel prediction method is indispensable for solving the problem of overlapped quadruple extraction. § CONCLUSION In this paper, we propose a novel task, Emotion-Cause Quadruple Extraction in Dialogs (ECQED), which detects the fine-grained quadruple: emotion utterance, cause utterance, emotion type, cause type. We then present a model tailored to ECQED, which leverages rich dialog context information and effectively addresses the severe quadruple-overlap and error-propagation problems of this task. The experimental results show that our end-to-end model achieves new state-of-the-art performance on the task, and further ablation studies confirm the efficacy of our model design. Moreover, we show that emotion-cause quadruples can facilitate empathic dialog generation, demonstrating the necessity of this task. Our work pushes forward research on emotion-cause analysis in terms of both scenario (i.e., dialog) and granularity (i.e., quadruple). In future work, we suggest the following directions to extend our study: ▸ Enhancing controllable empathic dialog generation. In this paper, we have used emotion-cause type information as prompts to promote controllable empathic dialog generation and obtained clear improvements. Future research could consider employing type information in more sophisticated ways to further improve empathic dialog generation. ▸ Extracting implicit emotion-cause quadruples. Implicit emotion-cause extraction aims to detect emotion or cause expressions that are conveyed in a latent manner (e.g., u_5 in Figure <ref>), which is a much more challenging task. Although some prior work has investigated implicit sentiment analysis <cit.>, methods that use a sentiment dictionary to distinguish explicit from implicit sentiment expressions may not be suitable for emotions, because emotions are more fine-grained than sentiments. A possible solution is to first build an emotion dictionary to distinguish explicit from implicit emotions and then develop dedicated methods for implicit emotion-cause extraction. § ACKNOWLEDGMENTS This work is supported by the National Natural Science Foundation of China (No. 62176187) and the National Key Research and Development Program of China (No. 2022YFB3103602).
http://arxiv.org/abs/2306.07137v1
20230612141856
Impact of the gas choice and the geometry on the breakdown limits in Micromegas detectors
[ "P. Gasik", "T. Waldmann", "L. Fabbietti", "T. Klemenz", "L. Lautner", "B. Ulukutlu" ]
physics.ins-det
[ "physics.ins-det" ]
Impact of the gas choice and the geometry on the breakdown limits in Micromegas detectors ===================================================================================================================================== § INTRODUCTION The most common MPGD structures, GEM <cit.> and MICROMEGAS <cit.>, can provide high position, time, and energy resolutions, high rate capability, and quite reliable spark protection capabilities. However, experience shows that in real experimental conditions, there is a non-zero probability of spark development. If the charge carrier density in the amplification region reaches the critical charge limit, the avalanche transitions to a streamer that ends up in a spark. The minimum avalanche size necessary to trigger a spark varies between 10^6 e and 10^7 e and concerns all MPGD-type detectors (see <cit.>). Systematic studies revealed that sparks usually appear in a narrow region of the amplification gap, where the electric field lines are parallel to each other and no quenching by the electric field reduction is possible. When a streamer reaches the cathode of the amplification structure, a full breakdown is observed. The occurrence of spark discharges in a Micromegas detector, and related critical charge limits, have been studied in numerous experimental and simulation works (see <cit.> and <cit.>, respectively). A proper understanding of the fundamental limits of mesh structures may provide important input for designing Micromegas-based detectors. For completeness of our discharge studies, conducted initially with (TH)GEMs and summarised in <cit.>, we have launched a dedicated discharge investigation campaign with several types of Micromegas detectors. The main motivation of the studies is to measure the dependence of stability on the mesh geometry, varying the diameter of its wires (d_wire) and the distance between them (a_wire), thus varying its optical transparency. The studies presented in <cit.> show a clear dependency on the wire geometry, pointing to thinner wires and smaller Micromegas cells as having higher stability. The conclusions are based on the argument of amplification field uniformity, which is supposed to be better in meshes with thinner wires and smaller cells <cit.>. On the other hand, Finite Element Method calculations of the electric field obtained with different meshes, discussed in <cit.>, show that the peak-to-average value of the electric field is rather constant at ∼3, or even slightly drops for thicker wires and larger cells (although it increases when, for the same wire diameter, only the distance a_wire is enlarged). Also, the peak values are higher for thinner wires and increase with a larger wire pitch. From the electric field considerations, assuming discharges develop close to the high-field regions, thicker wires and less dense meshes would be preferable. This is also a conclusion presented in <cit.>, supported by the streamer development simulations in meshes with d_wire=18 µm, a_wire=45 µm, and d_wire=28 µm, a_wire=50 µm. This result is in contradiction with the measurements presented in <cit.>, where meshes of the same geometry show the opposite stability behavior. § MEASUREMENTS In order to further understand the main factors responsible for the discharge behavior of a mesh-based structure, we have performed measurements where a Micromegas detector was irradiated with highly ionizing alpha particles.
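As a rough orientation, a back-of-the-envelope estimate (with assumed, typical values for the alpha energy and the effective W-value of an Ar-CO_2 mixture; these are not quantities measured in this work) shows how quickly an alpha track drives the avalanche charge towards the critical limit quoted above:

```python
# Back-of-the-envelope estimate with assumed typical values (not measurements from this work):
# how quickly an alpha track pushes the avalanche charge towards the critical charge limit.
E_alpha_eV = 5.5e6        # typical alpha energy from a mixed Pu/Am/Cm source, in eV
W_eV = 27.0               # assumed effective W-value (energy per ion pair) in Ar-CO2
n_primary = E_alpha_eV / W_eV          # ~2e5 primary electrons if fully contained
critical_charge = 1e7                  # upper end of the 1e6-1e7 e critical-charge range

for gain in (10, 50, 100, 500):
    # Crude worst case: all primaries collected into the same amplification region.
    avalanche = n_primary * gain
    print(f"gain {gain:4d}: ~{avalanche:.1e} e "
          f"({'above' if avalanche > critical_charge else 'below'} 1e7 e)")
```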
All detectors used in this study were produced in the CERN EP-DT-DD MPT laboratory with the Micromegas bulk technology. The available meshes are listed in tab:qcrit:mmg, together with their main geometrical parameters. Due to technological reasons, the detectors were produced with different distances z_MMG between the mesh and readout anode. In all presented studies the alpha source (mixed nuclide ^239Pu, ^241Am, and ^244Cm) was placed on top of the drift electrode, shooting toward the amplification structure. The left panel in fig:mmg:transparency presents the discharge probability per alpha particle measured with the MMG2 structure as a function of its effective gain in four gas mixtures. The effective gain is defined as the ratio of the amplification current induced on the readout plane to the primary current measured in the drift gap. Hence, for electron transparencies of <100 the actual (absolute) gain value is larger than the effective one. A clear gas dependency can be observed in the order of discharge curves, pointing to lighter gases as more stable ones, as observed in the studies with GEMs <cit.> and THGEMs <cit.>. For both, Ne- and Ar-based mixtures an anti-correlation can be observed between the stability of the Micromegas and the quencher content of the gas. Similar behavior was observed with three other structures and is in line with previous measurements presented in <cit.>. The results clearly point to the primary charge density having an essential influence on the probability of spark development. Therefore, one can ask whether mesh cells can be considered independent amplification units, similar to GEM holes. Following the GEM and THGEM comparison <cit.>, the discharge probability shall scale with the Micromegas cell size, higher discharge rate is expected for large-a_wire meshes (small LPI value) which presumably collect more primary charges in a single cell than denser meshes, where the primary charge cloud is shared among many cells. It is important to note, however, that the quality and density of the mesh affect multiple aspects of Micromegas performance which should be taken into account in the comparison study. Firstly, high electric fields around thin mesh wires and/or any defects present in the mesh may further reduce sensitivity to the investigated correlations. We believe, however, that highly ionizing alpha particles used in our measurements liberate enough primary ionization to study discharge dependencies at relatively low fields and gains, where the effect of the mesh quality is suppressed. Secondly, the open geometry of a Micromegas structure and poor quenching capabilities of CO_2 imply photon and ion feedback issues, especially at high gains. Thus, the measurements in the low-gain region also allow us to neglect this effect. Finally, the optical transparency of a mesh also influences its electron transparency. This is taken into account by a detailed characterization of each mesh <cit.>. The electron transparency is measured with an ^55Fe source as a function of the drift field and the drift-to-amplification field ratio, as shown in fig:mmg:transparency. The amplification field E_MMG is approximated by U_MMG/z_MMG, where U_MMG is the potential applied to the mesh. The transparency is evaluated by measuring the amplification current while keeping U_MMG constant and normalizing it to the maximum measured current value, where a transparency of 100 can be reliably assumed. A clear correlation with the optical transparency is observed, as expected. 
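(For reference, the two quantities used in this discussion, the effective gain and the electron transparency, follow directly from current measurements; the snippet below illustrates the definitions with made-up numbers and assumed example detector parameters, not the values of the structures studied here.)

```python
# Schematic illustration of the effective-gain and electron-transparency definitions
# (all numerical values below are made up for illustration).
import numpy as np

I_primary_pA = 0.5                      # primary ionization current measured in the drift gap
I_amp_pA = 600.0                        # amplification current induced on the readout plane
gain_eff = I_amp_pA / I_primary_pA      # effective gain = amplification / primary current
print(f"effective gain ~ {gain_eff:.0f}")

# Electron transparency: amplification current at fixed U_MMG, scanned over the drift field
# and normalized to its maximum (where full transparency is assumed).
E_drift = np.array([50, 100, 200, 400, 600, 800])      # drift-field scan points
I_scan = np.array([520, 570, 590, 600, 580, 540])      # measured currents at fixed U_MMG
transparency = I_scan / I_scan.max()
print(dict(zip(E_drift.tolist(), np.round(transparency, 2).tolist())))

# Approximate amplification field in the gap: E_MMG ~ U_MMG / z_MMG
U_MMG_V, z_MMG_um = 500.0, 128.0        # assumed example values
E_MMG_kV_cm = U_MMG_V / (z_MMG_um * 1e-4) / 1000.0
print(f"E_MMG ~ {E_MMG_kV_cm:.0f} kV/cm")
```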
The maximum transmission for the low-transparency meshes (∼39) is observed for a drift field of ∼100, whereas for meshes with an optical transparency of ∼50 the corresponding drift field is E_drift≈600 . For higher E_drift values the collection efficiency drops as more electric field lines, that originate from the drift cathode, end on the metallic mesh. Thus, for the comparison studies of the four meshes and their absolute gain determination the drift field value of 150 is chosen, for which the transparency of ∼100 can be safely assumed. Given the E_drift/E_MMG dependency, shown in the right panel of fig:mmg:transparency, the assumption holds for the investigated U_MMG voltage range of 400–600 V. The direct comparison of all four meshes in terms of gain and discharge probability is shown in fig:qcrit:mmgtyp. Although the differences between discharge stability measured for different meshes are not large, there is a clear order of discharge curves observed, pointing to the meshes with thin wires and small cell sizes as the more stable ones, similar to <cit.>. We explain this result with the primary charge density hypothesis. With a larger wire pitch, more electrons enter a single mesh cell and are multiplied therein. This could explain the discharge curve scaling with the wire pitch and MMG4 being the less stable structure. This observation would also suggest that a Micromegas mesh cell can be treated as an independent amplification unit, similar to a hole in a GEM foil. Of course, a much simpler interpretation can also be considered. For technological reasons, the MMG4 structure features a larger amplification gap than the other three detectors. This means, that in order to achieve the same gain, larger potential needs to be applied to the structure, which is clearly seen in the gain curves plotted in fig:qcrit:mmgtyp. Although the average amplification field in the gap is lower, the peak field around mechanical imperfections or places where two wires of a woven mesh splice may be enhanced, increasing the chance of triggering a discharge. Therefore, a measurement is performed in Ar-based mixtures with the alpha source placed 73 mm away from the mesh surface, more than the range r_α≈4.8 cm of emitted alpha particles <cit.>. Thus, all alpha tracks are confined between the source and the mesh and even the highest primary charge densities obtained around the Bragg peak will be reduced during the electron drift toward the amplification structure. As discussed in  <cit.>, the discharge probability for these distances drops by several orders of magnitude, close to the background level. The results with the Micromegas detectors are presented in fig:qcrit:bragg. Larger gains are obtained in the situation where no alpha particles penetrate the mesh, and primary charge densities are largely reduced by diffusion. It should be noted, that the measurements at =73 mm are available only for the drift field of 400, therefore the actual gain of the thin-wire meshes (MMG1 and MMG2) is ∼20 larger than for MMG3 and MMG4, given the electron transparency from fig:mmg:transparency. At these voltages, the electric field is enhanced at the hot spots and any kind of imperfections shall be manifested by reduced discharge stability. Indeed, this may explain significantly higher discharge rates obtained with the MMG2 structure. 
The latter, on the other hand, is the most stable in the measurements at =31.5 mm, which points towards the conclusion that the charge density effects exceed those related to the potential hot spots, in this configuration. All the other meshes measured at =73 mm show stable behavior up to much higher gains and voltages, without significant dependency on their geometrical parameters. This measurement suggests that the dependency presented in fig:qcrit:mmgtyp and the one reported in <cit.> may be indeed the consequence of the charge sharing between single mesh cells and not the result of the field uniformity. If confirmed, the observation will give valuable input to the design of future Micromegas-based detectors, especially multi-MMG and hybrid stacks. This work is supported by the Deutsche Forschungsgemeinschaft - Sachbeihilfe [DFG FA 898/5-1]. Lukas Lautner acknowledges support from the Wolfgang Gentner Programme of the German Federal Ministry of Education and Research (grant no. 13E18CHA) 99 GEM F. Sauli, GEM: A new concept for electron amplification in gas detectors, https://doi.org/10.1016/S0168-9002(96)01172-2NIM A 386 (1997) 531 MMG Y. Giomataris et al., MICROMEGAS: A High granularity position sensitive gaseous detector for high particle flux environments, https://doi.org/10.1016/0168-9002(96)00175-1NIM A 376 (1996) 29 BachmannSpark S. Bachmann et al., Discharge studies and prevention in the gas electron multiplier (GEM), https://doi.org/10.1016/S0168-9002(01)00931-7NIM A 479 (2002) 294 MathisGEM P. Gasik et al., Charge density as a driving factor of discharge formation in GEM-based detectors, https://doi.org/10.1016/j.nima.2017.07.042NIM A 870 (2017) 116 LautnerTH P. Gasik et al., Systematic investigation of critical charge limits in Thick GEMs, https://doi.org/10.1016/j.nima.2022.167730NIM A 1047 (2023) 167730 Bressan A. Bressan et al., High rate behavior and discharge limits in micro-pattern detectors, https://doi.org/10.1016/S0168-9002(98)01317-5NIM A 424 (1999) 321 BAY2002162 A. Bay et al., Study of sparking in Micromegas chambers, https://doi.org/10.1016/S0168-9002(02)00510-7NIM A 488 (2002) 162 PROCUREUR2010177 S. Procureur et al., A Geant4-based study on the origin of the sparks in a Micromegas detector and estimate of the spark probability with hadron beams, https://doi.org/10.1016/j.nima.2010.05.024NIM A 621 (2010) 177 ALVIGGI2020162359 M. Alviggi et al., Discharge behaviour of resistive Micromegas, https://doi.org/10.1016/j.nima.2019.162359NIM A 958 (2020) 162359 Bhattacharya_2020 D.S. Bhattacharya et al., A numerical investigation on the discharges in Micromegas , https://doi.org/10.1088/1742-6596/1498/1/012032J. Phys. Conf. Ser. 1498 (2020) 012032 Nikolopoulos_2011 K. Nikolopoulos et al., Electron transparency of a Micromegas mesh, https://doi.org/10.1088/1748-0221/6/06/P06011JINST 6 (2011) P06011 Bhattacharya_2014 P. Bhattacharya et al., Performance studies of bulk Micromegas of different design parameters, https://doi.org/10.1088/1748-0221/9/04/C04037JINST 9 (2014) C04037 Krugerphd F. Kruger, Signal Formation Processes in Micromegas Detectors and Quality Control for large size Detector Construction for the ATLAS New Small Wheel, PhD thesis, Julius-Maximilians-Universität Würzburg, 2017, https://doi.org/10.48550/arXiv.1708.01624arXiv:1708.01624 [physics.ins-det]
http://arxiv.org/abs/2306.03253v1
20230605211423
Zero-Shot 3D Shape Correspondence
[ "Ahmed Abdelreheem", "Abdelrahman Eldesokey", "Maks Ovsjanikov", "Peter Wonka" ]
cs.CV
[ "cs.CV" ]
KAUST Saudi Arabia [email protected] KAUST Saudi Arabia [email protected] LIX, École Polytechnique France [email protected] KAUST Saudi Arabia [email protected] [Teaser figure] Our proposed approach can produce 3D shape correspondence maps for strongly non-isometric shapes in a zero-shot manner. Mutual semantic regions are matched and are shown in similar colors, while non-mutual regions that cannot be matched are shown in black. We propose a novel zero-shot approach to computing correspondences between 3D shapes. Existing approaches mainly focus on isometric and near-isometric shape pairs (human vs. human), but less attention has been given to strongly non-isometric and inter-class shape matching (human vs. cow). To this end, we introduce a fully automatic method that exploits the exceptional reasoning capabilities of recent foundation models in language and vision to tackle difficult shape correspondence problems. Our approach comprises multiple stages. First, we classify the 3D shapes in a zero-shot manner by feeding rendered shape views to a language-vision model (BLIP2) to generate a list of class proposals per shape. These proposals are unified into a single class per shape by employing the reasoning capabilities of ChatGPT. Second, we attempt to segment the two shapes in a zero-shot manner, but in contrast to the co-segmentation problem, we do not require a mutual set of semantic regions. Instead, we propose to exploit the in-context learning capabilities of ChatGPT to generate two different sets of semantic regions for each shape and a semantic mapping between them. This enables our approach to match strongly non-isometric shapes with significant differences in geometric structure. Finally, we employ the generated semantic mapping to produce coarse correspondences that can further be refined by the functional maps framework to produce dense point-to-point maps. Our approach, despite its simplicity, produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes. Zero-Shot 3D Shape Correspondence Peter Wonka July 31, 2023 ================================= § INTRODUCTION Shape correspondence is a fundamental task in computer vision. The objective of this task is to match two 3D shapes given some geometric representation (point clouds, meshes) to produce a region-level or point-level mapping. This mapping can be constrained based on the downstream application in terms of deformation type, density, and scope (partial or full). Examples of such downstream applications are shape interpolation, shape morphing, shape anomaly detection, 3D scan alignment, and motion capture. While early approaches for shape correspondence mainly adopted optimization-based algorithms <cit.>, the emergence of deep learning has paved the way for learning-based approaches that implicitly learn suitable representations, which can be used for efficiently solving the matching problem. These approaches can either follow a supervised <cit.>, unsupervised <cit.> or a self-supervised <cit.> paradigm depending on the availability and the diversity of annotated datasets. Supervised approaches are naturally data-dependent, and they require large-scale datasets with different classes of shapes for strong generalization. On the other hand, unsupervised alternatives are class-agnostic and do not require annotated data, but they still lag behind their supervised counterparts in terms of performance <cit.>.
Moreover, the majority of these aforementioned approaches attempt to match near-isometric shapes of the same class, with less focus on non-isometric shapes (human v.s. animal). This is mainly caused by the lack of datasets with inter-class shape pairs, and the complexity of matching dissimilar shapes. To achieve a deeper understanding of 3D shapes and their relationships, it is desirable to develop methods that can generalize well both to isometric and non-isometric shape matching. Recently, several large-scale models were introduced for different modalities such as language (GPT3 <cit.>, Bloom <cit.>), and vision (StableDiffusion <cit.>, DALLE-2 <cit.>). These models are usually referred to as foundation models, and they have a broad knowledge of their domains since they were trained on large amounts of data. There are even ongoing efforts to connect these models to build a bridge between different modalities, such as Visual ChatGPT <cit.>, and MiniGPT-4 <cit.>. Unfortunately, it is still challenging to build similar models for modalities with limited amounts of data, such as 3D shapes. Therefore, a plausible approach is to employ existing models for language and vision to solve problems for other data-limited modalities. Motivated by this, we attempt to exploit existing foundation models to perform zero-shot shape correspondence with no additional training or finetuning. To address this problem, we identified the following three key problems. First, we would like to predict the class of each of the two shapes in question given only their 3D meshes. We achieve this through zero-shot shape classification by feeding rendered views of the two shapes into a language-visual model, BLIP2 <cit.>, to obtain object class proposals. Then, we use ChatGPT to merge these proposals into a single class per object. Second, we produce a semantic region set per shape, and a semantic mapping between the sets by exploiting the in-context learning <cit.> capabilities of ChatGPT. Afterward, we introduce a zero-shot 3D semantic segmentation method based on the recent large-scale models DINO <cit.> and Segment-Anything (SAM) <cit.>, which we denote as SAM-3D. Our method only requires a shape mesh and its corresponding semantic region set as input. Finally, the semantic mapping is used to provide coarse correspondences between the two shapes and a finer map can be produced, if needed, by employing the functional map framework <cit.>. Remarkably, although functional maps are geared towards near-isometric shape pairs, we observe that it is possible to obtain high-quality dense maps given an initialization from SAM-3D, even across some challenging non-isometric shapes. Since we propose a new scheme for solving the shape correspondence problem, we introduce several evaluation metrics to evaluate the performance of different intermediate tasks in our pipeline, such as zero-shot object classification, semantic region generation, and semantic segmentation. We also create a new benchmark that includes strongly non-isometric shape pairs (humans vs. animals) that we denote as (SNIS) in order to test the generalization capabilities of our proposed approach. Experiments on the new benchmark show that our approach, despite being zero-shot, performs very well on non-isometric shape pairs. In summary, we make the following contributions: * We propose a novel solution to 3D shape correspondence that computes results in a zero-shot manner. 
* To the best of our knowledge, we introduce a first zero-shot joint 3D semantic segmentation technique that does not start with a mutual set of semantic regions, and it requires only the shape meshes while exploiting language-vision models to generate shape-specific semantic regions. * We introduce a benchmark for shape correspondence which includes strongly non-isometric shape pairs, as well as evaluation metrics for different stages of the proposed pipeline. § RELATED WORK In this section, we give a brief overview of shape correspondence literature, large-scale models, and 3D semantic segmentation. For shape correspondence, we focus only on relevant deep learning-based approaches, and we refer the reader to <cit.> for a comprehensive survey of earlier registration-based and similarity-based approaches. §.§ Deep Learning-Based Shape Correspondence Convolutional Neural Networks (CNNs) by nature are not directly applicable to non-rigid shapes due to the lack of shift-invariance property in non-Euclidean domains. Wei <cit.> circumvented this by training on depth maps of shapes that are being matched, and produced pixel-wise classification maps for each point in the object. Wu <cit.> generated volumetric representations from depth maps, and used 3D CNNs to process them. However, these methods do not capture all shape deformations, since they treat shapes as Euclidean structures. Alternatively, other approaches tried to generalize Convolutional Neural Networks (CNNs) to non-Euclidean manifolds. Masci <cit.> introduced Geodesic CNNs that allowed constructing local geodesic polar coordinates that are analogous to patches in images. Similarly, Boscaini <cit.> proposed localized spectral CNNs to learn class-specific local descriptors based on a generalization of windowed Fourier transform. This was followed by another generalization of CNNs in <cit.> denoted as Anisotropic CNNs that replaces the conventional convolutions with a projection operator over a set of oriented anisotropic diffusion kernels. All these approaches allowed extracting local descriptors at each point on deformable shapes, and eventually perform shape correspondence by similarity-matching. Another category of approaches includes the matching computations in the learning process, and can find shape correspondences directly from a CNN. Litany <cit.> proposed a structured prediction model in the functional maps space <cit.> that takes in dense point descriptor for the two shapes, and produces a soft correspondence map. Halimi <cit.> transforms <cit.> into an unsupervised setting by replacing the point-wise correspondences with geometric criteria that are optimized, eliminating the need for annotated data. Donati <cit.> proposed an end-to-end pipeline that computes local descriptors from the raw 3D shapes, and employs a regularized functional maps to produce dense point-to-point correspondences. Their method requires less data to train and generalizes better than its supervised counterparts. Li <cit.> employed a regularized contrastive learning approach to learn robust point-wise descriptors that can be used to match shapes. We deviate from all these approaches, and we tackle the problem from a peculiar zero-shot perspective that exploits the emerging large-scale models in language and vision. §.§ Large-Scale Models Several models that are trained on large-scale datasets were introduced recently for different modalities, given the advances in deep architectures design and computational capabilities. 
For instance, Large-Scale Language Models (LLMs) such as T5 <cit.>, BLOOM <cit.>, GPT-3 <cit.>, and InstructGPT <cit.>; vision models StableDiffusion <cit.>, and DALLE-2 <cit.>). LLMs have outstanding capabilities in understanding textual data, but they lack any understanding of natural images. Recent methods try to build cross-modal vision-language models that incorporate the capacities of both models. Visual-ChatGPT <cit.> is one example that combines ChatGPT with many vision foundation models that are managed using a prompt manager that allows better combination and interaction. MiniGPT-4 <cit.> pursues a similar endeavor by attempting to align frozen LLM with a visual encoder through a projection layer. To further improve language coherence, they finetune the model on a well-aligned dataset using a conversational template. BLIP2 <cit.> bootstraps vision-language models through efficient pre-training from off-the-shelf models. We employ these models to achieve zero-shot 3D shape classification and to generate shape-specific semantic regions that can then be utilized to perform zero-shot 3D semantic segmentation to find shape correspondences. §.§ Zero-shot 3D Semantic Segmentation Zero-shot 3D semantic segmentation is an active research topic that attempts to segment volumes or point clouds given some textual labels or descriptors <cit.>. On a different note, there exist many approaches that are based on Neural Radiance Fields (NeRFs) <cit.>, which try to produce full semantic maps of 3D scenes by exploiting 3D density fields from NeRFs <cit.> These two categories of approaches can be combined to perform zero-shot 3D segmentation of volumetric scenes by incorporating zero-shot 2D segmentation networks (<cit.>) into NeRFs <cit.> given some textual labels. SATR <cit.> showed that replacing 2D segmentation networks with 2D object detectors networks yields marginally better results. Inspired by this, we propose to combine the object detector DINO <cit.> with Segment-Anything (SAM) <cit.> to perform zero-shot 3D shape segmentation. § METHOD In zero-shot 3D shape correspondence, the input is a pair of 3D shapes (S^1, S^2), where each shape S^i is represented using triangular meshes with vertices V^i ∈^|V^i| × 3, and faces F^i ∈^|F^i| × 3. The number of vertices/faces in S^1 are not necessarily equal to that of S^2. The desired output is a point-to-point correspondence map C ∈^|V^2| × |V^1| that contains matching scores between vertices of V^2 and V^1. Note that no other information is provided about the shape such as the shape class or semantic region names, and it is desired to perform shape correspondence in a zero-shot manner with no training or fine-tuning. To this end, we propose a new setting to tackle this problem that consists of three modules, as illustrated in Figure <ref>. First, we perform zero-shot 3D object classification on the shapes to obtain an object class per shape using a large-scale visual-language model (<ref>). Afterward, a set of semantic region names per shape is generated using an LLM (<ref>). Next, zero-shot 3D semantic segmentation is performed given the semantic region names (<ref>). Finally, dense correspondence maps can be calculated using functional maps <cit.> (<ref>). We explain these components in more detail in the following sections. §.§ Zero-Shot 3D Shape Classification Initially, we need to identify the classes of the 3D shapes. 
Existing zero-shot 3D shape classification approaches <cit.> can predict a limited set of unseen classes, but do not generalize when the unseen set is unlimited. In our case, there is no prior knowledge about the classes of the shapes, and therefore, existing approaches for zero-shot 3D classification can not be employed. To alleviate this, we propose to employ a language-vision foundation model (BLIP2 <cit.>) that exploits the generalization capabilities of Large-Scale Language Models (LLMs) to reason about 2D images. For each shape in the pair (S^1, S^2), we render k views, where viewpoints are sampled uniformly around the shape for a wide coverage. We set the elevation angles to {-45, 0, 45}, the azimuth angles to {0, 90, 180, 270}, and the radius to 2 length units, where each shape is centered around the origin and scaled to be inside a unit sphere. Then, we feed these k rendered views per shape to BLIP2 <cit.> to obtain k object class proposals. A natural choice would be to perform majority voting on these predictions to get a single class type. However, it is not straightforward to achieve this for textual labels, given that the list of class proposals can include synonyms and adjectives. Figure <ref> shows an example of this situation. Therefore, we exploit the reasoning capabilities of a ChatGPT agent to unify the responses and obtain a single class label per shape. We show examples of the used prompts in the supplementary materials. §.§ Semantic Region Generation and Matching Zero-shot 3D semantic segmentation approaches require a set of semantic labels as an input together with the 3D shape. Our problem is more difficult than traditional co-segmentation, because the two input shapes may not share the same region names. For this reason, We need to obtain two sets (R^1, R^2) of the possible names of semantic regions present in each shape in the input pair (S^1, S^2). Afterward, we attempt to match, whenever possible, between the semantic regions defined in R^1 and R^2, where a single semantic region in R^1 can be matched to one or more semantic regions in R^2. For instance, the legs of a dog can be matched to both the arms and the legs of a person. We exploit the in-context learning <cit.> capabilities of LLMs to achieve this. In-context learning is the process by which a model understands a certain task and provides an adequate response to the required task. LLM models are indeed good in-context learners, allowing them to perform well on a wide range of tasks without explicit fine-tuning. The idea is when asking the model to solve a task given a certain input, we include a few (input, expected output) pairs as examples in the input prompt. We employ ChatGPT for this purpose, and we query two sets of semantic regions (R^1, R^2) for the two shape classes in question, and a mapping between the two sets M^12: R^1 ↔ R^2. Figure <ref> shows an example of such a mapping. We refer the reader to the supplementary material for further details on formulating the textual prompts for ChatGPT. §.§ Zero-Shot 3D Semantic Segmentation After generating the two sets of semantic regions (R^1,R^2), we can use them to perform zero-shot 3D semantic segmentation. Despite the fact that the recent object segmentation model Segment-Anything (SAM) <cit.> is powerful, its text-guided segmentation is still limited. The DINO object detector <cit.> on the other hand, can perform 2D object detection for a large number of classes. 
Therefore, we propose to combine DINO with SAM to perform zero-shot 3D semantic segmentation, and we denote this hybrid approach as SAM-3D. We start by rendering a large number of viewpoints v sampled uniformly to cover the whole shape as illustrated in Figure <ref>. For each rendered view, we feed it to DINO to detect a bounding box for each semantic region in R^i. Afterward, we feed the detected bounding boxes with the rendered viewpoints to SAM to provide segmentation maps for each semantic region. We define a matrix X^i ∈^|F^i| × |R^i| that is initialized with zeros, and we use it to accumulate scores for each face in F^i for each semantic region in R^i. Finally, each face is assigned a label by selecting the highest score in each row j of X^i yielding FL^i: FL^i = max_j X^i[j,:] Once the 3D segmentation vectors FL^i are computed, the segmentation maps are matched between the two shapes using the mapping M^12 to produce a coarse correspondence map. §.§ Zero-Shot Dense Shape Correspondence To produce dense point-to-point shape correspondence, we employ a functional maps-based approach, Bijective and Continuous Iterative Closest Point (BCICP) <cit.>. Specifically, we first use the computed region (segment)-wise correspondences and convert them into functional descriptors using the strategy from <cit.>. This method takes as input region correspondences, decomposes them into individual connected components on each shape, and then formulates functional descriptor constraints based on region preservation. We then use these descriptor constraints (also known as “probe functions” for functional maps <cit.>) and formulate a least squares problem following <cit.>, which aims to compute a functional map matrix 𝐂 that preserves the descriptors, while also promoting orientation preservation as well as orthogonality and Laplacian commutativity of the computed functional map matrix. Finally, we convert the computed functional map 𝐂 to a point-to-point map and iteratively refine it using the BCICP refinement strategy <cit.>. This gives, as output, a dense point-to-point correspondence between each 3D shape pair. Interestingly, while functional maps primarily target near-isometric shapes, our initialization with SAM-3D allows us to generate high-quality dense maps even for challenging non-isometric shape pairs, as will be shown in Figure <ref>. Nevertheless, artifacts in point-to-point maps can occur due to the use of the functional map framework, and we leave the development of a correspondence densification technique that can adapt to strongly non-isometric shapes, as future work. § EXPERIMENTS In this section, we evaluate our proposed approach, and we introduce our new dataset for strongly non-isometric shape matching (SNIS). Moreover, since we propose a new strategy for solving shape correspondence problems, we introduce some evaluation metrics for different components of the pipeline. §.§ Strongly Non-Isometric Shapes Datasets (SNIS) Existing shape correspondence datasets usually encompass a single category of objects (humans or animals). For facilitating the development of approaches that can generalize to shape pairs of different classes, we introduce a new dataset with mixed shape pairs from existing isometric datasets, FAUST <cit.> (humans), SMAL <cit.> (animals), and <cit.> DeformingThings4D (humanoid objects). For each pair of shapes, we annotate 34 keypoint correspondences between the shapes as well as a dense segmentation map. Figure <ref> shows an illustration for these annotations. 
For the FAUST dataset <cit.>, we use the version provided by <cit.> that includes segmentation maps for all 100 available shapes. Our SNIS dataset includes 250 shape pairs, where the first shape is either from FAUST or DeformingThings4D, and the second is from SMAL. The included classes are: {“cougar”, “cow”, “dog”, “fox”, “hippo”, “horse”, “lion”, “person”, “wolf”}. Note that there exist other datasets with other types of objects, such as SHREC09 <cit.>, but we do not include them because they do not provide shape correspondence annotations. However, we demonstrate the generalization capabilities of our approach by showing some qualitative examples from SHREC09 in Section <ref>. §.§ Metrics For the final dense shape correspondence map, we use the standard average geodesic error as in <cit.>. We describe the newly proposed metrics below. Zero-Shot Object Classification Accuracy (ZSClassAcc) To evaluate if the predicted object class in <ref> is accurate, we compare it against the groundtruth shape label. However, since LLM-based approaches can predict several synonyms for each class (e.g., human and person), standard classification accuracy cannot be applied directly. Therefore, we propose to generate a set of synonyms for each object class in the dataset from WordHoard [<https://wordhoard.readthedocs.io>]. Whenever a class prediction matches any of the synonyms, it is counted as a correct prediction. The accuracy is then computed as standard binary classification accuracy. Semantic Regions Generation F1-Score (SRGen-F1) Similar to the previous metric, we evaluate the generated semantic regions as a multi-class classification problem. Regions that are matched with the groundtruth count as True Positives (TP), groundtruth regions that were not predicted count as False Negatives (FN), and predicted regions that do not exist in the ground truth count as False Positives (FP). Finally, a standard F1-Score is calculated as: SRGen-F1 = 2 · TP / (2 · TP + FP + FN) Semantic Regions Prediction IoU (SRIoU) To evaluate the quality of semantic segmentation for the different semantic regions in R^1 and R^2, we calculate the average intersection-over-union over the different regions and shapes as follows: I^12 = (I^1 + I^2)/2, I^i = 1/|R^i| ∑_r=1^|R^i| IoU^i_r where IoU^i_r is the intersection-over-union for region r in shape i compared to the groundtruth segmentation. Keypoint Label Matching Accuracy (KPLabelAcc) Since we provide keypoint annotations in our proposed SNIS dataset, we can evaluate the shape matching accuracy at these keypoints. For each shape i, we define a keypoint index vector P^i ∈^34×1, which stores the vertex indices from V^i of the annotated keypoints. Given the face labels FL^i from (<ref>), we can also generate labels for the vertices that lie on these faces, which we denote as VL^i. The keypoint label matching accuracy is then calculated between VL^1, VL^2, and the groundtruth labels VL^GT as: KPLabelAcc = 1/|P| ∑_j=1^|P| (VL^1[P^1_j] = VL^2[P^2_j]) ∧ (VL^2[P^2_j] = VL^GT[P^GT_j]) where j refers to elements in the vectors and ∧ is the logical AND operator. This metric measures whether the keypoints are matched correctly and assigned the correct label. Next, we compare our proposed approach against some baselines in terms of these metrics. §.§ Zero-Shot 3D Shape Classification Results Baseline We perform majority voting over all classification proposals generated by BLIP2. Table <ref> shows that our proposed approach based on ChatGPT performs significantly better than the standard voting.
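For intuition, string-level majority voting over the BLIP2 proposals can be as simple as the snippet below; because BLIP2 returns synonyms, adjectives and paraphrases of the same concept, the vote is split across surface forms, which is exactly what the ChatGPT-based unification avoids (the proposal list is made up for illustration):

```python
# Naive string-level majority voting over BLIP2 class proposals (made-up proposals, one per view).
from collections import Counter

proposals = ["a person", "human", "a man standing", "person", "human figure",
             "a statue of a person", "human", "person", "a person", "mannequin",
             "human body", "a human"]
print(Counter(p.lower() for p in proposals).most_common(3))
# [('a person', 2), ('human', 2), ('person', 2)] -- the vote is split across synonyms,
# so no single surface form wins, even though the semantic answer is clearly "person".
```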
§.§ Semantic Regions Generation and Matching Results Baseline We use the BLIP2 <cit.> model as a baseline, feeding it the k rendered views from Section <ref> to query semantic regions and a mapping. We report the results in Table <ref> for the generated semantic regions in terms of the SRGen-F1 metric. Surprisingly, our proposed approach that employs ChatGPT outperforms BLIP2 by a huge margin, even though our approach does not have access to the rendered images. This demonstrates the in-context learning capabilities of ChatGPT. We can also compare the semantic mapping M^12 in the same fashion as the semantic regions, by matching the keys and values of the generated mapping with those of the ground-truth mapping. However, we did not succeed in obtaining a valid mapping from BLIP2, so we report only our scores in Table <ref>. §.§ 3D Semantic Segmentation Results We compare against the recently released zero-shot 3D semantic segmentation approach SATR <cit.> that employs only 2D object detectors. SATR differs from our approach mainly in using a 2D object detector instead of SAM. Table <ref> shows that our approach outperforms SATR in terms of the SRIoU metric. We believe this is because SAM provides 2D semantic segmentation masks, which are less error-prone than bounding boxes when transferring segmentation information to 3D space. We also show selected qualitative examples in Figure <ref>, where it is clear that our proposed SAM-3D provides more accurate and better-localized segmentations compared to SATR. §.§ Keypoints Matching Results We compare the keypoint matching results of our proposed approach to those obtained by replacing SAM-3D with SATR <cit.>. Table <ref> shows that our approach outperforms SATR in terms of KPLabelAcc, which demonstrates that it provides better segmentation maps with more accurate labels. §.§ Dense Shape Correspondence Comparison Our approach generally produces sparse shape correspondences, as illustrated in Figure <ref>. However, dense correspondence maps can be produced by using the functional maps framework as described in Section <ref>. We compare the dense shape correspondence maps obtained when using our proposed SAM-3D versus the segmentation model SEG <cit.> to initialize the functional maps framework BCICP <cit.>. Table <ref> shows that SAM-3D outperforms SEG in terms of average geodesic error by a large margin. We also provide a qualitative comparison in Figure <ref>, which shows that our approach provides more accurate correspondences and does not suffer from region discontinuities as SEG does. §.§ Generalization to Other Datasets To examine whether our proposed approach generalizes to other datasets with highly unrelated shapes, we include some objects from the SHREC09 <cit.> and 3D-CoMPaT <cit.> datasets. We form pairs of shapes where the first item is from SNIS, and the second is from SHREC09 or 3D-CoMPaT. Figures <ref> and <ref> show these examples. Our method was able to produce plausible results when matching a human with a chair, where the legs were matched correctly, and the seat was matched to the rest of the human body. A horse was also matched to a tricycle, where the horse limbs were matched to the wheels, the head to the handle, and the tail to the seat. These examples demonstrate the high reasoning capabilities of our approach, even when the shape pairs are strongly unrelated. We provide more detailed qualitative examples in Figure <ref>.
§.§ Impact of Varying Number of Viewpoints We examine the effect of changing the number of rendered views k, and v in the proposed zero-shot 3D object classifier and in SAM-3D, respectively. Table <ref> shows that both the classification and segmentation accuracy improve when increasing the number of views. We do not consider higher values for computational efficiency. § CONCLUSION We proposed a novel zero-shot approach for 3D shape correspondence. Our approach exploited the capabilities of recently emerged language and vision foundation models to match challenging non-isometric shape pairs. There are two key differences in our work to traditional co-segmentation. First, we do not require the region names to be known in advance. Second, our approach does not require a mutual set of semantic regions and generates shape-specific sets, and a semantic mapping between them instead, enabling it to match diverse shape pairs. We also introduced a new dataset for strongly non-isometric shapes (SNIS) as well as evaluation metrics for each stage in our pipeline to facilitate the development and evaluation of future methods. Limitations and Future Work Our approach can match coarse semantic regions such as main body parts (head, torso, and legs). In future work, it would be desirable to produce finer regions in such as eyes, mouth, and hands. This is challenging because the current image-based segmentation models are not able to provide fine-grained segmentation for renderings of meshes without textures. Foundation models in machine learning are helpful for a wide range of tasks. In the future, it would also be interesting to design foundation models that can map 3D shapes, images, and text to a common latent space. Finally, adapting functional maps to handle strongly non-isometric shape pairs, starting from high-quality segment matches, is another interesting problem for future work. ACM-Reference-Format [ Supplementary Materials for: Zero-Shot 3D Shape Correspondence ] § IMPLEMENTATION DETAILS We run all our experiments on a single Nvidia RTX 3090 (24 GB RAM). We use the ChatGPT-3.5 turbo model via OpenAI Python API. We use the Nvidia Kaolin library <cit.> written in PyTorch for rendering shapes. We render the mesh on a black background with 512×512 resolution. We use a bounding box prediction threshold of 3.7 for the DINO <cit.> model. § ZERO-SHOT 3D OBJECT CLASSIFICATION We show in Figure <ref> and Figure <ref> the prompts used by BLIP2 and ChatGPT in our proposed method for zero-shot 3D object classification. In Figure <ref>, we replace the "ANSWERS_LIST" strings with a list of the proposals predicted by the BLIP2 model given the rendered images of an input shape. §.§ GT Synonyms List Figure <ref> shows the collected synonyms we used in our proposed evaluation metrics. § SEMANTIC REGION GENERATION AND MATCHING PROMPTS In Figure <ref>, we show the textual prompt we use for proposing sets of semantic regions R^1, R^2 for the input shapes S^1 and S^2 as discussed in Section <ref>. We replace the "SHAPE_SRC_LABEL" and "SHAPE_TRGT_LABEL" strings with the predicted class label for S^1 and S^2, respectively.
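As a rough illustration of how the classification prompt could be issued through the OpenAI Python API (legacy openai<1.0 interface), the snippet below substitutes the BLIP2 proposals for the "ANSWERS_LIST" placeholder; the prompt wording here is a paraphrase, not the exact prompt shown in the figures.

import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment

def classify_shape(blip2_proposals: list[str]) -> str:
    # Substitute the per-view proposals into the classification prompt
    # (the exact wording in the paper's prompt figure differs; this is a paraphrase).
    prompt = (
        "The following answers describe renderings of a single 3D shape: "
        + ", ".join(blip2_proposals)
        + ". In one word, what object class is the shape?"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()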
http://arxiv.org/abs/2306.04031v1
20230606214900
Certified Reasoning with Language Models
[ "Gabriel Poesia", "Kanishk Gandhi", "Eric Zelikman", "Noah D. Goodman" ]
cs.AI
[ "cs.AI" ]
Certified Reasoning with Language Models Gabriel Poesia, Kanishk Gandhi, Eric Zelikman, Noah D. Goodman July 31, 2023 ======================================================================================================================================== Language models often achieve higher accuracy when reasoning step-by-step in complex tasks. However, their reasoning can be unsound, inconsistent, or rely on undesirable prior assumptions. To tackle these issues, we introduce a class of tools for language models called guides that use state and incremental constraints to guide generation. A guide can be invoked by the model to constrain its own generation to a set of valid statements given by the tool. In turn, the model's choices can change the guide's state. We show how a general system for logical reasoning can be used as a guide, which we call LogicGuide. Given a reasoning problem in natural language, a model can formalize its assumptions for LogicGuide and then guarantee that its reasoning steps are sound. In experiments with the PrOntoQA and ProofWriter reasoning datasets, LogicGuide significantly improves the performance of GPT-3, GPT-3.5 Turbo and LLaMA (accuracy gains up to 35%). LogicGuide also drastically reduces content effects: the interference of prior and current assumptions that both humans and language models have been shown to suffer from. Finally, we explore bootstrapping LLaMA 13B from its own reasoning and find that LogicGuide is critical: by training only on certified self-generated reasoning, LLaMA can self-improve, avoiding learning from its own hallucinations. § INTRODUCTION Consider a language-based autonomous agent tasked with managing a user's calendar and email. The user might want to specify general principles on how the agent should behave, such as “if the email is from any of my managers, you must send me a notification”, and important pieces of information such as “I'm part of the research team”, or “Grace manages the research team”. When the agent analyzes an email and decides what actions to take, we'd like it to respect the given instructions. Doing so might require reasoning: the agent should conclude that an email from Grace warrants a notification, even if that wasn't said explicitly. How should the agent make such conclusions? A Large Language Model (LLM), such as GPT-3 <cit.> or PaLM <cit.>, can in principle take in the given instructions and context, choose actions to take and, before each action, ask itself “is this permitted?” The answer might require making chains of inferences based on the user's input. For this class of problems, LLMs have been shown to dramatically benefit from chain-of-thought reasoning <cit.>. Empirically, allowing LLMs to generate reasoning steps before their answer consistently yields higher accuracy across a wide range of tasks <cit.>. Qualitatively, reasoning steps provide “an interpretable window” into how the model arrived at the answer <cit.>, in contrast to an opaque guess. But much like humans, language models can also produce unsound reasoning: even after correctly interpreting a problem, they can take logically invalid inference steps, or produce a guess at the final answer that is not supported by their own rationale <cit.>.
Moreover, LLMs have also been observed to show human-like content effects in reasoning: their accuracy drops significantly when asked to reason with assumptions that contradict their prior beliefs <cit.>. While a natural language rationale can be highly desirable for purposes of interpretability, it is not enough to ensure a high degree of reliability. How can we avoid unsound, perhaps dangerous, inferences? This question illustrates the central concern that led to the development of formal logic. In a formal system, valid inferences can be generated mechanically with logical deduction rules. Automated reasoning tools, such as Z3 <cit.>, solve formalized problems automatically. This level of reliability also brings its costs. Using these tools requires users to fully formalize their problem, but some desiderata might be impractical to express in logic (e.g., “if the email looks important, you can create a to-do item for me”). Moreover, they do not provide a simple, equivalent “interpretable window” into how conclusions were derived: even the basic inference principles that they employ, such as resolution, are fundamentally hard to describe[ When introducing the first-order resolution principle, Robinson remarks that it is powerful in two senses: it is logically complete, and “in the psychological sense that it condones single inferences which are often beyond the ability of the human to grasp” <cit.>]. If the user's rules were incorrectly formalized, they will have difficulty understanding why the system is misbehaving. This can undermine trust just as invalid inferences can. In this paper, we aim to allow LLMs to rely on trusted formal deductions during generation by building on the recent paradigm of tool use in language models <cit.>. In prior work, LMs invoke external tools by generating special sequences, intercepted by the decoding algorithm. They can generate inputs (e.g., a mathematical operation, or search query) and receive the tool's output as if it was their own generation. We generalize this input-output paradigm to a broader class of LM tools we call guides. When a guide is invoked by the model using a special delimiter, the tool computes a space of valid outputs, as illustrated in <ref>. We then employ constrained decoding <cit.> to ensure the model will incrementally generate one of the valid outputs. Guides thus enable a more declarative interaction between tool and model: the guide declares a set of possible sequences, while the model brings prior expectations used to generate one among them. A guide can maintain state: its next valid outputs might depend on the sequence of choices up to that point. We use this framework to allow language models to locally constrain generation to a set of valid statements determined by an external logical tool. To that end, we leverage the Peano theorem-proving environment <cit.> to construct , which an LM can use to formalize its assumptions, set proof goals and make sound inference steps. The model can intersperse formal reasoning and natural language during generation. When the language is conditioned on previous formal steps, it is highly reliable, since the generations allowed by are formally certified. We validate our method on three existing natural language reasoning datasets, PrOntoQA <cit.>, ProofWriter <cit.>, and Syllogistic Validity <cit.>. 
We also follow the format and methodology of PrOntoQA to introduce a new dataset, DeontiQA, where problems require reasoning using deontic logic principles to determine whether an action is permissible, obligatory or forbidden. When used with few-shot prompting, we find that LogicGuide significantly improves the accuracy of OpenAI GPT-3 and GPT-3.5 Turbo, and the open LLaMA 13B model. Moreover, models using LogicGuide have drastically lower content effects: we show this both with PrOntoQA and in the Syllogism Validity dataset <cit.>, used previously to measure content effects in LLMs. Self-improvement methods, such as the Self-Taught Reasoner (STaR; <cit.>), improve reasoning by fine-tuning a model on the rationales of its successful answers. In the tasks we analyze here, there's a high probability of guessing the correct answer (e.g. true or false, so at least 50%), hence STaR alone fails to yield meaningful improvements. LogicGuide allows us to differentiate cases where the model arrived at a certified conclusion and when it generated an unsupported guess. We show that running STaR using only certified solutions is highly effective: LLaMA 13B enjoys accuracy gains of up to 17% on PrOntoQA, while naïve STaR — fine-tuning on all generations that led to the correct answer — fails to provide improvements. Altogether, guides provide a promising approach for combining the trustworthiness of formal reasoning with the flexibility of language models. § RELATED WORK As reviewed above, our work builds on two classes of systems for reasoning: language models, which can reason flexibly in natural language, and formal reasoning systems, which rely on formal logic to derived certified inferences. To interface these two systems, we leverage recent methods for constrained decoding from language models. Specifically, we employ Constrained Semantic Decoding (CSD) <cit.>, an algorithm that guarantees a valid sample by construction. CSD does not require full access to the model, only the ability to bias its logits. This allows us to use GPT-3 <cit.> and GPT-3.5 Turbo models through their public APIs, as well as a locally-run LLaMA model <cit.>. Other constrained decoding methods, such as NeuroLogic Decoding <cit.> and NeuroLogic A*esque decoding <cit.>, have been proposed to enforce lexical (but not richer) constraints at inference time. LLMs have been increasingly used as agents interacting with other systems, by both using tools to delegate computation or to trigger external actions <cit.>. In prior work, LLMs can provide inputs to an external tool, such as a search query <cit.> or a mathematical operation <cit.>, and receive the output in the decoding stream. Our framework of guides (<ref>) can be seen as a generalization of this paradigm, where the tool defines a space of outputs and he LM chooses one using its own probabilities. Our approach to certifying reasoning from language models relies on grounding their inferences in an interactive theorem prover, Peano <cit.>. Similar to other popular theorem proving languages like Lean <cit.> and Coq <cit.>, Peano uses dependent type theory as its logical foundation. Most theorem proving environments are designed for the verification of given proofs. In contrast, and of special interest to us, Peano is designed to aid in generation by exposing a finite action space. Many other recent works have integrated LLMs and interactive theorem provers in the context of formal mathematical reasoning. 
Recent work on autoformalization has shown that LLMs can be effective in translating informal to formal mathematics <cit.>. This idea is related to how we use LLMs to formalize their assumptions given in natural language, though our end goal is to produce reliable natural language rationales rather than formal proofs alone. § CERTIFIED REASONING WITH GUIDES We now develop the framework of guide tools, and discuss how to implement a guide for general logical reasoning. We then remark how guides can overcome computational limitations of Transformer models, and briefly discuss other potential guide tools. §.§ Guide functions Previous work in tools for language models assumed an interface where the model would provide inputs to the tool and receive back a single output, conditioning on this output for further generation. For instance, <cit.> allowed the model to rely on a calculator by generating a string such as . At this point, the decoding algorithm would execute the operation externally and copy the result as if it was generated by the language model. Here, our main goal is to leverage a trusted external tool to answer the question: “what logical inferences could be made next?” In principle, given a tool that can answer this question, we could copy its entire output into the decoding stream. However, this set can be very large, and for logical inference the set grows larger as each inference allows many new ones to be reached. We could instead randomly choose a single possibility from the set, but this would ignore the substantial prior knowledge of the language model, often yielding a useless choice. Our key idea will be to use constrained decoding so that, when the tool is invoked, the language model itself will generate one of the valid inferences. More generally, a guide tool defines a set of valid generations given previous choices. Formally, let S = Σ^* be the set of strings in the guide's alphabet Σ, with S^* denoting the set of finite sequences of such strings. We define a guide g to be a function g : S^* →𝒫(S) that takes a sequence of previously generated strings and returns a regular set of allowed next generations. Our idea is to leverage g at specific points when sampling from an autoregressive language model P_LM(·) so that when the guide is invoked at a prefix s_0, we will sample a continuation from P_LM(s|s_0) that belongs to the set allowed by g when given the previous guided generations in the prefix s_0 (e.g., previous valid logical inferences). §.§ From guides to completion engines Given any guide function g as above, we want to provide a tool for LMs that, once invoked by some special sequence, will constrain the immediate subsequent output using g. To that end, we employ the Constrained Semantic Decoding algorithm (CSD; <cit.>). CSD can constrain the output of the underlying model at inference time by leveraging a user-defined completion engine. We briefly review what a completion engine requires and explain how we define one to implement guide tools. Background: Completion Engines and CSD A completion engine is defined as a function c : Σ^* → RE(Σ), taking a string in the vocabulary Σ and returning a regular expression over Σ. The idea is that c can dynamically compute a set of valid continuations: once the LM generates enough tokens to maximally match the current regular expression, CSD will call c again to determine what can follow. The algorithm handles the technical complication that c and the LM often have different vocabularies. 
For instance, in our case, the LMs we use have vocabularies with multiple tokens that either contain or overlap with the special strings we use to trigger the guide tool. CSD allows our definition of guide tools to be agnostic to the underlying model's vocabulary. Guide Tool as a Completion Engine We implement the guide tool for a guide function g as a completion engine. First, we arbitrarily pick special strings t_1 and t_2: t_1 will trigger the tool and t_2 will mark the end of the “trusted” generation (e.g., we later use t_1 = "[[" and t_2 = "]]"). The guide completion engine takes a string s, containing the model's partial output so far. It first checks whether s ends in t_1 to decide if the tool has been invoked. If not, it returns a regular expression matching any string not containing t_1 and ending in t_1. As a result, CSD will allow the model to freely generate text until it invokes the tool by generating t_1. If s does end in t_1, then the completion engine will collect all blocks of text between occurrences of t_1 and t_2 in s into a sequence S_p, and return a regular expression matching the strings in g(S_p) as its output. We expect the strings in g(S_p) to end in t_2 when the guide wants to allow the model to return to free generation. CSD will then constrain the LM to generate one of the strings matching g(S_p), and it will handle the complication that the model's tokenizer might overlap t_1 and the beginning of some generation in g(S_p). With the resulting completion engine wrapping g, CSD can essentially sample from any given LM while constraining the outputs between the delimiters t_1 and t_2 to come from the guide. In this framework, it is easy to define simple input/output tools by having g return a singleton set. It also allows us to design richer LM tools, such as the LogicGuide we introduce next. We describe several other guides in the Appendix, leaving their exploration for future work. §.§ The LogicGuide We now construct LogicGuide, a guide tool for language models to perform externally certified reasoning. Our logical backend of choice is Peano <cit.>, a theorem-proving language together with an environment for incremental proof generation. The main feature of Peano we rely on is that its environment provides a finite action space. Thus, given a partial argument, Peano gives us a list of valid inferences that can be made in a single step given the background theory, assumptions (possibly added by the model) and previous inferences. Our idea is to use this list to guide the model whenever it decides to derive a logical conclusion. While Peano makes the guide implementation particularly simple, other theorem-proving environments might be adaptable for this purpose. We use the guide delimiters "[[" and "]]", and implement a guide function that accepts strings with the format [[action:argument]]. We define 6 actions (exemplified in <ref>) that allow the model to (1) formalize its assumptions (marking objects, properties and relations, and declaring axioms), (2) set a goal, and (3) perform logical inferences (infer blocks). For (1) and (2), the guide returns constraints that ensure the model's formalization to be syntactically valid; since these actions are the boundary between natural and formal language, it is impossible to guarantee that they are semantically valid in the sense that the model has properly formalized the hypotheses. What is certifiable is that the logical inferences (action type 3) follow from the model's formalization, i.e. its inferences are valid given its explicit assumptions.
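A minimal sketch of the wrapping just described is given below; the regular-expression handling is simplified and the class is our own illustration of the interface, not the actual CSD implementation.

import re
from typing import Callable, List

class GuideCompletionEngine:
    """Wraps a guide function g: S* -> P(S) into a completion engine for CSD."""

    def __init__(self, guide: Callable[[List[str]], List[str]],
                 t1: str = "[[", t2: str = "]]"):
        self.guide, self.t1, self.t2 = guide, t1, t2

    def complete(self, s: str) -> str:
        """Return a regex describing the strings allowed to follow the prefix s."""
        if not s.endswith(self.t1):
            # Free generation: anything not containing t1, terminated by t1.
            return f"(?:(?!{re.escape(self.t1)}).)*{re.escape(self.t1)}"
        # Guided generation: collect previous guided blocks S_p and ask the guide.
        blocks = re.findall(
            f"{re.escape(self.t1)}(.*?){re.escape(self.t2)}", s, flags=re.DOTALL
        )
        allowed = self.guide(blocks)  # each allowed string is expected to end in t2
        return "|".join(re.escape(a) for a in allowed)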
(<ref> provides empirical evidence that formalization errors rarely lead to a wrong conclusion; most often they make the model unable to prove or disprove the goal). Using the guide In the typical setup for logical reasoning problems <cit.>, the input contains a context (the set of assumptions) and a goal, and the few-shot examples additionally contain a rationale and the final answer (typically whether the goal is true or false). In our experiments, we demonstrate how to use the guide by creating few-shot examples with the proper LogicGuide action annotations (as in <ref>). Specifically, we add a section before the rationale named “Formalized context” where we repeat the assumptions in the scenario while marking objects, properties and relations, and formalizing each of the assumptions into an block. We do the same for the goal. Then, we prepend each reasoning step with the appropriate action. In this way the model is encouraged to first generate a formal inference step and only then its natural language counterpart. We include all of our prompts in the Appendix. §.§ Guides can overcome computational limitations of Transformers It's unsurprising that Transformer models trained on (often imperfect) human data generate logical inconsistencies or generalize in unpredictable ways. Below we show that LogicGuide can helpfully address these practical failures. Guides also provably expand the computational power of Transformers. As a concrete example, consider the Parity problem: given a binary input string, output the number of symbols modulo 2. The following result is established in <cit.>: *MH(Corollary 2 in <cit.>) Transformers with hard attention cannot model Parity. More precisely, no fixed-size decoder-only Transformer with hard attention can process an input string and generate a character indicating whether its input length was odd or even. Computational limitations like this directly translate to corresponding limits to performing logical reasoning. For instance, the Parity problem corresponds to the problem of processing iterated negations in propositional logic, and thus is a sub-problem of reasoning in any other more expressive logic. Yet Parity is a trivial problem on a traditional computer. Therefore, with the protocol from <ref>, a Transformer model could rely on an external counting guide as long as it was able to provide its input string within the guide delimiters. The Echo problem summarizes this required capability: given an input string of arbitrary length, followed by a special delimiter, copy the input into the output. We observe that the following holds: *echoProposition There exists a fixed-size, decoder-only Transformer with hard attention that can model Echo for unbounded inputs. We show this by manually constructing this Transformer network in the Appendix. A simple modification allows this network to be combined with a guide to solve Parity using the protocol defined in <ref>. This establishes the following: *parityCorollary Transformers with hard attention and a guide tool can model Parity. The idea extends immediately to other example problems used in the literature to analyze computational properties of Transformers. This simple observation points out that guided (or even regular tool-using) Transformers have fundamental differences as models of computation. 
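To make the corollary concrete, here is a toy Parity guide written against the illustrative GuideCompletionEngine sketch above: the model echoes its input bits into a guided block, and the guide then only permits the correct parity bit. This is a didactic sketch, not a construction used in the paper's experiments.

def parity_guide(previous_blocks: list[str]) -> list[str]:
    # The last guided block is expected to contain the echoed bit string, e.g. "1101".
    bits = previous_blocks[-1] if previous_blocks else ""
    parity = str(bits.count("1") % 2)  # number of 1 symbols modulo 2
    return [parity + "]]"]             # only the parity bit, then the closing delimiter

engine = GuideCompletionEngine(parity_guide)
print(engine.complete("[[1101]] [["))  # 1101 has three 1s, so only "1]]" is allowed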
As language model tools become more prevalent in practical deployments, future theoretical analyses cannot ignore reliance on external tools; there is an exciting opportunity for future work in understanding these hybrid systems, where computation alternates inside and outside of neural networks. § EXPERIMENTAL EVALUATION We now evaluate the effectiveness of LogicGuide in improving language models on reasoning tasks. We focus on three research questions: RQ1: Does LogicGuide improve the accuracy of language models in multi-step reasoning? We investigate this question in <ref> using OpenAI GPT-3, OpenAI GPT-3.5 and LLaMA 13B, and three multi-step reasoning datasets (PrOntoQA <cit.> and ProofWriter <cit.>, as well as the DeontiQA problems we introduce). RQ2: Does LogicGuide reduce content effects in language model reasoning? <ref> explores this, leveraging the PrOntoQA False Ontology split and the Syllogism Validity dataset <cit.>. RQ3: Can an LLM self-improve using LogicGuide by learning from its own solutions? In <ref>, we explore improving a LLaMA 13B model using the Self-Taught Reasoner method. §.§ Impact of LogicGuide on reasoning accuracy Datasets We use two recent natural language reasoning datasets: PrOntoQA <cit.> and ProofWriter <cit.>. Both datasets contain generated reasoning problems with (1) a list of assumptions (e.g. “Every dog is a mammal”, or “Sam is a dog”), and (2) a proposition that can be reasoned about from the assumptions (e.g. “Sam is a mammal?”). In both datasets, the goal is to answer the question with either true or false. Problems are categorized by how many reasoning “hops” the solution needs (1 to 5). In addition, PrOntoQA has three splits: “True Ontology”, where the rules are coherent with common sense, “False Ontology”, where rules violate commonsense (e.g., “Every composite number is prime”), and “Fictitional Ontology”, which uses made-up concepts (e.g., “Every rumpus is feisty.”). ProofWriter uses real concepts for all rules (e.g., people, animals, colors), but the rules are generated at random; thus they also often contradict commonsense. We use the problems from ProofWriter that have proofs for the answer (i.e. ignoring the “closed-world assumption” and “unknown” problems, where fully justifying the answer requires meta-logical reasoning). Language models We evaluate three language models in the few-shot setting: OpenAI GPT-3 <cit.>, OpenAI GPT-3.5 Turbo, and LLaMA 13B <cit.>. We use 4 few-shot examples for the vanilla models. For guided models, the prompt examples are augmented to show formalized reasoning. Given the assumptions and the question, we first show the model how to formalize the assumptions and the proof goal, and then present the chain-of-thought where sentences are preceded by a guided inference (in an infer block, cf. <ref>). Since this makes the prompt longer, we only use two prompt examples for the guided models: one where the answer is true and one where it is false. We implement CSD on the OpenAI models using their public API, which exposes a parameter to bias the logits on given tokens. We use the rejection-based sampling procedure described in <cit.>, which only requires extra API calls when we detect a violation in the model's output inside guided blocks (a sketch of this procedure is shown below). GPT-3.5 Turbo requires a slight adaptation (to resume generation) since it is a chat-based model; we detail this along with all of our prompts in the Appendix. Results <ref> shows few-shot results on multi-hop reasoning, measuring final-answer accuracy. When models do not provide any answer to the problem, we assume a random guess.
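The rejection-based procedure can be sketched roughly as follows for the completions API; the violation-checking helper is a placeholder standing in for the guide-side check, and the model name is only an example, so this is a schematic reading of the procedure rather than the authors' code.

import openai  # legacy openai<1.0 API; the model name below is only an example

def csd_sample(prompt: str, valid_prefix_and_tokens, model: str = "text-davinci-003",
               max_violations: int = 20) -> str:
    """Rejection-based constrained decoding: extra API calls only on violations.

    valid_prefix_and_tokens(text) is a placeholder for the guide-side check: it
    returns (None, None) if text is valid, otherwise the largest valid prefix and
    the token ids allowed to follow it."""
    text = ""
    for _ in range(max_violations):
        out = openai.Completion.create(model=model, prompt=prompt + text, max_tokens=512)
        text += out["choices"][0]["text"]
        prefix, allowed = valid_prefix_and_tokens(text)
        if prefix is None:  # no violation inside guided blocks: done
            return text
        # Truncate to the largest valid prefix and force one allowed token by
        # strongly biasing its logits, then let the model continue freely again.
        fix = openai.Completion.create(model=model, prompt=prompt + prefix, max_tokens=1,
                                       logit_bias={str(t): 100 for t in allowed})
        text = prefix + fix["choices"][0]["text"]
    return text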
Overall, guided models perform significantly better. GPT-3 and GPT-3.5 are highly accurate in formalizing assumptions, and enjoy the largest benefits (with nearly perfect performance on PrOntoQA with , and improving from chance to  80% correct on ProofWriter). For them, essentially eliminates single-step reasoning errors, and the impact of this benefit grows in solutions requiring more hops—a single error is enough to reach the wrong final conclusion. LLaMA 13B sees gains between 10 and 20% in PrOntoQA False and Fictitional, while hurts its performance in PrOntoQA True (where the unguided model often avoids reasoning altogether, as the answer follows common sense) and ProofWriter. We observe two main failure modes: (1) models can misformalize assumptions, and (2) they can fail at planning, making a sequence of valid inferences that do not ultimately lead to the goal. When formalization errors happen, it's more common that no conclusion can be drawn, rather than a wrong conclusion: in only 1.6% of the solutions did a guided model formally derive a wrong answer; these cases were mostly due to missing a negation when formalizing a sentence (and mostly for LLaMA on ProofWriter). A more common formalization failure (especially for LLaMA) was to use inconsistent names for properties or relations, e.g. in one place and in another. When planning fails and no further inferences can be made, generates the string in the block. When that happens, we observed models spontaneously concluding that the answer is “Unknown” or “Cannot be concluded” despite that not being demonstrated in the prompt (models abstained in 78% of the cases where they exhausted the inferences that could be made, while guessing “False” in 19% of those cases). This contrasts with the unguided models, which most often still make an unjustified guess, writing as if it was a logical conclusion (only unguided GPT-3.5 Turbo abstained in our experiments, in 9% of its predictions). Errors in language model reasoning would be especially problematic in practice when an agent must decide which actions are permissible. Hence we created DeontiQA: a set of 60 new reasoning problems inspired by Deontic Logic <cit.>. We follow the same methodology used in PrOntoQA to create the problems, creating logical forms first and then realizing them in natural language. Like in PrOntoQA, we add distractor rules to prevent guessing the answer from surface shortcuts. In these problems, the goal is to decide whether a given action is permissible, obligatory, or impermissible in the context of managing calendar events for a group of people. We detail the creation of DeontiQA in the Appendix, and make the dataset available along with our code. DeontiQA problems are significantly longer (up to 28 rules) compared to PrOntoQA (maximum of 18). This increased length means we are only able to fit one prompt example in the context window of GPT-3 and GPT-3.5 Turbo. We find to be helpful on DeontiQA: GPT-3 alone is correct on 61.6% of problems, which increases to 80.0% with . GPT-3.5 Turbo alone achieves 77.5% accuracy which increases to 81.3% when guided. Overall, this provides positive evidence for our first research question: can significantly improve the accuracy of base models in natural language reasoning problems. Their answers become not only more accurate but also more trustworthy: makes models answer “Unknown” when they don't have an answer, rather than producing an unsupported guess. 
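One scoring detail stated above is worth spelling out: when a model abstains or gives no answer, it is scored as a random guess. A small sketch of that convention (our own formulation of it, not the authors' evaluation script):

def final_answer_accuracy(predictions, answers, n_options: int = 2) -> float:
    # Abstentions ("Unknown", "Cannot be concluded", or no answer) count as a random
    # guess among the answer options; everything else is scored as an exact match.
    abstain = {"unknown", "cannot be concluded", None, ""}
    score = 0.0
    for pred, ans in zip(predictions, answers):
        key = pred.strip().lower() if isinstance(pred, str) else pred
        if key in abstain:
            score += 1.0 / n_options
        else:
            score += float(key == ans.strip().lower())
    return score / len(answers)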
§.§ Mitigating content effects in reasoning Both humans <cit.> and language models <cit.> have been shown to suffer from content effects in reasoning: their accuracy in logical judgements is influenced by prior beliefs about the assumptions and conclusions. For instance, from the assumptions that “Some librarians are happy people” and “Some happy people are healthy people”, it does not follow that “Some librarians are healthy people”. Humans and LMs have difficulty judging this argument as invalid because the conclusion agrees with prior beliefs. We hypothesize that LMs will have smaller influence from the content when formalizing assumptions, rather than reasoning from logical sentences. If that is the case, then using will help mitigate content effects. We use two tasks to investigate this hypothesis. First, we contrast the results in the different PrOntoQA ontologies. As in the original PrOntoQA results <cit.>, we see that the base performance of GPT-3 and GPT-3.5 Turbo is already close to ceiling in the True Ontology split (where the model doesn't strictly need to reason correctly as long as it judges the conclusion using common sense). In contrast, accuracy is significantly lower in the False and Fictitional ontologies and decays with more hops. However, both of these models are highly accurate in formalizing assumptions, and thus benefit from the guide in the False and Fictitional ontologies: performance is near ceiling. Interestingly, GPT-3.5 Turbo still exhibits occasional content effects, explicitly judging the conclusions derived using as contradictory. For instance, in one problem where the model must decide whether Sam is luminous or not, it is given that “Sam is a snake”, and from the given assumptions the model correctly concludes “... [[infer:(sheep sam)]] Sam is a sheep”. It then proceeds to question this conclusion and halts: “This contradicts the fact that Sam is a snake. Therefore, we cannot infer whether Sam is luminous or not.”. Second, we leverage the Syllogism Validity dataset <cit.>. In this task, the model is given two assumptions and a conclusion, and has to decide if together they constitute a valid argument (i.e., the conclusion logically follows from the assumptions). The example above about librarians is taken from this dataset. Solutions have a single step: judging the argument as valid or invalid. When using , we prompt the model to first perform a single inference given its formalization of the assumptions and then judge the validity of the argument. Syllogism Validity has 3 conditions: “Nonsense”, where rules are about made-up concepts, “Consistent“, where the conclusions agree with commonsense regardless of whether the argument is valid, and “Inconsistent”, where the conclusion always violates world knowledge. Unguided models behave consistently with those in <cit.>: in the “Consistent” split, all models strongly tend to judge the argument as being valid, thus performing close to chance (GPT-3.5 Turbo is slightly better, at 60%). Both GPT-3 and GPT-3.5 Turbo are, however, highly accurate at formalizing the assumptions and conclusions and tend to trust , nearing ceiling performance for all conditions. LLaMA 13B has much more difficulty judging the syllogisms, performing near chance in all conditions. However, it is still successful at formalizing many syllogisms, obtaining non-trivial performance (60% to 77%) when using . In failure cases, it often confuses logical connectives (e.g., formalizing “Some X are Y” as “X implies Y” and vice-versa). 
We overall see positive evidence for our second research question: models with LogicGuide show greatly diminished content effects, with stronger benefits for models that are more capable of formalizing individual sentences. §.§ Learning to reason by guided self-improvement Finally, we consider improving the reasoning ability of a language model. The Self-Taught Reasoner (STaR; <cit.>) is a simple method for improving LLMs on reasoning tasks that has been shown to be effective in symbolic, mathematical and commonsense reasoning. Given a dataset of reasoning problems paired with correct final answers (but not reasoning traces), STaR iterates between (1) solving problems with few-shot chain-of-thought prompting, and (2) fine-tuning the model on its own generated rationales that led to correct final answers. This allows the model to bootstrap its own reasoning from a small seed set of few-shot examples. Crucially, STaR relies on the premise that if a generated rationale led to the correct answer, it is likely to be correct. While this holds in domains like arithmetic, it breaks down in binary answer tasks like PrOntoQA. In these cases, right answers will happen often with bad rationales, leading STaR and similar approaches to fine-tune on incorrect reasoning. Indeed, the authors in <cit.> remark that “filtering bad reasoning paired with correct answers remains an open question.” (Figure: Accuracy of LLaMA 13B on held-out PrOntoQA problems when bootstrapping using STaR.) We thus consider STaR training on either all correct answers (with and without the guide) or only on certified correct answers. We run 2 STaR iterations with LLaMA 13B on PrOntoQA[ProofWriter has shortcuts that allow guessing the answer without reasoning <cit.>, which fine-tuning quickly learns. PrOntoQA explicitly includes distractor rules to avoid shortcuts. Thus, we focus on PrOntoQA here.]. In each iteration, we attempt 200 random problems equally split between 1 and 5 hops, and fine-tune on successful solutions, evaluating on unseen problems. <ref> shows the results. As predicted in <cit.>, the high chance of guessing confounds STaR, and training on all rationales that yield the right answer does not give meaningful improvements (“Unguided”, red curve). Training on all guided solutions leading to correct answers brings some improvement (“Guided”; 72% to 80% after one iteration), but still ends up over-fitting to accidentally-correct reasoning. Fine-tuning only on certified correct answers avoids this trap and achieves high performance (“Strict Guided”, up to 86%). This allows us to positively answer our third research question: LogicGuide can be used for effective self-improvement in reasoning, in cases where naïve methods collapse. § DISCUSSION AND CONCLUSION We introduced guide tools for language models. When invoked, a guide locally constrains generation to a controlled set of statements. LogicGuide leveraged this idea for logical reasoning, where the guide allows the LM to formalize its interpretation of input sentences and make certifiably sound inferences with respect to its formalization. This avoids inferences that do not follow from stated assumptions, substantially improving accuracy in natural language reasoning problems. Two major challenges remain. First, natural language is often ambiguous and can be difficult to faithfully represent in formal logic.
Indeed, the appropriate formalization of many ubiquitous constructions is still an active subject of philosophical debate <cit.> (e.g., <cit.> recently discusses the formalization of “A unless B”). Domains where arguments tend to have more systematic logical structure, such as law, are more likely to benefit from tools like , based on formalization. Second, making correct logical inferences does not imply making useful ones. LLMs can still fail at planning by making inferences that do not eventually connect to their goal. Many current investigations into planning techniques for LM reasoning are complementary to our work and can be integrated with guides <cit.>. Language models bring to reasoning the flexibility of human language and a wealth of useful prior knowledge. But that power comes with lack of reliability and difficulty verifying extended reasoning. Our approach points to a rich direction for seamlessly integrating reliable symbolic and flexible neural reasoning into a unified text stream. The result is better, and more easily verified, reasoning. plain § PROOFS The Parity language, as defined in Section 4 in <cit.>, consists of the set of strings in the vocabulary {0, 1} where the number of 1's in the string is even. A decoder-only Transformer is said to model a formal language, such as Parity, if it is capable of classifying input strings as belonging (1) or not belonging (0) to the language in its activations after reading the string and an end-of-sequence symbol. Hahn <cit.> showed that Transformers with hard attention cannot model Parity with essentially the following argument (Theorem 1). For a hard-attention Transformer with a given size, it is possible to find input restrictions (essentially, input “templates” where some input symbols are fixed to either 0 or 1 and the others are allowed to vary) where there is an input-independent limit in how many input symbols can influence the function modeled by the Transformer. But changing even a single ignored input symbol in a string s_1, turning it into s_2, will change membership with respect to the Parity language (s_1 is in Parity if and only if s_2 is not). Thus, if the Transformer will produce the same output for both s_1 and s_2, it cannot be correct in both cases. Hence, it cannot model Parity in general. The difficulty presented here is in modeling a function that is sensitive to all input bits with a bounded hard-attention Transformer network. We here show that, given a proper guide function (as defined in <ref>) for Parity, which is trivial to implement in a traditional computer, we can construct a hard-attention Transformer network that will read in the input string, activate the guide, copy its input to the guide's input, and then be constrained to generate the correct output (a bit indicating membership to Parity). Our first step is to show a construction of a Transformer network that models Echo: the problem of reading the input string and copy it into the output. Echo is similar to Parity in that the output depends on all input symbols, but it differs in that each output symbol only depends on one input symbol. This makes it easy to implement with a hard-attention Transformer, circumventing the input insensitivity results from <cit.>. *echo-pProposition There exists a fixed-size, decoder-only Transformer with hard attention that can model Echo for unbounded inputs. 
We first construct the appropriate Transformer by specifying input and positional embeddings and hidden layer weights, and then show that it behaves as desired. Our Transformer will consist of a single self-attention layer and a final linear output layer. First, let 𝒱 = {⋯, $} be the vocabulary. Our Transformer will model Echo for a string containing any number of tokens and ending in the end-of-sentence symbol $ (which is assumed to not be contained otherwise in the input). We let the hidden dimension be H = |𝒱| + 2. We use the 𝒱 standard basis vectors for embedding input symbols. To define 2-dimensional positional embeddings, first let R = [ cos1/n -sin1/n; sin1/n cos1/n ] be the rotation matrix for an angle of 1/n, where n is the input length[We show the simplest construction where the positional embeddings depend on n to highlight the key mechanism for implementing Echo, though this dependency can be removed by using relative position encodings <cit.>. The missing mechanism is to “look up n positions before”, where n is input-dependent. But this can be recovered by looking up the position of the $ symbol, then using a relative encoding to look up that same number of positions behind. This requires adding one extra self-attention layer, but is otherwise a mechanical extension of this construction.]. We define positional embeddings ρ_i with ρ_0 = [0, 0]^T representing position 0, and ρ_i = Rρ_i-1 for i > 0. Following the notation from <cit.>, we take Layer 0 of the Transformer, f(v_i, ρ_i), to be the concatenation of the symbol embedding of v_i (the i-th standard basis vector in ℝ^|𝒱|) with the position embedding ρ_i. and with a third 2-dimensional vector q_i as follows: q_i will be equal to ρ_0 for i = 0, and will be the zero vector otherwise. This gives a (|𝒱| + 2× 2)-dimensional hidden vector for each input symbol as input to the first self-attention layer. We can see this vector h_i as a concatenation of three vectors, which we'll name o_i (the first 𝒱 dimensions), ρ_i and q_i. To construct the self-attention layer, we must specify the weight matrices Q, K and V for a single attention head (we'll encode all the useful computation in the self-attention phase, and thus assume the identity function in the feed-forward layer). We'll use dot-product attention (thus, f^att_k,h(a, b) = a^Tb). The query vector for position i will be constructed by the linear mapping h_i ↦ [0, 0, Rρ_i]. This linear map is encoded into Q. In turn, we let K be the linear transformation that sets the first 𝒱 dimensions to 0, and V will be the corresponding transformation that sets the position embedding dimensions to zero. The final output layer is the simple linear transformation that reads out the value in v_i and ignores the remaining 2 dimensions. This matrix O has input dimension H and output dimension |o_i| = |𝒱|, matching the vocabulary embedding dimensionality. This concludes the construction. We now analyze the behavior of sampling from this Transformer network. Assume the input is initialized with a sequence s of ending in the $ symbol, with |s| = n. As constructed, we have Rρ_n = ρ_0; thus, the largest dot-product of the query at the $ symbol will be with the first input symbol, regardless of its content. This will cause its corresponding value to be selected by the max in hard attention, and that character will be copied over to the output. At the next position, we'll have Rρ^n+1 = Rρ_0 = ρ_1, causing the second character to be copied by the same argument. 
This will repeat for as many characters as we sample, cycling over the input. With a network implementing Echo, we can then model Parity: *parity-pCorollary Transformers with hard attention and a guide tool can model Parity. To implement this in our framework, we can define this guide as a completion engine that first produces a regular expression matching any sequence of bits followed by a $ symbol, and then, after a maximal match to this regular expression, will generate a regular expression that only matches the number of 1s in the previous sequence modulo 2. We can then integrate this guide with delimiters t_1 = , t_2 = . The last generated character before will be the parity bit. We can also extend this construction to make the symbol generated after to be the parity bit, by adding an extra self-attention layer where an input symbol embedding of overrides the position to be looked up to be R^-2ρ_i (i.e., the character two positions before). § OTHER GUIDES In <ref> we explored , which captured a rich set of operations that both set and leverage state, as well as a complex external logical support tool. Nonetheless, many other guides can be easily defined in this same framework. We survey several such guides here as potential ideas for future work: Memory GuideA simple guide in the same format of can be one where the model can set values to keys (), and later on retrieve the value associated with a given key (). When retrieving, the guide can limit the key to be within one of the existing values, and will force the value to be the last stored one. Values can be either overridden or added to a set, depending on the domain. This can effectively implement memory, and this can extend beyond a simple context window provided that the guide keeps external memory. In problems like the bAbI tasks <cit.> requiring models to keep track of the state of objects and characters through long stories, this guide can reduce the problem of remembering locations (and avoiding interference in self-attention) by the problem of translating each question into a query, using only local information. Quote GuideLanguage models often hallucinate quotes, e.g. saying that “According to Wikipedia, 'XYZ'. Therefore...”. We can implement a simple guide that forces quotes to actually come from a trusted source. Specifically, whenever a prefix like is generated, the guide can force the subsequent output to be one of the sentences contained in a certain web page (that set is regular). Externally, an UI can even mark guided quotes with their source, which can be kept by the guide. Algebra GuideMathematical reasoning tools can be also integrated as guides for math problem-solving. Peano itself was originally used with algebra problems, and can thus also serve as a guide for mathematical reasoning. We can also leverage other tools, such as computer algebra systems, as guides. One example is the SymPy integration with Codex used previously to solve math word problems <cit.>, where some instructions can add variables, and some can indicate to the solver a variable to be solved for. In the case of <cit.>, the model simply indicates which variable to solve for, and the answer is externally extracted. When specifying equations in an block, the guide can force the model to output a syntactically valid equation, and also one that only uses already existing variables. 
This will guarantee that parentheses will be correctly closed (the completion engines in <cit.> achieve this by deriving a completion engine from a parser) and that variables are used after they are introduced. If we wanted the model to also use the results from the solver, we can turn the block into a guided block, where x is constrained to be one of the existing variables, and v is given by the algebra solver. § DEONTIQA We generate the DeontiQA problems following the general methodology of PrOntoQA <cit.>, where we first sample assumptions and proofs in a logical form and then realize those in natural language. The main qualitative differences are (1) in the specific logical framework we use and (2) in how we translate logical sentences into natural language. Logical framework We formalize a logical framework for a general domain of managing calendar and event invites in Peano. We create a base type for actions, as well as the deontic predicates permissible and obligatory, to be applied to actions. We have 5 object types: , , , and . From these, we create 14 kinds of actions: * Given an event invite, the agent can , or for that invite. * Given an event, the agent can . * Given a reminder specification (constructed with a person and a time period before the event), the agent can . * Given an event and an entity, the agent can or . * Given an event and a person, the agent can . * For an event, a person might , , or * Given an event and a proper event property to be changed, the agent can , , or , with the property describing the proper update. Problem generation To generate a problem, we first sample a theory — a set of hypotheses —, and using those assumptions we try to construct a random derivation (applying axioms and assumptions to hypotheses or previous conclusions). The conclusion of the derivation (or its negation, 50% of the time) becomes the goal in the generated problem. Translation to natural language To realize the problem in natural language, we use the aid of GPT-4 <cit.>, prompted to translate the logical statements into a story in natural language given a few examples, with sentences describing the axioms. All stories were manually checked to still be unambiguous and logically sound, and we only use GPT-4 to help with diversity. As a result, the DeontiQA problems show greater linguistic variation than both PrOntoQA and ProofWriter, especially at their beginning. We show 3 example problems in <ref>, <ref> and <ref>. The full set of 60 problems is released along with the supplementary material. § CONSTRAINED SEMANTIC DECODING WITH CHAT MODELS Originally, Constrained Semantic Decoding was proposed to work with standard autoregressive language models <cit.>. This relied on the ability to bias the logits of the model during generation, which is both possible in local models as well as through the OpenAI API[<https://platform.openai.com/docs/api-reference/completions/create#completions/create-logit_bias>]. The OpenAI has a different API, since it is a chat-based model. In this API, we pass a list of messages with roles ( or , where the model understands the latter as marking its own past generations). The API also has a logit bias parameter. However, we unfortunately cannot pass an incomplete prefix for the model's response. Thus, we are unable to force the model to complete a certain message while also applying logit biases. Every completion starts a new message. This requires an adaptation to the procedure in <cit.>. 
We start with the usual rejection-based CSD procedure: we put few-shot examples in the previous messages showcasing the response format we want, and sample a full response from the model. We then use token-by-token CSD to validate the response. If this terminates without finding any violation, we're done — the entire generation, including choices made in guided blocks (e.g., ), were valid. If not, like in original CSD, we take the largest valid prefix and use the CSD algorithm to compute the set of tokens that are valid after that prefix. Here we reach the main difference in the API. We want the model to continue its message from the last valid token. However, this is not possible in the current API. Instead, we must request a new message. Fortunately, we found to very frequently simply continue its generation when its last message appears incomplete[We hypothesize this is done so that models also complete their messages when the token limit is hit in the OpenAI Playground and users immediately request a new completion]. We exploit this behavior and (1) request a new message with a single token, passing the set of valid tokens in the logit bias, (2) append the sampled token to the previous message and request a new, unconstrained message, and (3) repeat until we have received a full response. When the model continues from where it stops, this approach is essentially equivalent to rejection-based CSD. Unfortunately, it is not fully reliable. In a number of cases, we found the model to insistently apologize after noticing that its previous message was incomplete. This is problematic when the previous message started a guided block that is to be finished in the next message. In this case, the model's apology is contained in the guided block, and is thus always an invalid generation. What happens is that this immediately triggers a violation, and the CSD procedure above is executed. Often, the CSD corrections will eventually get the model to make enough choices to complete the guided block, at which point its apology is not an issue anymore (e.g., see <ref>. In rare (< 0.1%) cases, the issue persists and we cannot recover from the apology (<ref> shows an example). To avoid a prohibitive number of API calls, we aborted sampling when more than 20 violations were hit in the same solution. § EXPERIMENTAL DETAILS Experiments with the OpenAI models were made using their public API. For LLaMA 13B, we ran and fine-tuned the model on an NVIDIA A100 80GB GPU. For fine-tuning when running STaR (<ref>), we performed inference on 200 problems — 40 for each number of hops from 1 to 5 — in each STaR iteration, and collected the generations where the model reached the correct answer (with each of the 3 criteria described in <ref>). We fine-tuned for 1 epoch (i.e., seeing each example exactly once) with a batch size of 2 and a learning rate of 2e-5. We used the Adam8bit optimizer with default parameters, reset in each iteration of STaR. § PROMPTS All of our prompts are provided in the attached supplementary material. We use JSON files for the chat model prompts, exactly as we pass them to the OpenAI API. § COMPLETE SAMPLES We here provide full samples of solutions generated by language models with , also showcasing the most common failure modes. <ref> shows one case of on the PrOntoQA False Ontology, where the model properly formalizes all of the assumptions, but still tries to make wrong conclusions very often. 
As a result, its solution ends up taking a long detour to eventually get to the goal, but eventually does so correctly (it can be concluded directly from two of the assumptions). <ref> shows one example of on ProofWriter, where the model further justifies its solution based on the axioms. We found these post-hoc justifications to be highly accurate. Unguided models sometimes also justify their inferences even if not prompted to do so, but to do so they must procuce hallucinations (or assume general world knowledge, such that “an animal cannot chase itself”). <ref> shows one rare failure mode where the model misclassifies whether it has already proved the goal, and thus does not proceed further. We can detect this failure mode with , since we have access to the Peano state and can ask the environment whether the goal was proved or not. In this way, as explained in <ref>, we can distinguish certified and uncertified answers. <ref> shows a case where LLaMA 13B misformalized (several) assumptions, whereas <ref> shows a similar case with (much more rare). The result in both cases is that the model cannot make progress in its formal inferences, instead of making invalid deductions. Again, since we can detect when the answer was not formally derived, we can avoid fine-tuning on these cases where the model still guesses the right answer but with unsond reasoning, as we exploited in <ref>.
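Tying the above together, the certified variant of STaR described earlier amounts to one extra filter in the bootstrapping loop: keep a rationale only if the final answer is correct and the goal was formally derived (which, as noted, can be read off the Peano state). A schematic sketch, with the sampling and fine-tuning routines left as placeholder callables and the hyperparameters taken from the experimental details above:

def star_iteration(model, problems, sample_fn, finetune_fn, strict: bool = True):
    """One STaR iteration. sample_fn(model, problem) is a placeholder returning an
    object with .final_answer, .text and .certified (goal formally proved, read off
    the Peano state); finetune_fn wraps the actual training loop."""
    kept = []
    for problem in problems:                 # e.g., 200 problems, 1-5 hops each
        sol = sample_fn(model, problem)      # guided chain-of-thought solution
        correct = sol.final_answer == problem.answer
        # "Strict Guided" keeps a rationale only when the correct answer was also
        # certified, filtering out accidentally-correct (unsound) reasoning.
        if correct and (sol.certified or not strict):
            kept.append((problem.statement, sol.text))
    # One epoch over the kept rationales (batch size 2, lr 2e-5, 8-bit Adam above).
    return finetune_fn(model, kept, epochs=1, batch_size=2, lr=2e-5)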
http://arxiv.org/abs/2306.05946v1
20230609150223
Digital Twin-Assisted Resource Demand Prediction for Multicast Short Video Streaming
[ "Xinyu Huang", "Wen Wu", "Xuemin Sherman Shen" ]
eess.IV
[ "eess.IV", "cs.NI" ]
Digital Twin-Assisted Resource Demand Prediction for Multicast Short Video Streaming Xinyu Huang1, Wen Wu2, and Xuemin (Sherman) Shen1 1Department of Electrical & Computer Engineering, University of Waterloo, Canada 2Frontier Research Center, Peng Cheng Laboratory, China Email: {x357huan, sshen}@uwaterloo.ca, [email protected] ============================================================================================================================================================================================================================================================================= In this paper, we propose a digital twin (DT)-assisted resource demand prediction scheme to enhance prediction accuracy for multicast short video streaming. Particularly, we first construct user DTs (UDTs) for collecting real-time user status, including channel condition, location, watching duration, and preference. A reinforcement learning-empowered K-means++ algorithm is developed to cluster users based on the collected user status in UDTs. We then analyze users' watching duration and preferences in each multicast group to obtain the swiping probability distribution and recommended videos, respectively. The obtained information is utilized to predict the radio and computing resource demand of each multicast group. Initial simulation results demonstrate that the proposed scheme can accurately predict resource demand. § INTRODUCTION With the proliferation of mobile devices and ubiquitous Internet access, an increasing number of individuals rely on short videos to stay up-to-date. According to a recent report from TikTok, the number of active global users per month is expected to reach 1.8 billion by the end of 2023, putting immense pressure on mobile networks <cit.>. Multicast technology can effectively enhance radio resource utilization by delivering short videos over multicast channels. Due to users' network dynamics and diversified characteristics <cit.>, short videos usually need to be transcoded into multiple bitrates in the cloud or edge servers to reduce the transmission delay. To facilitate effective multicast transmission and video transcoding, appropriate radio and computing resource reservations are necessary <cit.>. However, user status, such as channel conditions and swiping behaviors, is relatively dynamic, requiring frequent and accurate multicast group updates. Furthermore, users' swiping behaviors can lead to resource over-provisioning if precached segments are not played. To this end, we need to address the following challenges: how to accurately and promptly cluster users into multicast groups, and how to quantify the effect of users' swiping behaviors on resource reservation? The main contributions of this paper are summarized as follows. Firstly, we construct user digital twins (UDTs) to collect user status and propose a two-step method to realize accurate and fast multicast group construction. Secondly, we abstract multicast groups' swiping probabilities from the watching duration stored in UDTs and utilize them to predict resource demand. § DT-ASSISTED RESOURCE DEMAND PREDICTION SCHEME §.§ System Framework As shown in Fig. <ref>, we consider a multicast short video streaming scenario, which consists of multiple base stations (BSs), an edge server (ES), and UDTs.
∙ BSs: BSs utilize multicast technology to transmit short videos to each multicast group, and collect real-time user status (including channel condition, location, watching duration, and preference) to update the corresponding UDTs' data. Different data attributes are collected at different frequencies. ∙ ES: The ES connects to BSs and stores popular short videos at the highest representation to reduce frequent content retrieval from remote servers. The stored short videos can be transcoded to a lower representation to adapt to network dynamics. ∙ UDTs: UDTs are deployed on the edge server to store the status of individual users <cit.><cit.>. §.§ Scheme Procedure To facilitate accurate resource demand prediction through DTs in the considered scenario, the procedure consists of two main steps, as shown in Fig. <ref>. §.§.§ Multicast Group Construction Based on the user status stored in UDTs, we analyze user similarity and then place users with similar status into the same multicast group. User similarity is defined by the Euclidean distance between any two users' status vectors. To this end, we first utilize a one-dimensional convolutional neural network (1D-CNN) to compress the time-series UDT data. Secondly, we develop a learning-based method for multicast group construction. Specifically, a double deep Q-network (DDQN) is first adopted to determine the grouping number by mining users' similarities. Then, the K-means++ algorithm is utilized to perform fast user clustering based on the determined grouping number. §.§.§ Group-Based Resource Demand Prediction Once users are clustered into multicast groups, the corresponding group-level information, i.e., swiping probability distributions and recommended videos, is abstracted. Specifically, users' watching duration on each kind of video is utilized to update multicast groups' swiping probability distributions. The recommended videos are updated based on video popularity and users' preferences. Based on the abstracted group-level information, we can analyze multicast groups' average engagement time, video traffic, and computing consumption to predict radio and computing resource demand. § INITIAL SIMULATION RESULTS Simulations are conducted to evaluate the performance of the proposed scheme. We adopt the public short-video-streaming-challenge dataset to generate video bitrates and users' swiping behaviors. The resource reservation interval is 5 minutes. Users' preferences are updated based on preference labels and engagement time. Users are initially randomly placed on the University of Waterloo campus and then move along different trajectories. As shown in Fig. <ref>, we present the cumulative swiping probability and the radio resource demand for multicast group 1, where users watch News videos the most and Game videos the least. Based on the abstracted swiping probabilities, our proposed scheme achieves a prediction accuracy of up to 95.04% on radio resource demand. § CONCLUSION AND FUTURE WORK In this paper, we have proposed a DT-assisted resource demand prediction scheme for multicast short video streaming, which can effectively abstract multicast groups' swiping probability distributions and recommended videos. For future work, we will investigate how to effectively reserve radio and computing resources based on the predicted resource demand of multicast groups.
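To make the two-step group construction described above concrete, the following Python sketch clusters synthetic user-status vectors with K-means++. It is only an illustration: the feature values are made up, and the double deep Q-network that selects the grouping number in the proposed scheme is replaced here by a simple silhouette sweep so that the snippet stays self-contained.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Hypothetical compressed UDT features per user (e.g., channel quality,
# location coordinates, mean watching duration, preference score).
user_status = rng.normal(size=(200, 5))

# Stand-in for the DDQN step: pick the grouping number by a silhouette sweep.
best_k, best_score = 2, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, init="k-means++", n_init=10,
                    random_state=0).fit_predict(user_status)
    score = silhouette_score(user_status, labels)
    if score > best_score:
        best_k, best_score = k, score

# Fast clustering into multicast groups with the selected grouping number.
groups = KMeans(n_clusters=best_k, init="k-means++", n_init=10,
                random_state=0).fit_predict(user_status)
print(f"{best_k} multicast groups, sizes: {np.bincount(groups)}")
```

In the actual scheme, the resulting group labels would then be used to abstract per-group swiping probability distributions and recommended videos from the UDTs.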
§ ACKNOWLEDGMENT This work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and the Peng Cheng Laboratory Major Key Project under Grant PCL2021A09-B2.
http://arxiv.org/abs/2306.02381v1
20230604153124
Sparse Convolution for Approximate Sparse Instance
[ "Xiaoxiao Li", "Zhao Song", "Guangyi Zhang" ]
cs.DS
[ "cs.DS" ]
Sparse Convolution for Approximate Sparse Instance Xiaoxiao Li, Zhao Song, and Guangyi Zhang Computing the convolution A ⋆ B of two vectors of dimension n is one of the most important computational primitives in many fields. For the non-negative convolution scenario, the classical solution is to leverage the Fast Fourier Transform, whose time complexity is O(n log n). However, the vectors A and B could be very sparse, and we can exploit this property to accelerate the computation. In this paper, we show that when A ⋆ B_≥ c_1 = k and A ⋆ B_≤ c_2 = n-k hold, we can approximately recover all indices in supp_≥ c_1(A ⋆ B) with point-wise error of o(1) in O(k log (n) log(k)log(k/δ)) time. We further show that we can iteratively correct the error and recover all indices in supp_≥ c_1(A ⋆ B) correctly in O(k log(n) log^2(k) (log(1/δ) + loglog(k))) time. § INTRODUCTION Computing the convolution A ⋆ B of two vectors of dimension n is one of the most important computational primitives. It has also been widely used in many fields such as computer vision <cit.>, signal processing <cit.>, and graph mining <cit.>. For example, it has applications in problems like the 3-SUM problem and all-pairs shortest paths, where the entries of the vectors A and B are non-negative integers. In string algorithms, non-negative convolution is employed when computing the Hamming distance between a pattern and each sliding window of a text <cit.>. The classical algorithm to compute the non-negative convolution leverages the Fast Fourier Transform (FFT), and its running time complexity is O(n log n). Algorithms in <cit.> give O(n^2) for 3-SUM and related problems. Thanks to the key techniques mentioned in <cit.>, namely the celebrated Balog-Szemeredi-Gowers Theorem (BSG Theorem) <cit.> and FFT, the authors in <cit.> gave the first truly subquadratic algorithms for various problems associated with 3-SUM. The key techniques come from the BSG Theorem <cit.> and FFT. The BSG theorem has been improved by <cit.>; however, the result of <cit.> does not provide an efficient algorithm (with small running time) for the solution, but only shows its existence. However, in many scenarios, the vectors A and B could be very sparse, and we can exploit this sparsity to improve the running time compared to the classical FFT-based algorithm. <cit.> studied the exact k-sparse case and proved that k-sparse non-negative convolution can be reduced to dense non-negative convolution with an additive k loglog k term. We consider the problem of approximately k-sparse non-negative convolution and state the assumption as follows: [Approximate sparse non-negative convolution] Assume A,B ∈ℝ_+^n. Additionally, there exist c_1 = Ω(1) and c_2 = o(n^-2) such that A ⋆ B_≥ c_1 = k and A ⋆ B_≤ c_2 = n-k. Note that the previous work <cit.> only considers c_2 = 0; we can handle some error here. We summarize our contributions as follows: * We study the approximately k-sparse non-negative convolution problem and design an approximate sparse convolution algorithm (Algorithm <ref>) that can approximately recover all indices in supp_≥ c_1(A ⋆ B) with point-wise error of o(1) in O(k log (n) log(k)log(k/δ)) time. * We further design another algorithm (Algorithm <ref>) which can iteratively correct the error and recover all indices in supp_≥ c_1(A ⋆ B) correctly in O(k log(n) log^2(k) (log(1/δ) + loglog(k))) time. Roadmap We first discuss the related work in Section <ref>.
We then present some preliminary definitions and lemmas in Section <ref>. We present our result for approximate sparse convolution in Section <ref>. We then show how to iteratively correct the error in Section <ref>. We conclude our paper in Section <ref>. § RELATED WORK Sparse Convolution There has been a lot of previous work on accelerating sparse convolution computation <cit.>. Hash functions have wide applications in this context. The sparse convolution problem is the following: for vectors u, v ∈ℝ_+^n, compute their classical convolution u⃗ * v⃗=z⃗ (where z_k=∑_i=0^k u_i v_k-i ) in "output-sensitive" time, i.e., in time close to z⃗_0, the number of nonzeros in z⃗. The problem was raised by Muthukrishnan <cit.> and previously solved by Cole and Hariharan <cit.>. Cole et al. <cit.> obtained an O(k log^2(n) + polylog(n)) time complexity for the sparse non-negative convolution case with a Las Vegas algorithm. Their strategy incorporates a number of concepts, including encoding characters with complex entries before using convolution, and builds on linear hashing and string algorithms to identify supp(A⋆ B). Recent methods <cit.> rely largely on hashing modulo an arbitrary prime number. This approach loses one log factor as a result of the Prime Number Theorem's bound on the density of primes and obtains O(klog k) or even O(klog^2 k). Nakos et al. <cit.> achieve an O(k log^2(n) + polylog(n)) time complexity for the general sparse convolution case. There are several implementations of sparse convolution algorithms in <cit.>. Sparse convolution is also related to the sparse Fourier transform, which has also been widely studied <cit.>. Sparse Matrix Multiplication If most elements of a matrix are zero, we call it a sparse matrix. Storing and computing with all entries of such matrices, including the zeros, would waste space and time; therefore, it is important to store only the nonzero elements. In sequential programming languages, sparse matrices are standardly represented using an array with one element per row, each of which comprises a linked list of the nonzero values in that row and their column numbers. Sparse matrix multiplication is a compute kernel used in a variety of application fields. It plays an important role in many areas such as data analytics, graph processing, and scientific computing. In the past decades, there has been much previous work on algorithm-side optimization <cit.> and hardware acceleration <cit.>. These algorithms greatly accelerate the manipulation of sparse matrices. § PRELIMINARY Notations For any natural number n, we use [n] to denote the set {1,2,…,n}. We use A^⊤ to denote the transpose of matrix A. For a probabilistic event f(x), we define 1{f(x)} such that 1{f(x)} = 1 if f(x) holds and 1{f(x)} = 0 otherwise. We use Pr[·] to denote probability and 𝔼[·] to denote expectation, if it exists. For a matrix A, we use tr[A] for the trace of A. We use T_mat(a,b,c) to denote the time of multiplying an a × b matrix with another b × c matrix. We then provide several definitions: the ∂ notation, non-negative convolution, cyclic convolution, generalized norm and support, and rounding. Given a vector A∈ℝ^n, we define ∂ A to be the vector of the same dimension such that (∂ A)_i := A_i · i. We can choose A to be a length-7 vector, A =(3,1,2,1,2,1,1). Then we can compute ∂ A, which becomes ∂ A =(3,2,6,4,10,6,7). A visualization of A and ∂ A is shown in Figure <ref>. Given vectors A,B ∈ℕ^n, the vector C = A ⋆ B ∈ℕ^2n-1 is defined by C_k := ∑_i=0^n A_i · B_k-i.
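The ∂ operator and the convolution definition above can be checked directly in a few lines of NumPy. The sketch below reproduces the worked ∂A example and also verifies the product rule ∂(A ⋆ B) = ∂A ⋆ B + A ⋆ ∂B that the recovery step later relies on; note that the product rule requires weighting entries by the same 0-based index that the convolution output uses, whereas the worked example above weights by 1-based position.

```python
import numpy as np

A = np.array([3, 1, 2, 1, 2, 1, 1])   # the length-7 example above
B = np.array([0, 1, 0, 1, 0, 1, 0])   # a second toy vector for illustration

def deriv(v, one_based=True):
    """(partial v)_i = v_i * i; the worked example uses 1-based positions."""
    idx = np.arange(1, len(v) + 1) if one_based else np.arange(len(v))
    return v * idx

print(deriv(A))            # [ 3  2  6  4 10  6  7], matching the example above
print(np.convolve(A, B))   # dense non-negative convolution, length 2n-1

# Product rule partial(A*B) = partial(A)*B + A*partial(B), with 0-based index
# weights to match the 0-based output indexing of np.convolve.
lhs = deriv(np.convolve(A, B), one_based=False)
rhs = (np.convolve(deriv(A, one_based=False), B)
       + np.convolve(A, deriv(B, one_based=False)))
assert np.array_equal(lhs, rhs)
```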
If A = (1,2,4,3,5,0,7), B = (1,4,3,6,7,8,9), then C=(1, 6, 15, 31, 48, 75, 93, 129, 116, 109, 94, 56, 63). A visualization of A, B and C is shown in Figure <ref>. If A = (0,1,0,1,0,1,0), B = (0,1,0,1,0,1,0), then C=(0, 0, 1, 0, 2, 0, 3, 0, 2, 0, 1, 0, 0). A visualization of A, B and C is shown in Figure <ref>. If A = (0,1,0,1,0,1,0), B = (0,1,0,1,1,1,1), then C=(0, 0, 1, 0, 2, 1, 3, 2, 2, 2, 1, 1, 0). A visualization of A, B and C is shown in Figure <ref>. Given the definition of non-negative convolution, we want to solve the following sparse non-negative convolution problem. Given vectors A,B ∈_+^n, we want to recover a vector D ∈ℝ^n such that: (D) =  _≥ c_1(A ⋆ B) D_j =   (A ⋆ B)_j + o(1)   ∀ j ∈_≥ c_1(A ⋆ B) We state our main result in the following two theorems: Theorem <ref> shows that we can compute approximate sparse non-negative convolution for A ⋆ B. Let c_1 = Ω(1) and c_2 = o(n^-2). Suppose that the Assumption <ref> holds. There is an Algorithm <ref> that runs in time O(k log (n) log(k)log(k/δ)). recovers all index in _≥ c_1(A ⋆ B) with point-wise error of o(1), i.e. * (D) = _≥ c_1(A ⋆ B), * D_j = (A ⋆ B)_j + o(1) for all j ∈_≥ c_1(A ⋆ B). holds with probability at least 1-δ. Theorem <ref> shows that we can iteratively correct the error for the approximate sparse non-negative convolution. Let c_1 = Ω(1) and c_2 = o(n^-2). Suppose that the Assumption <ref> holds. There is an algorithm (Algorithm <ref>) that runs in time O(k log(n) log^2(k) (log(1/δ) + loglog(k))) recovers all index in _≥ c_1(A ⋆ B) correctly, i.e. * (D) = _≥ c_1(A ⋆ B) * D_j = (A ⋆ B)_j for all j ∈_≥ c_1(A ⋆ B) holds with probability at least 1-δ. The cyclic convolution of two length-n vectors A, B is the length-n vector A ⋆_n B with (A ⋆_n B)_i := ∑_j=0^n-1 A_j B_(i-j) n. We define support as follows: (A)   := { i ∈ [n] : A_i ≠ 0 } We define ℓ_0 norm, A_0   := |(A)|. We define ℓ_∞ norm as follows, A_∞  := max_i ∈ [n] |A_i|. For vector A ∈^n, we define its ≥ C-norm and ≤ c-norm as follows: * A_≥ C := ∑_i ∈ [n] 1 [ A_i ≥ C ]. * A_≤ c := ∑_i ∈ [n] 1 [ A_i ≤ c]. We define a matrix A ∈^n with ≥ C-support as (A)_≥ C := { i ∈ [n] : A_i ≥ C }. Define function int(·): ↦ℕ which rounds a number to its closest integer. We say ι is an affine operator if ι(A) - ι(B) = ι(A-B). In the following, when we refer to “isolated” or “non-isolated” elements, we only consider those in _≥ c_1(A). Let g(x) = x  mod  p where p is a random prime in the range [m, 2m]. We say that an index x ∈_≥ c_1(A ⋆ B) is “isolated” if there is no other index x' ∈_≥ c_1(A ⋆ B) with g(x') ∈ (g(x) +{-2p, -p, 0, p, 2p }) m. When combined with the ideal hash function ι gives ι(∂(A ⋆ B))=ι(∂ A) ⋆_m ι(B)+ι(A) ⋆_m ι(∂ B). The b-th coordinate of this vector is ι(∂(A ⋆ B))_b=∑_i: ι(i)=b i ·(A ⋆ B)_i which can be accessed by computing the length- m convolutions ι(∂ A) ⋆_m ι(B) and ι(A) ⋆_m ι(∂ B) and adding them together. By setting m=O(k), we can now infer a constant fraction of elements i ∈supp(A ⋆ B) by performing the division ι(∂(A ⋆ B))_b/ι((A ⋆ B))_b=∑_i: ι(i)=b i ·(A ⋆ B)_i/∑_i: ι(i)=b(A ⋆ B)_i for all b ∈[m]. This yields the locations of all isolated elements in supp(A ⋆ B) under ι. We will leverage the following lemma of hashing modulo a random prime during our analysis. With modular hashing, we let the hash function g(x) = x  mod  p where p is a random prime in the range [m, 2m], the value x is an integer hash code generated from the key. Then the following properties hold: Universality: For distinct keys x, y ∈ [U]: [g(x) = g(y)] ≤ 2 log(U) / m. 
Affinity: For arbitrary keys x, y: g(x) + g(y) = g(x + y) p. We also generalize the definition on g(x) to g(A) for a vector A∈^n. Let g(x) = x p. Then g(A) ∈^p where g(A)_i := ∑_j∈ [n], g(j)=i A_j. A visualization of g(x) = x p and Algorithm <ref> Line <ref> is shown in Figure <ref>. We will also use the Hoeffding bound to obtain high success probability. Let X_1,⋯,X_n be n independent bounded variables in [a_i,b_i]. Let X:=∑_i=1^n X_i, then we have [|X - [X]| ≥ t] ≤ 2exp(-2t^2/∑_i=1^n (b_i-a_i)^2). § APPROXIMATE SPARSE NON-NEGATIVE CONVOLUTION In this section, we present our proposed approximate sparse convolution algorithm architecture as well as corresponding lemmas and proofs. We first describe our approximate sparse non-negative convolution algorithm in Algorithm <ref> and theorem stating the approximation guarantee and running time complexity. In the following lemma, we want to prove for all indexes i ∈_≥ c_1(A ⋆ B), it is isolated in at least L/2 hashing functions with high probability. Suppose that the Assumption <ref> holds, with probability at least 1-δ, for all i ∈_≥ c_1(A ⋆ B), i is isolated in at least L/2 hashing functions in {g^(l)}_l=1^L. Recall that A,B ∈_+^n and there exist c_1 = Ω(1) and c_2 = o(n^-2) such that A ⋆ B_≥ c_1 = k and A ⋆ B_≤ c_2 = n-k according to Assumption <ref>. Let g(x) = x  mod  p where p is a random prime in the range [m, 2m]. By Lemma <ref>, we have [i is non-isolated] ≤   | _≥ c_1 (A ⋆ B) | ·[ g(x) = g(y) ] ≤   k ·[ g(x) = g(y) ] ≤   k · 2 log(U) / m =  2k log n/m ≤  1/C log k ≤  1/4 where the first step follows from definition of isolated index, the second step follows from | _≥ c_1 (A ⋆ B) | =k, the third step follows from [g(x) = g(y)] ≤ 2 log(U) / m, the forth step follows from U=n, the fifth step follows from m ≥ C · k ( log n ) · (log k) , the last step follows from C ≥ 4 and log k ≥ 1. For every fixed i ∈_≥ c_1 (A⋆ B), since hash functions are i.i.d chosen, by Lemma <ref>, with probability at least 1-δ/k, 1/L∑_l=1^L 1[i is non-isolated in g^(l)] ≤  [i is non-isolated] + 2√(log(k/δ)/L) ≤   1/4 + 2√(log(k/δ)/L) ≤   1/4 + 1/4 ≤   1/2 . where the first step follows from 1/L∑_l=1^L 1[i is non-isolated in g^(l)] ≤[i is non-isolated] + 2√(log(k/δ)/L), where the second step follows from Eq. (<ref>), the third step follows from L ≥ 100 log (k/δ), and the last step follows from simple algebra. Therefore it follows by union bound for all i ∈_≥ c_1 (A⋆ B). This completes the proof. [!ht]Approximate sparse non-negative convolution. The goal of the following lemma is to prove that the value V_i^l computed in SparseConvolution in Algorithm <ref> satisfies V^(l)_i = (A ⋆ B)_int(x) + o(1). Consider l∈ [L] and its corresponding hash function g^(l) : [n] → [p^(l)]. Let V^(l) := g^(l)(A) ⋆ g^(l)(B) W^(l) := g^(l)(∂ A)⋆ g^(l)(B) + g^(l)(A) ⋆ g^(l)(∂ B) as defined in Line <ref>, <ref>. Let i∈_≥ c_1 (A⋆ B) be some coordinate and let x := W_i^(l)/V_i^(l) as stated in Line <ref>. Suppose i is isolated with respect to g^(l), then x satisfies |x - int(x)| = o(1) and the V_i^(l) in Algorithm <ref> Line <ref> satisfies V^(l)_i = (A ⋆ B)_int(x) + o(1). There exists unique x̂∈_≥ c_1(A ⋆ B) with g^(l)(x̂) = i. In this case, we have V^(l)_i =  ∑_y: ι^(l)(y) = i (A ⋆ B)_y =   (A ⋆ B)_x̂ + o(1), where the first step follows from V^(l)_i = ∑_y: ι^(l)(y) = i (A ⋆ B)_y (see Algorithm <ref>), the second step follows from ∑_y: ι^(l)(y) = i (A ⋆ B)_y = (A ⋆ B)_x̂ + o(1). 
Next, we can rewrite x as follows: x =  W_i^(l)/ V_i^(l) =  ι^(l) (∂(A ⋆ B))_i/ι^(l) (A ⋆ B)_i =  ∑_y: ι^(l)(y) = i y · (A ⋆ B)_y/∑_y: ι(x) = i (A ⋆ B)_y =  x̂ + o(1). where the first step follows from definition of x, the second step follows from definition of W_i^(l) and V_i^(l), the second step follows from Eq. (<ref>) and Lemma <ref>, the third step follows from Eq. (<ref>). Therefore |x - x̂| = o(1) and |V^(l)_i -(A ⋆ B)_x̂| = o(1). This completes the proof. With the premise of Lemma <ref> and Lemma <ref>, it is possible to prove the approximation guarantee and time complexity of SparseConvolution in Algorithm <ref> in Theorem <ref>. Suppose that the Assumption <ref> holds, let c_1 = Ω(1) and c_2 = o(n^-2). Algorithm <ref> runs in time O(k log (n) log(k)log(k/δ)) recovers all index in _≥ c_1(A ⋆ B) with point-wise error of o(1), i.e. * (D) = _≥ c_1(A ⋆ B) * D_j = (A ⋆ B)_j + o(1) for all j ∈_≥ c_1(A ⋆ B) holds with probability at least 1-δ. We initiate the high probability event in Lemma <ref>. In this event, for each i ∈ I, we have L/2 ≤ |F_i| ≤ L, so we must have D_i = median(F_i) = V^(l)_j for some l and j such that i is isolated and i = W_j/V_j. Then by Lemma <ref>, this implies D_i = (A ⋆ B)_i+o(1) and _≥ c_1(A ⋆ B) ⊆ I. Hence the first statement is proven. For running time, we notice the followings * Line <ref>-<ref> takes O(k log(n) log^2(k)) time. * Line <ref>-<ref> takes one pass of all recovery, which takes O(mL) time. * Line <ref> and Line <ref> takes totally O(|C|)=O(mL) time. We could build up a perfect hash function, scanning though all pairs in C and take out all possible indexes and its corresponding value in linear time. * Line <ref> takes O(mL) time in total.For each i∈ I, take median value of a set F_i takes O(|F_i|) time. Because |F_i| ≤ L and |I|≤ m, overall it takes O(∑_i ∈ I |F_i|) = O(mL) time. Since m = O(k log(n)log(k)), L = O(log(k/δ)), then mL = O(k log(n) log(k) log(k/δ)). To sum up, the total running time is: O(klog (n) log^2(k) + mL) = O(k log (n) log(k)log(k/δ)). § ITERATIVELY CORRECTING THE ERRORS The previous section illustrates the architecture of algorithms to approximate convolution computation. In this section, we will show how to iteratively corrects the errors in Algorithm <ref>. Let c_1 = Ω(1) and c_2 = o(n^-2). Suppose Assumption <ref> holds. There is an algorithm (Algorithm <ref>) that runs in time O(k log(n) log^2(k) (log(1/δ) + loglog(k))) recovers all index in _≥ c_1(A ⋆ B) correctly, i.e. * (D) = _≥ c_1(A ⋆ B) * D_j = (A ⋆ B)_j for all j ∈_≥ c_1(A ⋆ B) holds with probability at least 1-δ. The proof comes from Lemma <ref> and Lemma <ref>. This completes the proof. [!ht]Sparse non-negative convolution. We use the following Lemma from <cit.>. Since this result only depends on hash function and Lines <ref>–<ref>, the proof is similar. Let ℓ be any level. If A ⋆ B - C^l-1_≥ c_1≤2^-1.5^l-1 k/log^2(k), then with probability 1 - δ / (2L), there will be at most 2^-1.5^l k/2log^2(k) non-isolated elements at level l. We define L as L := Θ( loglog k ) For each l ∈ [L], we define R_l := Θ(log(L/δ)) / 1.5^l-1. Consider r∈ [R_l] and its corresponding hash function g_r : [n] → [p_r]. We define V_r and W_r as follows V_r := g_r(A) ⋆ g_r(B)- g_r(C^l-1) W_r := g_r(∂ A) ⋆ g_r(B) + g_r(A) ⋆ g_r(∂ B) - g_r(∂ C^l-1). Also see in Line <ref> and <ref> in Algorithm <ref>. We define r^* := max_r ∈ [R_l] |_≥ c_1 (V_r)|. Denoting the number of non-isolated elements at level l by r, we have A ⋆ B - C^l_≥ c_1≤ 2r. 
Focus on arbitrary l, and assume that we already picked a hash function g in Algorithm <ref> Lines <ref>–<ref>. In Definition <ref>, we have provided the definition of V and W. By the affinity of g it holds that V = g(A ⋆ B - C^l-1), by additionally using the product rule W = g(∂(A ⋆ B - C^l-1)). Now focus on an arbitrary i ∈ [n]. There are three cases: Case 1. For all x ∈ [n] with g(x) = i, there is 0 ≤ (A ⋆ B - C^l-1)_x ≤ c_2 · n^l-1. In this case, we have V_i =  ∑_x: ι(x) = i (A ⋆ B)_x ≤   n · c_2 =   o(1). Thus i ∉_≥ c_1(V). Case 2. There exists unique x, such that x ∈_≥ c_1(A ⋆ B - C^l-1) with g(x) = i. In this case, we have V_i = ∑_x: ι(x) = i (A ⋆ B)_x = x + o(1) and ι (∂(A ⋆ B))_i/ι (A ⋆ B)_i =  ∑_x: ι(x) = i x · (A ⋆ B)_x/∑_x: ι(x) = i (A ⋆ B)_x =   x + o(1). Therefore we successfully recover C_x^l = int( V_1) = (A ⋆ B)_x. Case 3. There exists multiple x ∈_≥ c_1(A ⋆ B - C^l-1) with g(x) = i. In this case, we have V_i = ∑_x: ι(x) = i (A ⋆ B - C^l-1)_x and ι (∂(A ⋆ B - C^l-1))_i/ι (A ⋆ B - C^l-1)_i = ∑_x: ι(x) = i x · (A ⋆ B - C^l-1)_x/∑_x: ι(x) = i (A ⋆ B - C^l-1)_x. If V_i ≥ c_1 and ι (∂(A ⋆ B - C^l-1))_iι (A ⋆ B - C^l-1)_i = x̂, then (A ⋆ B - C^l)_x̂ = Ω(1). Otherwise this iteration does not recover any coordinate in A ⋆ B - C^l-1. In Algorithm <ref>, we can correctly outputs C = A ⋆ B with probability 1 - δ. We show that with probability 1-δ it holds that A ⋆ B-C^ℓ_≥ c_1≤ 2^-1.5^ℓ k log ^-2(k) for all levels ℓ. At the last level, L= log _1.5log k = O(loglog k), we must have A ⋆ B-C^ℓ_≥ c_1=0 . and thus A ⋆ B=C^ℓ=C. The proof is by induction on ℓ∈[L+1]. For ℓ=0, the statement is true assuming that the SparseConvolution in Algorithm <ref> with parameter δ / 2 ≤log ^-2(k) / 2 succeeds. For ℓ>1, we appeal to the previous lemmas: By the induction hypothesis we assume that A ⋆ B- C^ℓ-1_≥ c_1≤ 2^-1.5^ℓ-1 k log ^-2(k). Hence, by Lemma <ref>, the algorithm picks a hash function g under which only 2^-1.5^ℓ k log ^-2(k) / 2 elements are non-isolated at level ℓ. By Lemma <ref> it follows that A ⋆ B-C^ℓ_≥ c_1≤ 2^-1.5^ℓ k log ^-2(k), which is exactly what we intended to show. For ℓ=0, the error probability is δ / 2. For any other level, the error probability is 1-δ /(2 L) by Lemma <ref> and there are L such levels in total. Taking a union bound over these levels, we can obtain the desired error probability of 1-δ. This completes the proof. There is an algorithm (Algorithm <ref>) that can compute C = A ⋆ B in O(k log(n) log^2(k) (log(1/δ) + loglog(k))) time. The running time of SparseConvolution in Algorithm <ref> can be computed in the following steps: * Line <ref> and Line <ref> takes O(k log(n) log^2(k)) time to do the FFT. We can bound the number of iterations by : ∑_ℓ=1^L[2 log (2 L / δ)/1.5^ℓ-1] =   (2 log (2 L / δ)) ∑_ℓ=1^L1/1.5^ℓ-1 ≤   (2 log (2 L / δ))· 10 =   O(log (L / δ)) , where the second step follows from the sum of infinite geometric series. Therefore, the total time spent on FFT is O(k log(n) log^2(k) log(1/δ)). * Line <ref> to Line <ref> take O(mL) = O(k log(n) log^2(k) loglog(k)). Therefore, the overall time complexity is: O(k log(n) log^2(k) log(1/δ)) + O(k log(n) log^2(k) loglog(k)) =   O(k log(n) log^2(k) (log(1/δ) + loglog(k))) This completes the proof. § CONCLUSION The computation of the convolution A ⋆ B of two vectors of dimension n is considered to be one of the most important and fundamental computational primitives in a wide variety of fields. 
Utilizing the Fast Fourier Transform, which has a time complexity of O(n log n), is the traditional way to solve the non-negative convolution problem. However, the result A ⋆ B might have a very sparse representation, a property that we can use to our advantage to speed up the computation. In this paper, we show that for the approximately k-sparse case, we can approximately recover all indices in supp_≥ c_1(A ⋆ B) with point-wise error of o(1) in O(k log (n) log(k)log(k/δ)) time. We further show that we can iteratively correct the error and recover all indices in supp_≥ c_1(A ⋆ B) correctly in O(k log(n) log^2(k) (log(1/δ) + loglog(k))) time.
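To complement the summary above, here is a self-contained NumPy sketch of the hashing-modulo-a-prime recovery step that both algorithms are built on: A, B and their ∂-weighted versions are folded into buckets modulo a prime, cyclic convolutions are taken, and each isolated bucket b yields an (index, value) pair via W_b / V_b. The test vectors, the bucket-count heuristic, and the exact-sparse setting (c_2 = 0) are illustrative choices rather than the parameters analyzed in the theorems, and the comparison against the dense result is only there to verify the sketch.

```python
import numpy as np

def next_prime(m):
    """Smallest prime >= m (trial division; the analysis samples a random
    prime in [m, 2m], but a fixed prime suffices for this illustration)."""
    def is_prime(q):
        return q >= 2 and all(q % d for d in range(2, int(q ** 0.5) + 1))
    while not is_prime(m):
        m += 1
    return m

def fold(v, p):
    """g(v)_i = sum of v_j over all j with j mod p == i."""
    out = np.zeros(p)
    np.add.at(out, np.arange(len(v)) % p, v)
    return out

def cyclic_conv(u, w, p):
    """Length-p cyclic convolution via a full convolution plus wrap-around."""
    full = np.convolve(u, w)
    out = full[:p].copy()
    out[: len(full) - p] += full[p:]
    return out

rng = np.random.default_rng(0)
n = 1 << 12
A, B = np.zeros(n), np.zeros(n)
A[rng.choice(n, 8, replace=False)] = rng.integers(1, 10, 8)
B[rng.choice(n, 8, replace=False)] = rng.integers(1, 10, 8)

C_true = np.convolve(A, B)          # exact-sparse ground truth (c_2 = 0)
k = np.count_nonzero(C_true)
p = next_prime(20 * k)              # heuristic; the algorithm uses m = O(k log n log k) buckets

dA, dB = A * np.arange(n), B * np.arange(n)     # the partial operator, 0-based
V = cyclic_conv(fold(A, p), fold(B, p), p)
W = cyclic_conv(fold(dA, p), fold(B, p), p) + cyclic_conv(fold(A, p), fold(dB, p), p)

hits = 0
for b in np.flatnonzero(V > 0.5):
    x = int(round(W[b] / V[b]))     # exact location whenever bucket b is isolated
    if 0 <= x < len(C_true) and abs(C_true[x] - V[b]) < 1e-6:
        hits += 1                   # correct (index, value) pair recovered
print(f"{hits} of {k} nonzero entries of A * B recovered in one round")
# The full algorithm repeats this with L independent primes and takes medians,
# so that every support element is isolated in most rounds.
```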
http://arxiv.org/abs/2306.09849v1
20230616134655
On Evolvability and Behavior Landscapes in Neuroevolutionary Divergent Search
[ "Bruno Gašperov", "Marko Đurasević" ]
cs.NE
[ "cs.NE" ]
On Evolvability and Behavior Landscapes in Neuroevolutionary Divergent Search Bruno Gašperov and Marko Đurasević Evolvability refers to the ability of an individual genotype (solution) to produce offspring with mutually diverse phenotypes. Recent research has demonstrated that divergent search methods, particularly novelty search, promote evolvability by implicitly creating selective pressure for it. The main objective of this paper is to provide a novel perspective on the relationship between neuroevolutionary divergent search and evolvability. In order to achieve this, several types of walks from the literature on fitness landscape analysis are first adapted to this context. Subsequently, the interplay between neuroevolutionary divergent search and evolvability under varying amounts of evolutionary pressure and under different diversity metrics is investigated. To this end, experiments are performed on Fetch Pick and Place, a robotic arm task. Moreover, the performed study in particular sheds light on the structure of the genotype-phenotype mapping (the behavior landscape). Finally, a novel definition of evolvability that takes into account the evolvability of offspring and is appropriate for use with discretized behavior spaces is proposed, together with a Markov-chain-based estimation method for it. § INTRODUCTION Recent years have seen a strong surge of interest in a number of evolutionary computation and population-based search approaches rooted in the ideas of divergent search and open-endedness <cit.>. When combined with neuroevolution, these approaches offer a promising alternative <cit.> to more canonical gradient-based algorithms for training deep reinforcement learning agents, especially for tasks that include sparse or deceptive rewards, or require the generation of a large number of high-performing yet behaviorally diverse solutions. Within this realm, the novelty search <cit.> and quality diversity <cit.> families of approaches have gained considerable attention in the research community. Novelty search is a class of divergent search algorithms based on abandoning objectives altogether and solely pursuing behavioral novelty instead. Novelty is typically evaluated by comparing the behavior of an individual to the behavior of the current population and a record of previously encountered individuals, known as the archive. On the other hand, quality-diversity approaches (as the name itself implies) consider both the objective quality of solutions and their diversity, aiming to discover a diverse set of high-performing solutions that occupy different niches, each of which represents a different type of behavior. As these algorithms "illuminate" the fitness potential of feature space areas, they are sometimes referred to as illumination algorithms <cit.>. The underlying thread of all divergent search algorithms is their emphasis on enabling the necessary conditions for open-ended processes to occur, which requires a high level of what is known as "evolvability". The concept of evolvability has been frequently studied, both in the area of natural evolution <cit.> and in evolutionary computing <cit.>. Despite the abundance of various, even somewhat conflicting definitions <cit.>, we see evolvability as the capacity of an individual (genotype) to produce offspring with diverse phenotypes.
Following this, evolvability can be considered an indicator of the individual's ability to exhaustively explore the solution space through its offspring. While the importance of evolvability for divergent search can hardly be overstated, given that it increases the potential for future evolution <cit.>, this area of research remains relatively unexplored, especially in combination with neuroevolution. It has been shown that novelty search in itself promotes evolvability <cit.> (measured via reachability and sampling uniformity of the offspring) on both individual and population level <cit.>, with some caveats[A distinction should be made between novelty search for neuroevolution of weights and neuroevolution of topologies.] <cit.>. Yet novelty search boils down to greedily choosing the most diverse individuals within each generation without taking into account their other characteristics, including evolvability itself, which might lead to less diversity in the long run. Similar considerations have led researchers <cit.> to propose algorithms that instead search directly for evolvability. The successes of these approaches in generalizing well and producing higher evolvability even in unseen testing environments, as well as in yielding solutions that can be quickly adapted for different tasks, are demonstrated. A somewhat different path is taken by Katona et al. <cit.> who propose the Quality Evolvability Evolutionary Strategy. In their work, a population of diverse and well-performing solutions is obtained indirectly - by finding a single individual with such a distribution of offspring. In <cit.>, the authors list several properties, characteristics, and mechanisms which were demonstrated to be conducive to improving evolvability, including extinction events <cit.> and reduction of the cost of connections between network nodes <cit.>. Ferigo et al. <cit.> first define evolvability as the mean difference in fitness between the parents and their offspring, and then use a grid-like structure for studying the interplay between evolvability and fitness. Their main findings suggest that evolvability is not an intrinsic property of the fitness landscape, but that it rather heavily depends on the used evolutionary algorithm. Finally, we mention the newer work of Doncieux et al. <cit.> who treat evolvability as reachability in the task space and provide a theoretical perspective on the underlying problem. Our research lies at the intersection of (behavior) landscape analysis, neural network weight neuroevolution, and evolvability research, exploring the strong links between the three topics. Following definitions from <cit.>, we are primarily concerned with individual evolvability, measured via a form of reachability, as opposed to population evolvability <cit.>. Furthermore, unlike the authors in <cit.>, we study behavior for its own sake and do not rely on any particular fitness function. We particularly explore the effect of the use of different diversity metrics and levels of evolutionary pressure on evolvability in the context of neuroevolutionary divergent search. The main contributions of this study are the following. First, we introduce a number of metrics related to evolvability and behavior landscape analysis. Then we adapt the concept of walks from the existing literature on fitness landscape analysis to the context of behavior landscape analysis. 
We also propose "dissimila", points with low evolvability that play a role in novelty search that we suspect is similar to that of local optima in standard fitness-driven search. Second, we propose a diversity metric based on the use of Gaussian kernel density estimates. Third, we study the sensitivity of individual evolvability to the level of evolutionary pressure. By varying the level of evolutionary pressure present in the environment, we aim to generalize existing research on the connections between novelty search and evolvability. Positive links between evolvability and the level of evolutionary pressure present in the environment would provide additional evidence on the significance of novelty search for promoting evolvability. Fourth, we investigate the effect of different diversity metrics on evolvability, with a focus on the aforementioned kernel density estimation-based metric. A suitable diversity metric is a key heuristic in a divergent search algorithm as it plays a vital role in assessing the diversification of the solutions produced by the algorithm. Fifth and finally, we put forth a novel definition of evolvability that also considers the evolvability of offspring and is suitable for use with discretized behavior spaces, together with an associated Markov-chain-based estimation method. The paper is organized as follows. Section 2 introduces definitions and metrics which represent the prerequisites for further work. In particular, concepts such as evolvability, sensitivity, and walks are expounded on in great detail, and a number of diversity metrics are listed. Section 3 presents the experimental setup, including the task description, operators, neural network architecture, and hyperparameters. In Section 4, the experimental results are presented. Section 5 proposes a novel evolvability definition together with an associated estimation method. Finally, Section 6 concludes the paper by summarizing the findings and offering suggestions for future research in the field. § DEFINITIONS AND METRICS §.§ Behavior function and descriptors A behavior function ϕ : Θ↦ℬ is a mapping from genotypes (in neuroevolution represented by parameters of a neural network) to phenotypes (behavior vectors): ϕ(θ) = 𝐛, where Θ is the set of genotypes, ℬ is the behavior space (BS), i.e., the set of all behavior vectors (BVs), θ is a genotype, and 𝐛 its phenotype. Typically, ℬ⊆ℝ^n with n ≥ 2. The components of a BV are referred to as behavior descriptors. For example, a common choice of behavior descriptors in evolutionary robotics is given by the final position of the robot's body along two dimensions, i.e., 𝐛 = (x_T, y_T) where T is the terminal time. In some approaches (e.g., the MAP-Elites algorithm <cit.>), the BS is split into niches - regions or subspaces containing solutions with similar behavior, which can be obtained by simply discretizing the BS into a grid. As an alternative, centroidal Voronoi tessellation <cit.>, which ensures scalability to high-dimensional BSs, might be used. The use of niches gives rise to an alternative categorical representation of BVs in the form of one-hot vectors 𝐛 = (…, 1_behavior in the niche i, … ), with the i-th entry set to 1 if the BV belongs to the i-th niche, and 0 otherwise. However, it is important to notice that this encoding discards the original relationships between the niches and eliminates any naturally occurring distance metrics between them. 
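As a small illustration of the niche-based encoding just described, the sketch below maps a two-dimensional behavior vector to its niche index and one-hot representation on a uniform grid; the grid bounds and resolution are arbitrary example values, and a centroidal Voronoi tessellation would replace the grid in higher-dimensional behavior spaces.

```python
import numpy as np

# Hypothetical 2D behavior space (e.g., a final position), discretized into
# an N_BINS x N_BINS uniform grid of niches.
LOW, HIGH, N_BINS = np.array([0.0, 0.0]), np.array([2.0, 2.0]), 10

def niche_index(b):
    """Map a behavior vector b to the index of the grid niche containing it."""
    cell = np.clip(((b - LOW) / (HIGH - LOW) * N_BINS).astype(int), 0, N_BINS - 1)
    return int(cell[0] * N_BINS + cell[1])

def one_hot(b):
    """Categorical representation of b: 1 in its niche, 0 elsewhere."""
    v = np.zeros(N_BINS * N_BINS)
    v[niche_index(b)] = 1.0
    return v

b = np.array([1.37, 0.42])
print(niche_index(b), int(one_hot(b).sum()))   # niche id and a sanity check
```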
§.§ Behavior landscape A behavior landscape is an abstraction that represents the genotype-behavior mapping, i.e., the function ϕ. We define it analogously to the fitness landscape, which deals with genotype-fitness mapping and has been thoroughly studied in various contexts, and the behavior-fitness mapping, considered widely in quality-diversity approaches. This triad of mappings forms a bridge between classical evolutionary algorithms and new illumination-based approaches. Behavior landscapes are intricate and complex structures influenced by a number of factors, such as the choice of the behavior function, the topology of the (neural network) controller[A controller simply determines the mapping from the set of observations (states) to the set of actions, which can be represented by a reinforcement learning agent.], the interaction of the resulting controls with the environment, and the environment's own internal mechanics. They are typically highly rugged, showing significant epistasis <cit.>, and more difficult to visualize than fitness landscapes <cit.>, given the possible high dimensionality of the BS. A fitness landscape is usually considered difficult if it has a large number of peaks and valleys (local optima) or if the paths to global optima are not easy to traverse. In behavior landscape analysis, different criteria are required, and ruggedness is not necessarily a hindrance, as it is closely related to diversity. A difficult behavior landscape could be seen as one in which a divergent search algorithm struggles to explore the BS easily due to the presence of inaccessible regions. Therefore, different topographical aspects (compared to classical fitness landscape analysis), such as the distribution of different niches in the behavior landscape, should be scrutinized. On a related note, we remark that some studies suggest that behaviorally diverse solutions lie concentrated in a small subset of the genotype space called the hypervolume[In <cit.>, the emphasis is put on elite hypervolumes in the sense of quality-diversity.] <cit.>. It has also been noted <cit.> that, in the context of novelty search, the fitness function is dynamic and contingent upon the set of individuals comprising the current population and archive. In contrast, the behavior landscape is static[In stochastic environments, where behaviors are noisy, behavior descriptors (which are now random variables) can be replaced by their expected values, thereby preserving the static nature of the behavior function.], rendering it less recalcitrant to analysis. Yet the research area remains mainly understudied with some notable exceptions <cit.>, while even the broader domain of neuroevolutionary fitness landscapes has received very limited scholarly attention <cit.>. §.§ Local sensitivity A behavior function ϕ is said to be locally sensitive at θ if small steps around it can lead to large changes in the ensuing behavior. More formally, the local sensitivity of a behavior function ϕ at θ∈Θ can be defined as: LS_ϕ(θ) = θ'∈𝒩(θ) max |ϕ(θ)-ϕ(θ')|, where 𝒩(θ) is the set of all neighbours of θ, associated with a certain mutation operator ℳ. θ' is a neighbor of θ if and only if θ' can be reached from θ through a single mutation. This relation is not necessarily symmetric[E.g., consider removal of a neural network weight as a mutation operator.]. Large LS_ϕ(θ) values point to the existence of neighbors that are significantly behaviorally different from θ but say nothing about their frequency. 
On the contrary, small LS_ϕ(θ) values indicate high levels of behavioral neutrality <cit.> locally around θ. Note that, when training neural networks via neuroevolution, mutations are often given as random samples from a multivariate (normal) distribution used for perturbing its weights, in which case 𝒩(θ) = Θ, rendering the notion of the set of neighbors meaningless. However, this can be easily addressed by truncating the underlying distribution or by otherwise enforcing genotypic proximity given by |θ-θ'| < c, for some positive constant c. As previously hinted at, Eq. <ref> is sensitive to outliers due to the use of the operator. A more robust alternative can be provided. First, consider the sampling distribution over the set of neighbors 𝒩(θ) induced by ℳ and denote it by ℱ. Now simply define the expected local sensitivity as: LS_ϕ^*(θ) = θ'∼ℱ(θ'∈𝒩(θ)) 𝔼|ϕ(θ)-ϕ(θ')|. Eq. <ref> measures the expected distance between θ and its randomly selected neighbor. It can be estimated by sampling and taking the mean value: LS_ϕ^*(θ) = 1/𝐜𝐚𝐫𝐝(𝒮(θ))∑_θ'∈𝒮(θ) |ϕ(θ)-ϕ(θ')|, where 𝒮(θ) ⊆𝒩(θ) is a sample from the set of neighbours of θ and 𝐜𝐚𝐫𝐝(…) denotes cardinality. §.§ Global sensitivity The global sensitivity of ϕ is defined as: GS_ϕ = θ' ∈Θ , θ”∈𝒩(θ') max |ϕ(θ')-ϕ(θ”)|. It is equal to the local sensitivity of the genotype θ that has the largest local sensitivity. The expected global sensitivity is defined in analogy to Eq. <ref> but is not provided here for the sake of brevity. §.§ Evolvability Evolvability <cit.> can be conceptualized as a gauge of an individual's capacity to produce offspring with mutually diverse phenotypes (in our case BVs). Although there is no consensus on its definition, we use the following one: η_ϕ(θ) = θ', θ”∈𝒩(θ) max |ϕ(θ')-ϕ(θ”)|, where η_ϕ(θ) denotes the evolvability of solution θ. Hence, evolvability corresponds to the diameter of the cluster comprising the neighbors of θ. The same caveats that apply to Eq. <ref> apply here as well. By comparing Eq. <ref> and Eq. <ref> it is clear that, due to the triangle inequality, η_ϕ(θ) ≤ 2 LS_ϕ (θ), and the two metrics are expected to be positively correlated. Again, similarly to Eq. <ref>, the expected evolvability is a robust alternative given by: η_ϕ^*(θ) = θ', θ”∼ℱ(θ', θ”∈𝒩(θ))𝔼|ϕ(θ')-ϕ(θ”)| and its estimator: η_ϕ^*(θ) = 1/𝐜𝐚𝐫𝐝(𝒮(θ)) 𝐜𝐚𝐫𝐝(𝒮(θ)-1)∑_θ', θ”∈𝒮(θ) θ'≠θ” |ϕ(θ')-ϕ(θ”)|. Under a one-hot representation of BVs, evolvability can be estimated relative to the sample size, as: η_ϕ(θ; 𝐜𝐚𝐫𝐝(𝒮(θ))) = d/n, where n is the total number of niches, and d the number of niches occupied by solutions from the sample 𝒮(θ). Although our focus is on individual-level evolvability, we also present the following expression used to estimate the expected population evolvability: ρ_ϕ^*(i) = 1/n_i+1 (n_i+1-1)∑_θ', θ”∈𝒢_i+1 θ'≠θ” |ϕ(θ')-ϕ(θ”)|, where ρ_ϕ^*(i) denotes the expected population evolvability of the i-th generation, 𝒢_i the set of all members of the i-th evolutionary generation, and n_i its cardinality. Finally, it is worth noting that evolvability is generally computationally expensive, and in some cases, even prohibitively so, since it requires evaluating the fitness function (which can itself be costly) on the whole 𝒩(θ) or a sufficiently large sample of it. §.§ Local "optima" Consider the ratio of evolvability and local sensitivity: r(θ) = η_ϕ(θ)/LS_ϕ(θ)+ϵ, where protected division is employed (for some infinitesimal ϵ>0). It can be shown that 0 ≤ r(θ) < 2. Also, consider its probabilistic variant: r^*(θ) = η_ϕ^*(θ)/LS_ϕ^*(θ)+ϵ. 
These local metrics provide insight into the relationship between θ and its neighbors. Solutions with relatively small r^*(θ) values are analogous to local optima in classical fitness landscapes - due to their large (in relative terms) expected local sensitivity they are likely to be selected in NS-like algorithms, despite their low expected evolvability. Hence, we suspect that they could represent points of deception and that their presence is expected to ramp up the difficulty of the respective behavior landscape. We refer to such solutions as "dissimila", in analogy to "optima". On the contrary, solutions with relatively large r(θ)^* values represent highly evolvable points in the behavior landscape that might be difficult to find because of their relatively low expected local sensitivity, i.e. their relative similarity to their neighbors. §.§ Walks A walk <cit.> 𝒲 on a behavior landscape is a sequence (time series) of behavior vectors (𝐛_0, …, 𝐛_𝐢, …, 𝐛_𝐓), associated with a sequence of genotypes, (θ_0, …, θ_i, …, θ_T), via the relation b_i = ϕ(θ_i). The underlying sequence of genotypes is generated by some process 𝒜 (e.g. an evolutionary algorithm), i.e., 𝒜 (θ_i)=θ_i+1. This ensures that actual segments of the search space traversed by the respective (novelty search) algorithm are considered <cit.>. Solution θ_i+1 is called the child of θ_i, and conversely, θ_i is referred to as the parent of θ_i+1. Unless stated otherwise, real-valued (non-categorical) BVs are assumed. Also, all walks are assumed to be of finite length T+1. We highlight that walks are particularly well-suited to evolvability analysis, as they involve a single parent generating a large number of offspring (neighbors) during each step. §.§.§ Highly selective walk In a highly selective walk, the best (according to some criterion) solution from 𝒮(θ_i) is chosen, i.e.: 𝒜 (θ_i) = θ_i+1 = θ'∈𝒮(θ_i)argmax𝒢 (ϕ(θ')), where 𝒢 is a (novelty-based) fitness metric and 𝒮(θ_i) ⊆𝒩(θ_i) is defined as earlier. This type of walk represents situations in which there is a large amount of evolutionary pressure. If niches are considered, an alternative definition is possible, given by: 𝒜 (θ_i) = θ_i+1 = θ' ∈𝒮(θ_i) N(ϕ(θ')) ≠ N(ϕ(θ_i))argmax𝒢 (ϕ(θ')), where N(ϕ(θ)) denotes the niche to which ϕ(θ) belongs. Hence, among solutions that fall into a different niche than θ_i, the best one is accepted. The corresponding walk through niches is given by: (N(ϕ(θ_0)), …, N(ϕ(θ_i)), …, N(ϕ(θ_T))). §.§.§ Adaptive walk In an adaptive walk, any solution "better" than θ_i is accepted. It is an intermediate type of walk between a highly selective and a random walk. Consider first the subset of children of θ_i, given by C_a = {θ' ∈𝒮(θ_i) |𝒢 (ϕ(θ') ≥ a } for some a which is a proxy for evolutionary pressure. Depending on the model, a can be constant or change dynamically in time. Now simply: θ_i+1∼Unif(C_a), where Unif denotes the uniform distribution. Hence, unlike the case with highly selective walks, a random solution that surpasses a certain threshold is selected instead. If no such solution exists, set θ_i+1=θ_i and sample from 𝒮(θ_i+1) again. Let us also consider adaptive walks when using BVs expressed as one-hot vectors. In that case, consider the subset C_a = {θ' ∈𝒮(θ_i) | N(ϕ(θ')) ≠ N(ϕ(θ_i)) } and sample uniformly from it to obtain θ_i+1. Similarly as before, if C_a is an empty set, let θ_i+1=θ_i. We emphasize that a highly selective walk can also be considered adaptive, with a selected in such a way that 𝐜𝐚𝐫𝐝(C_a)=1. 
Finally, we raise attention to the connections between biological evolution by natural selection and the concept of adaptive walks <cit.>. §.§.§ Random walk In a random walk, any neighbor is accepted: θ_i+1∼Unif(𝒮(θ_i)). It describes the conditions under which there is no evolutionary pressure altogether. §.§.§ Multi-state walk In a multi-state walk, at each step i, a set of genotypes is selected, resulting in a matrix 𝒵_i filled with BVs. For the sake of simplicity, in what follows the focus is solely on single-state walks. By doing this we abstract the key aspects of divergent search, stripping it to its barest minimum and focusing on the evolvability of individuals. §.§ Diversity metrics There are various options available when selecting the diversity metric 𝒢. Here we list and propose several: §.§.§ K-nearest neighbors distance to other individuals in the population and the archive This case corresponds to the vanilla novelty search algorithm <cit.>: 𝒢 (ϕ(θ')) = ∑_1 ≤ k ≤ K |μ_k(θ')-ϕ(θ')| where μ_k(θ') is the phenotype of the k-th nearest (in the BS) individual to θ', with μ_k(θ') = ϕ(θ”), θ”∈𝒮(θ_i) ∪𝒳. 𝒳 is the archive of previous novel (or randomly selected) individuals. The Euclidean distance is typically used. §.§.§ Distance to ancestors If the distance to the parent is used as a metric, we have: 𝒢 (ϕ(θ')) = |ϕ(θ_i)-ϕ(θ')|. Observe the similarity to Eq. <ref>; if 𝒮(θ_i) = 𝒩(θ_i), the distance between consecutive BVs in the respective walk is maximum and equal to the local sensitivity at θ'. Alternatively, it is possible to consider not only θ_i but also the entire chain of ancestors, which then acts as a simplified archive. More formally: 𝒢 (ϕ(θ')) = ∑_0 ≤ j ≤ i |ϕ(θ_j)-ϕ(θ')|. The Markovian property no longer holds with Eq. <ref>. §.§.§ Gaussian kernel density estimation-based metric. Finally, it is possible to set: 𝒢 (ϕ(θ')) = 1/𝐜𝐚𝐫𝐝(𝒴)∑_θ”∈𝒴 -K_H( ϕ(θ')-ϕ(θ”)) γ(θ”), where 𝒴 = (𝒮(θ_i) ∪𝒳) ∖{θ' }, K_𝐇 is the Gaussian kernel, H is the bandwidth[Also known as the smoothing matrix.] matrix and the rest of the notation is the same as before. The Gaussian kernel is given by: K_𝐇(𝐱 )=(2π )^-d/2|𝐇| ^-1/2e^-1/2𝐱^𝐓𝐇^-1𝐱, with d denoting the number of dimensions. The idea is to select solutions that are located remotely from the areas already explored. However, since visiting previously seen regions (i.e., backtracking) can even be beneficial <cit.>, we introduce a discount function γ that ensures that recently explored regions are avoided more strongly than those visited long ago, as well as use an archive with a finite size. In the context of biological evolution, this approach could resemble a group of individuals searching for food while avoiding areas depleted of resources by previous inhabitants, in an environment that slowly replenishes resources over time. Note that by using such a metric, the archive is implicitly contained in the shape of the kernel density estimate, which plays the role of a memory container. Also, observe that the choice of the matrix H is a subtle one, as it determines the amount of smoothing applied to the density estimate, with small (large) bandwidths resulting in more peaked (smoother) estimates. § EXPERIMENTAL SETUP Experiments on an evolutionary robotics task are conducted to examine the effect of evolutionary pressure and the use of different diversity metrics on evolvability. Although the used environment in its original form includes an explicit goal, e.g. 
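A minimal NumPy version of this k-nearest-neighbor novelty score is sketched below; the population and archive are represented simply as arrays of behavior vectors (with the evaluated individual itself excluded), Euclidean distance is used, and the array shapes and K are illustrative.

```python
import numpy as np

def novelty_knn(b, others, K=15):
    """Vanilla novelty score: summed Euclidean distance from behavior b to its
    K nearest neighbors among the current population and the archive."""
    d = np.linalg.norm(others - b, axis=1)
    K = min(K, len(d))
    return np.sort(d)[:K].sum()

rng = np.random.default_rng(0)
population_bvs = rng.normal(size=(50, 2))    # illustrative 2D behavior vectors
archive_bvs = rng.normal(size=(200, 2))
others = np.vstack([population_bvs, archive_bvs])
print(novelty_knn(np.array([0.5, -0.3]), others))
```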
moving a block to a target location, our study focuses solely on the diversity of learned behaviors and disregards any other goals. The experiments are implemented using <cit.>, an open-source Python library for reinforcement learning, and its associated collection of environments, . Parallel processing is achieved with , a toolkit for scaling Python applications to clusters. The environment used for the experiments is illustrated in Fig. <ref>. §.§ Task description The experiments are performed on a toy episodic reinforcement learning task provided within the Fetch "Pick and Place" environment <cit.>. The environment is based on the Fetch Mobile Manipulator, a seven-degree-of-freedom robotic arm equipped with a two-fingered parallel gripper as its end-effector, which can be either closed or open. The robot is controlled by moving the gripper in a Cartesian space. In the original task, the goal is to use the manipulator to pick up and move a block from an initial position to a target position. The block is initially located on a table, while the target position can either be on the table or in mid-air. In the task formulation with rewards, a positive reward is given upon completing the goal (if using a sparse reward formulation) or moving the block closer (if using a dense reward formulation) to the target position. The manipulator is controlled at a frequency of f = 25 and each simulation time-step has a duration of dt = 0.002. The observation space consists of 25 variables, including the current positions, displacements, rotations, and velocities of various parts of the end-effector and the block. Kinematic information is provided through Mujoco <cit.> bodies attached to both the block and the end-effector. The action space consists of four variables: Cartesian displacements dx, dy, and dz of the end-effector, and one variable controlling the closing and opening of the gripper. The episode is truncated when its duration reaches the maximum number of steps, which is set to 50 by default. The determinism of the environment is ensured by fixing the initial conditions, more specifically, the initial and the target position of the block. §.§ Behavior space and descriptors In the considered environment, the BS is two-dimensional and the BVs are given by 𝐛 = (x_T, y_T) where x_T (y_T) denotes the final position of the block in the x (y) axis direction. §.§ Neural network controllers A fully-connected feed-forward neural network architecture was used for the experiments. The first layer consists of 25 input neurons that encode the state (observation) space features. It is followed by two hidden layers, each with 32 neurons. The output layer contains only 4 neurons (the same as the cardinality of the action space) which are responsible for controlling the robotic manipulator. The rectified linear unit () activation function was used in all places, with the exception of the output layer, where the hyperbolic tangent () activation function was employed to ensure that the actions produced by the network are within the [-1,1] range, as required by the used reinforcement learning environment. The choice of the neural network architecture was guided partly by prior works that showed the effectiveness of relatively shallow designs (with only 2 hidden layers) on various reinforcement learning tasks with low-dimensional state spaces <cit.>, and partly by our initial experimentation. 
The resulting policies are deterministic in the sense that states are mapped to specific actions as opposed to probability distributions over the action space. The Xavier normal[Weights are sampled from a normal distribution with mean 0 and variance inversely proportional to the sum of the number of inputs and outputs to the layer.] initialization was used for the weights, while the biases were initialized to a constant value of 0. The topology of the neural network remains constant, and only the weights are updated during the search. §.§ Variation operator The choice of mutation operator is pivotal as it defines the set of neighbors and the probability distribution over it, thereby affecting the search through the behavior landscape. We use the standard Cauchy distribution (also known as the Student's t-distribution with a single degree of freedom) to perturb the weights of the employed neural network. For simplicity, the crossover is not utilized in our experiments. This choice was partly inspired by recent work in which an evolutionary algorithm based on the use of Cauchy deviates (simple Cauchy mutations) was shown to have favorable convergence properties <cit.>, although this was done outside of the context of neuroevolution. It should be noted that, due to the distribution's leptokurtosis, the use of Cauchy-based mutations tends to frequently cause small changes to the respective genotype, and occasionally large ones. §.§ Hyperparameters The size of the archive is limited, and new solutions are randomly added to it with a probability of P = 10 %. The values for the remaining hyperparameters can be found in Table <ref> and in the captions of the respective figures. The selection of certain hyperparameters, such as the number of walks (run) or the walk length, was made in such a way as to ensure that the simulations were manageable within the budget of our available computational resources. Additionally, the parameter a (required for generating adaptive walks) is set dynamically such that in each generation, the top x% of individuals pass the threshold. § RESULTS §.§ Evolutionary pressure We start by analyzing the effect of evolutionary pressure on evolvability. Fig. <ref> shows the mean evolvability under a fixed number of runs as a function of the walk step number, for walks with different levels of evolutionary pressure. The number of walk steps is set to 50. Since highly selective walks instigate evolvability, adaptive walks employing novelty as a criterion are also expected to do this, albeit to a smaller degree. The parameter (given in parentheses) specifies the fraction of the best children that comprise the set from which the next parent is randomly picked. Thus, higher values indicate lower degrees of evolutionary pressure. Extremes are represented by the highly selective walk, in which the best single child is always selected, and the random walk (in essence a random search in the genotype space) in which any child is chosen. Observe that, after starting from approximately the same mean evolvability value, the trajectories spread out in a manner that suggests a positive correlation between evolvability and the level of evolutionary pressure. We use evolvabilities at the end of the walk (step 50) and the corresponding levels of evolutionary pressure to determine the Spearman rank-order correlation coefficient r, which is a measure of the monotonicity of the relationship between two variables. 
The obtained correlation coefficient equals r = 0.7106 (p < 10^-46), demonstrating a strong positive correlation. Also note that, while the random walk trajectory leads to a clear degradation of evolvability over time, even modest amounts of evolutionary pressure are sufficient to preserve (or even enhance) it for all of the considered trajectories. In summary, the obtained results are in accordance with recent findings on novelty search promoting evolvability <cit.> and also offer additional perspective on the relationship between evolvability and evolutionary pressure. We also leave the following question for further work: is it possible to devise neural network initialization schemes that lead to high levels of evolvability at the very beginning, introducing one-shot evolvability and facilitating the ensuing search process? Such schemes could lead to even faster evolvability gains. §.§ Different metrics In this subsection, we analyze the effect of different diversity metrics on evolvability. Fig. <ref> shows the results on the Fetch "Pick and Place" environment, with the number of walk steps set to 50. We emphasize that the distance-to-ancestors metric takes into account the whole chain of ancestors, up to the very beginning of the walk. As seen in the figure, the metric based on Gaussian kernel density estimation appears to dominate the alternatives, while the k-nearest neighbors (KNN) variant without an archive yields the lowest evolvability values. To corroborate this observation statistically, consider the sets of evolvability values at the final walk step associated with each of the evaluated metrics. We turn to the non-parametric Kruskal-Wallis H-test, which does not require the assumption of normality, while its other assumptions (independence of samples, random sampling, ordinal or continuous dependent variable) are met. It is used to test the null hypothesis that the population medians of all of the considered sets are equal. The obtained p-value equals p = 0.01943, rejecting the null hypothesis at the α = 0.05 significance level. Conover's test is used as a post-hoc test to make pairwise comparisons, with the Holm method used to adjust the p-values. The p-values comparing the Gaussian kernel density estimates to (a) the KNN method, (b) KNN without an archive, and (c) the distance-to-ancestors metric are, in order: 0.053031, 0.052218, and 0.040418. All the other p-values are equal to 1. Hence, at the α = 0.05 significance level, we cannot conclude that there is a significant difference between the mean ranks of these groups, except for the Gaussian kernel density estimate/distance-to-ancestors pair. Nevertheless, the results do indicate that the Gaussian kernel density estimate-based metric provides a competitive, if not superior, alternative that could be added to the arsenal of existing metrics. An important difference between the Gaussian kernel density estimate-based metric and the one based on KNN with Euclidean distance should be highlighted. Consider, for example, points A and B in a BS and the line segment connecting them. KNN with Euclidean distance (assuming k = 2) would be indifferent between the points on this segment, which represents a contour line in the corresponding contour graph. In contrast, a Gaussian kernel density estimate-based metric would favor the mid-point. While the results are so far limited to a single reinforcement learning environment, further work should test generalization across multiple environments.
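To make the contrast concrete, the following is a minimal sketch of the two kinds of metrics (our own illustration, assuming SciPy and scikit-learn; archive is an array of behavior vectors, and the function names are ours):

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neighbors import NearestNeighbors

def novelty_knn(bv: np.ndarray, archive: np.ndarray, k: int = 15) -> float:
    """Mean Euclidean distance from a behavior vector to its k nearest archive members."""
    nbrs = NearestNeighbors(n_neighbors=min(k, len(archive))).fit(archive)
    dists, _ = nbrs.kneighbors(bv.reshape(1, -1))
    return float(dists.mean())

def novelty_kde(bv: np.ndarray, archive: np.ndarray) -> float:
    """Negative log-density of a behavior vector under a Gaussian KDE of the archive;
    low-density (sparse) regions of the behavior space score as more novel."""
    kde = gaussian_kde(archive.T)  # gaussian_kde expects shape (n_dims, n_points)
    density = kde(bv.reshape(-1, 1))[0]
    return float(-np.log(density + 1e-12))
```

The two functions differ only in how they summarize the archive: distances to a few nearest neighbors versus an estimate of the local density around the candidate behavior.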
Finally, keep in mind that highly selective walks represent a very simplified version of novelty search, with only a single parent in each generation. Consequently, it is possible and even likely that the presented results underestimate the evolvability levels that would appear in a fully-fledged novelty search algorithm. § MORE ON EVOLVABILITY §.§ Towards better evolvability definitions It has been observed, already in the original MAP-Elites paper <cit.>, that most solutions are positioned in the BS relatively close to their parents <cit.>. Doncieux et al. <cit.> measure evolvability by estimating reachability (with a simple ratio) and uniformity (using the Jensen-Shannon distance) separately. The authors emphasize that separating the two aspects facilitates interpretation and prevents informational overlap, as the same Jensen-Shannon distance (JSD) can result from sets of points with varying coverage. While valuable, we argue that this separation alone does not address the central problem, because two sets of points with identical coverage and JSD can vary greatly in terms of their actual evolvability. More specifically, the ensuing definition is highly myopic in the sense that it does not take into account the evolvability of the offspring themselves. To illustrate this, consider, for example, solutions A and B, each with two children, belonging to niches 1 and 2, and 3 and 4, respectively. A and B clearly have the same coverage and JSD. However, if solutions from niches 3 and 4 are more likely to generate offspring belonging to other niches than solutions from niches 1 and 2, it could be argued that B should be considered more evolvable. Furthermore, the evolvability of the visited niches should also be accounted for, as some of them might serve as "wormholes", generating offspring in remote or hardly reachable regions of the BS. Therefore, evolvability metrics should consider not only coverage and uniformity but also the structure of the behavior landscape itself, i.e., the mutual reachability of different niches (regions), as well as the structure of the genotype-phenotype mapping. With this goal in mind, we suggest the following criterion, which presents a generalization applicable when the BS is split into niches: Evolvability in discretized BSs: A solution θ' is said to be more l-evolvable than solution θ'' if and only if: 𝔼(𝐜𝐚𝐫𝐝(𝒟_θ'^l, U)) > 𝔼(𝐜𝐚𝐫𝐝(𝒟_θ''^l, U)), where 𝐜𝐚𝐫𝐝(𝒟_θ^l, U) denotes the cardinality of the set of all niches visited at least once by descendants of θ up to their l-th generation, assuming a total of U descendants. The parameter l captures the level of long-sightedness, with l = 1 corresponding to the simple coverage of the children, as introduced in <cit.>. In non-discretized BSs, the number of visited niches could be replaced, for example, by the hypervolume of the convex hull spanned by the descendants.
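In the continuous case, such a replacement is straightforward to compute (a minimal sketch, assuming SciPy; the function name is ours, and degenerate configurations such as collinear descendants are not handled):

```python
import numpy as np
from scipy.spatial import ConvexHull

def descendant_hull_volume(descendant_bvs: np.ndarray) -> float:
    """Hypervolume of the convex hull spanned by descendants' behavior vectors.

    descendant_bvs has shape (n_descendants, bs_dim); in a 2-D behavior space,
    ConvexHull.volume is the enclosed area.
    """
    n, dim = descendant_bvs.shape
    if n <= dim:
        return 0.0  # too few points to span a full-dimensional hull
    return float(ConvexHull(descendant_bvs).volume)
```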
In what follows, we lay out a procedure for estimating the evolvability of a solution θ under such a definition: (1) use a grid to split the BS into niches; (2) generate the offspring of θ and calculate their discrete distribution D over the set of all niches; (3) use MAP-Elites <cit.> or another NS-like algorithm to estimate the probability matrix T = (t_i,j), where t_i,j is the probability that a solution in niche i generates a child in niche j; (4) construct a Markov chain with T as its transition matrix, the set of niches as its state space, and D as its initial distribution; (5) perform U random walks of length l+1 on the constructed Markov chain and obtain, cumulatively over all U walks, the discrete distribution D'' of the visited niches over the set of all niches; (6) apply an evolvability metric to D'', i.e. coverage (in accordance with Expression <ref>); (7) repeat steps (5) and (6) multiple times, and use the mean to approximate the evolvability of θ. Additionally, it is possible to define the evolvability of niche i by using the appropriate initial distribution (i.e. a one-hot vector) as D. We remark on several aspects. Firstly, selecting the correct number of niches (i.e. the grid size) in the first step is a delicate matter, as having too few niches could result in a matrix T whose non-diagonal entries are mostly close to zero, while having too many can lead to a large number of unvisited niches. Secondly, note the similarity with the curiosity score <cit.> used in quality-diversity approaches. The shortcoming of the method (in the presented form) lies in the fact that it can only be used a posteriori, i.e. after the entire BS has already been explored. Despite this limitation, it provides a useful tool for better exploring the structure of the behavior landscape, primarily through the estimation of the transition matrix T. Moreover, by evaluating T in an online fashion, the method could be integrated with others that search directly for evolvability <cit.>. § CONCLUSION AND FURTHER WORK This paper explores evolvability in neuroevolutionary divergent search from the perspective of a behavior landscape analysis. In particular, the results confirm the positive correlation between evolvability and the amount of evolutionary pressure present in the environment. Moreover, they indicate that Gaussian kernel density estimate-based metrics should be added to the arsenal of existing diversity metrics in the context of evolvability promotion. Finally, a new definition of evolvability is proposed, one that also considers the offspring's evolvability and takes into account the structure of the behavior landscape itself, along with a method for its estimation. As part of further work, generalization to a wider range of environments should be studied, along with the use of alternative operators and behavior space descriptors. The relationship between the newly proposed definition of evolvability and the curiosity score should also be experimentally studied. Another possible path forward is to investigate the links between the evolvability of neural network controllers and their sensitivity to initial conditions in stochastic (noisy) reinforcement learning environments, i.e., a form of Lyapunov stability <cit.>. More specifically, the phenotypic sensitivity to environmental noise (with a fixed genotype) could be used as a behavior (meta-)descriptor. Further ideas build on direct evolvability search <cit.> and take it in new directions.
One idea would be to take it a step further and consider direct meta-evolvability search: instead of optimizing for evolvability, optimize for the potential for evolvability. Another possible avenue is to encode the parameters of the mutation operator's distribution into the genotype, thereby also performing a search through the space of heuristics. More generally, the use of hyperheuristics such as genetic programming might help uncover heuristics with the capacity to further enhance evolvability. Further studies could also test the use of Gaussian kernel density estimate-based distance metrics in the context of population-level evolvability <cit.>. Finally, the effects of optimization with conflicting objectives <cit.> on evolvability could also be considered.
http://arxiv.org/abs/2306.10641v2
20230618214037
On the critical points of semi-stable solutions on convex domains of Riemannian surfaces
[ "Massimo Grossi", "Luigi Provenzano" ]
math.DG
[ "math.DG", "math.AP", "math.SP", "58J32, 58J61, 58J20, 58J05" ]
http://arxiv.org/abs/2306.02287v1
20230604073337
It Takes a Village: A Case for Including Extended Family Members in the Joint Oversight of Family-based Privacy and Security for Mobile Smartphones
[ "Mamtaj Akter", "Leena Alghamdi", "Jess Kropczynski", "Heather Lipford", "Pamela Wisniewski" ]
cs.HC
[ "cs.HC" ]
It Takes a Village: A Case for Including Extended Family Members in the Joint Oversight of Family-based Privacy and Security for Mobile Smartphones. Author affiliations: Vanderbilt University, Nashville, Tennessee, USA; University of Central Florida, Orlando, Florida, USA; University of Cincinnati, Cincinnati, Ohio, USA; University of North Carolina at Charlotte, Charlotte, North Carolina, USA; Vanderbilt University, Nashville, Tennessee, USA. We conducted a user study with 19 parent-teen dyads to understand the perceived benefits and drawbacks of using a mobile app that allows them to co-manage mobile privacy, safety, and security within their families. While the primary goal of the study was to understand the use case as it pertained to parents and teens, an emerging finding from our study was that participants found value in extending app use to other family members (siblings, cousins, and grandparents). Participants felt that it would help bring the necessary expertise into their immediate family network and help protect the older adults and children of the family from privacy and security risks. However, participants expressed that co-monitoring by extended family members might cause tensions in their families, creating interpersonal conflicts. To alleviate these concerns, participants suggested more control over the privacy features to facilitate sharing their installed apps with only trusted family members. § INTRODUCTION A Pew Research study reported that 85% of U.S. citizens own smartphones <cit.>, and 77% of them have downloaded and installed different third-party mobile apps on their devices <cit.>. Mobile apps collect personal information (e.g., contact data, emails, photos, location, calendar events, and even browser history) from users when granted permission to do so <cit.>, creating digital privacy threats when this personal information is misused <cit.>. Unfortunately, the majority of U.S. adults lack knowledge regarding how to protect their digital privacy and security, which increases the potential for privacy and security violations <cit.>. Due to the lack of mobile privacy knowledge at an individual level, networked privacy researchers (e.g., <cit.>) have suggested adopting more collaborative and community-based approaches for managing digital privacy and security, where trusted community members (e.g., family, friends, co-workers) can work together to help keep one another safe online. Interestingly, some research has even shown how adult family members often rely on younger generations of their family (e.g., their teens) for technology support, as youth may be tech-savvier than their parents <cit.>.
For instance, a recent study <cit.> examined family online safety and privacy management by asking parents and teens to evaluate a collaborative mobile app, in order to understand whether parents and teens can help one another manage their mobile privacy and online safety. Despite the hierarchical tensions and asymmetry in privacy and security knowledge between parents and teens, they found value in such bi-directional joint family oversight for keeping both parties safer online. Meanwhile, our current work acknowledges that a family often consists of other relationships beyond just parents and teens; therefore, it is unclear whether this collaborative approach could be generalized to the whole family by going beyond the parent-teen dyadic relationship. As such, we build upon prior work with parents and teens and extend it by exploring whether, how, and why joint family oversight that includes extended family members may work given a similar use case. Therefore, we first developed a mobile app for Community Oversight of Privacy and Security ("CO-oPS"). Second, we had 19 parent-teen pairs install and evaluate the app in a lab-based setting. In addition to discussing the dynamics of using the app within the context of the parent-teen relationship, participants also expressed the value of including additional family members within the app for broader oversight within an extended family network. As such, we were able to answer the following high-level research questions: * RQ1: Whom else could participants see including in their trusted networks when sharing and receiving advice on mobile privacy and security? * RQ2: What would be the potential benefits and drawbacks of including extended family members in a joint family approach? * RQ3: What are the important design considerations for an app for joint family oversight that includes extended family members? Overall, parents and teens felt that they would use the CO-oPS app with their close relatives, particularly grandparents, siblings, and cousins (RQ1). Parents and teens both saw benefits in co-managing their mobile privacy and online safety with their extended families, as they thought it would help protect the more vulnerable family members (older adults and children) and bring more expert advice from tech-savvier relatives. However, having extended family included in the CO-oPS family network might also cause unwanted tension and interpersonal issues for parents and teens, e.g., blaming parents for teens' mobile online behavior, restricting teens' autonomy, unwanted questioning, and family arguments (RQ2). To alleviate these concerns, they suggested implementing more controls within the privacy feature so that they can share their installed apps with only specified family members (RQ3). Our research makes important contributions to the networked privacy research community by examining whether co-managing mobile privacy, online safety, and security with other family members, beyond just the parent-teen relationship, could potentially help keep entire families safe online. § BACKGROUND We place our study within two main streams of research: 1) mobile privacy and security management at the individual level, and 2) collaborative approaches to mobile privacy, safety, and security.
§.§ Mobile Privacy and Security Management at the Individual Level The proliferation of smartphones and mobile applications <cit.> has left mobile phone users over-exposed to digital privacy, safety, and security threats <cit.>, as mobile apps gain access to users' sensitive and personal information via different permissions <cit.>. Ironically, mobile app users often do not fully understand what these permissions do and what they are used for <cit.>. Users also lack an understanding of how their personal data are being used by these third parties <cit.>. Worse, third-party mobile apps may even gain unauthorized access to users' information <cit.>. For example, Calciati et al. <cit.> found that when users grant a single permission, an app can silently obtain further permissions. They further revealed that many third-party apps leaked and misused users' sensitive data, such as the user's precise location, list of contacts, history of phone calls, and emails, through permissions that users never explicitly granted. Despite these various privacy threats, the majority of US adults lack significant knowledge regarding digital privacy and security and therefore find it difficult to manage their own privacy and security <cit.>. In the next section, we synthesize the relevant literature that suggests adopting collaborative approaches to help resolve individuals' challenges in privacy and security management. §.§ Collaborative Approaches for Mobile Privacy, Safety, and Security Several networked privacy studies have demonstrated that individuals often seek help and informal advice from their trusted community (e.g., families, friends, coworkers) regarding digital privacy and security <cit.>. Users are also influenced by others' privacy and security practices when making changes to their own privacy behaviors <cit.>. However, Kropczynski et al. <cit.> have reported that in order to get effective support for digital privacy and security management, one's community must include some technical expertise. Meanwhile, studies have indicated that teens often provide informal tech support to their family members <cit.>. In a recent study, Akter et al. <cit.> examined a collaborative family oversight approach that allowed teens and their parents to help one another manage their mobile online safety and privacy. Their lab-based study revealed that parents and teens overall valued such collaborative approaches for managing their online safety and mobile privacy. However, there were some tensions between parents and teens because of differences in hierarchical power and tech-savviness. This work motivated us to further investigate whether such joint family oversight mechanisms would be applicable to other relationships in the family. Building upon these prior studies, we explore whether an app created for Community Oversight of Privacy and Security ("CO-oPS") <cit.> could help immediate and extended families work together to help one another manage their mobile online safety, privacy, and security. § METHODS §.§ Design of the CO-oPS App We developed the CO-oPS app <cit.> based on the model of community oversight for privacy and security initially proposed by Chouhan et al. <cit.>.
This model suggests mechanisms for trusted communities to review one another's mobile privacy and security practices (apps installed and permissions granted) and exchange guidance. The CO-oPS app includes three key aspects: 1) discovery of installed apps, 2) permissions granted/denied, and 3) people in the family. The Discovery feature (Figure <ref>a) allows users to review the list of apps installed on their own phone, with the ability to hide some apps from their family members. The Permissions feature (Figure <ref>b) allows users to review the permissions granted or denied to each of the installed apps. On the People screen (Figure <ref>c), users can view the list of their family members, with the ability to message them directly and explore their installed apps. Here, to help parents and teens imagine using this app with their other family members, we added two fictional family members: the teen's uncle and aunt. Therefore, during the study, each participant viewed three people in this family list. §.§ CO-oPS Parent-Teen User Study Our study consisted of two distinct phases: 1) a guided think-aloud exploration of the CO-oPS app with probing questions, and 2) a semi-structured interview with parents and teens, with one component where we asked them to reflect on whether using the app with other family members (including extended family) would be useful for them. Each study session started with showing participants a video demo of the CO-oPS app to explain its core functionalities. The participants were then asked to install the app on their phones and use its different features, as shown in Figure <ref>. We asked probing questions to learn about participants' reactions to the fictional family members presented on the CO-oPS People page, as well as their suggestions as to who else they would prefer to have in their CO-oPS family network. Presenting these two extended family members prompted participants to think about both the potential benefits of allowing their extended family to participate in joint family oversight and their concerns about having these family members review their apps and permissions. Considering both benefits and drawbacks, in turn, helped them brainstorm design suggestions that could support the benefits and alleviate the drawbacks of having extended family included in the CO-oPS family network. The study sessions took place on Zoom and were audio and video recorded. We then transcribed the recordings and conducted a grounded thematic analysis using Braun & Clarke's <cit.> six-phase framework. Overall, we recruited a diverse sample of 19 parent-teen pairs, of which 42% of the families were Asian, 32% Caucasian, 21% Hispanic/Latino, and 5% African American. 53% of the teens self-reported as female and 47% as male, whereas 58% of the parents were female and 42% were male. The teens' ages ranged from 13 to 17, with a mean of 15.4 and a standard deviation of 1.4. The parents' ages ranged from 40 to 55, with a mean of 47.7 and a standard deviation of 4.76. § RESULTS In this section, we present the themes that emerged from our qualitative analysis regarding the inclusion of extended family in CO-oPS. Participants' quotations are identified by their IDs (teens: T1, T2, ..., T19; parents: P1, P2, ..., P19), age, and gender.
§.§ People to whom the joint family approach could be extended (RQ1) Most of the parents and teens felt that they would extend the joint family oversight approach to their grandparents, siblings, and cousins. More than two-thirds of the parents (68%, N=13) and half of the teens (53%, N=10) said they would be interested in having their own parents (the teens' grandparents) in their CO-oPS family network, since they are the closest ones in their family who provide them support, care, and acceptance. "This would be very useful if you include your grandparents. They'd probably freak out if they knew all the apps that are sharing their location...they can then rely on us to help them with the permissions... It's not just they are family, they care about us, they love our kids." – P9, Female, 51 years old Teens showed more flexibility in terms of including other family members, such as siblings and cousins. More than half of the teens (58%, N=11) and a few parents (16%, N=3) said they would like to include their siblings and cousins (the parents' other children, nephews, and nieces) in their CO-oPS family network. We noticed that teens showed more enthusiasm for co-managing with their siblings and cousins because they are of similar age and get along with them easily. A good number of parents (42%, N=8) also said they would like to include their own siblings (the teens' uncles and aunts), as they are close to their immediate family. Interestingly, around one-third of the parents (32%, N=6) mentioned they would monitor their significant other's mobile apps and permissions, as their partners were not very aware of the importance of using safe apps and granting safe permissions. §.§ Potential benefits and drawbacks of including extended family (RQ2) Parents and teens, in general, envisioned benefits in joint family oversight with their other family members but also saw some potential concerns about including extended family who are not as close to them. Most parents and teens thought such joint family oversight would be beneficial for the more vulnerable members of their families. More than half of the participants (63%, N=12 parents and 47%, N=9 teens) mentioned that the CO-oPS app would help older adults, as they often are less aware of digital privacy threats and also lack the knowledge to monitor or manage app permissions. Both parents and teens thought that, through the CO-oPS app, they would be able to help older adult family members by letting them know about their unsafe apps and permissions. Next, more than half of the teens (58%, N=11) felt it would benefit younger children, as they also tend to be less aware of mobile privacy and security issues, similar to older adults. "A lot of older people like my parents, or like the old people in general, they do not get much time, they are not very tech-savvy either... If my parents, they're included in our app like that,...it would help them.” – P8, Female, 46 years old Parents and teens also saw value in including extended families in the joint family oversight as it would bring necessary expertise into their family network. Almost half of the teens (47%, N=9) and a couple of parents (11%, N=2) mentioned that they would be able to get more expert advice or guidance from the tech-savvy people of the extended family. Here, teens mostly mentioned their tech-savvy older cousins and siblings. Similarly, two parents (N=2) also mentioned their tech-savvy siblings who have better knowledge regarding mobile privacy and security.
Apart from this, around one-fourth of the participants (21%, N=4 parents and 26%, N=5 teens) said that having their extended family involved would further help them stay safe online and secure their personal information, as more people would be able to warn them about unsafe apps. To this end, they often referred to some popular but controversial social media and gaming apps, e.g., TikTok, Snapchat, Discord, and Instagram, that they had come to know about from their extended family members. Hence, having more family members in the CO-oPS family network would help them be more aware of different mobile apps and their privacy issues. "This would be more about educating the mind and creating awareness because they're [cousins] gonna reevaluate your apps. So we might as well all learn from each other.” - T14, Female, 16 years old Parents saw additional benefits in including extended family, as they would get more help in monitoring their children's mobile online safety. Around half of the parents (47%, N=9) felt that involving extended family, especially their siblings (teens' uncles and aunts), in the CO-oPS family network would enable them to share the parental responsibility of monitoring teens' app usage. These parents thought that their siblings would care about their teens as much as they did, and so they would have some peace of mind knowing there would be other people to monitor their children's mobile online safety. Additionally, one-third of the parents (32%, N=6) said their children would be more likely to listen to advice when they received guidance and feedback from more people. Some parents (26%, N=5) also thought using CO-oPS with extended family would strengthen the relationship between their children and other family members who live out of town or state, as they would get an opportunity to work together on managing their family's online safety and privacy. "Our extended family is pretty concerned about this stuff. So, this is a lot easier to kind of check on our kids, you know to make sure they are not using anything dangerous, and I would know that there are others to tell them about if any safety concerns they might have.” - P2, Female, 50 years old Participants felt that including their extended family in the joint family oversight would bring more tensions. Almost half of the parents (47%, N=9) and one-fourth of the teens (26%, N=5) expressed that involving the extended family in co-managing their mobile privacy and security might become more stressful for parents. They felt that the extended family might blame parents for teens' unsafe online behaviors, especially if they found inappropriate mobile apps on teens' phones. Interestingly, many teens (42%, N=8) said such co-managing of mobile privacy and security would cause more harm for them, because it might provoke parents into taking away their autonomy in using social media, the internet, or even mobile phones. They often brought up the restrictive and authoritarian parenting styles in their uncles' and aunts' families, where their cousins are not allowed to own mobile phones, and worried that this might influence their parents to adopt such restrictions in their family as well. A few participants (16%, N=3 parents and 5%, N=1 teen) specifically said that they did not see any need to include their extended family, as they are not close. "I think once it reaches like extended families, like grandparents and aunts and uncles, it could get a little bit harmful because these are people you aren't living with or aren't as close.
I think mostly there would be just drama with my parents, oh, they're doing this or that, you know, stressing them more.” - T13, Female, 13 years old Participants also expressed concerns about potential interpersonal conflicts that might arise. More than one-third of the participants (42%, N=8 parents and 32%, N=6 teens) envisioned that some people in the extended family might become officious and that, therefore, there would be some unnecessary questioning about their personal choice of app usage. Some participants (37%, N=7 parents and 21%, N=4 teens) also believed that there would be more incidences of family arguments when either parents or teens ignored the advice given regarding the mobile apps they installed or the permissions they granted. “But if it's for the apps, and then you’ll hear like why do you have this app for, then I don't think you have the right to do it [questioning] because they're not completely as close as my immediate family. It's like giving them a new scope to roast me for using a particular app.” - T16, Female, 14 years old §.§ Design suggestions for a joint family oversight app (RQ3) Participants suggested more control over the app's privacy features to allow only specific people to co-monitor their apps and permissions. Around half of the parents (47%, N=7) and one-fourth of the teens (26%, N=5) mentioned that they would want to keep their apps visible only to their immediate families and to some specific people in their extended families, due to the drawbacks highlighted above. A couple of teens (11%, N=2) also said they would want the ability to hide apps from their domineering older siblings. Interestingly, about one-third of the parents (32%, N=6) mentioned that they would like to hide their teens' apps from their extended family to avoid any potential tensions or conflicts. Parents often said that they would either manually take their teens' phones or ask their teens to hide their installed apps from the extended family. “If this went beyond my immediate family, my husband, and kids, then I would probably reconsider that thought. I would not want all of them to check my things, I would hide apps from them... But again, if there is any option to keep it shown for just a few people from the extended family, not all of them, that would be nice.” – P12, Female, 55 years old Additionally, a few parents (16%, N=3) and teens (21%, N=4) suggested additional features that would allow them to remotely change the apps and permissions on their extended family members' phones. These participants mostly wanted such features to help relatives who are not tech-savvy and do not live with them. They often explained that merely being able to review others' apps and notify them would not be enough to actually help them. As some of their extended family members, e.g., grandparents, do not have any technical knowledge, they would not be able to follow others' feedback and go into the settings to deny permissions. Therefore, these parents and teens expressed that they would like the ability to remotely change the apps and permissions on their family members' phones. “If we could use that with like my grandparents, I would want to change from here [CO-oPS], because they may not know how to go into the place and they can't change it. If you're able to do it here.
That would be a thing like I needed in our situation.” - T3, Female, 14 years old § DISCUSSION One of the key lessons learned was that our participants did not simply envision their whole family as their CO-oPS family network. Instead, they envisioned using this family oversight app with the people in the family who have strong bonds with them. This reconfirms one of the findings of Chouhan et al. <cit.>, whose participants were primarily motivated to help only close people. Also, when considering opening the CO-oPS family network to others in the extended family, our participants were keenly aware of the trade-off between getting more help and avoiding potential arguments and interpersonal conflicts. As a result, participants tried to negotiate this tension by limiting the visibility of their apps based on whether someone was close to them or not. Therefore, a key takeaway of our work is to make sure that family members have full agency over whom their apps are shared with. This desire for more controls contrasts with the results of Akter et al. <cit.>, where participants saw little use for privacy features within just the parent-teen context <cit.>. On the other hand, our participants wanted to include their extended family in the joint family oversight not only to receive expert advice from others, but also to provide help. For instance, parents and teens were willing to include tech-savvy family members in their CO-oPS family network who could help them with privacy and security advice and guidance. This was because they felt that the needed expertise did not exist within their immediate family. Akter et al. <cit.> found that teens did not trust the feedback of their parents, as they perceived their parents as less tech-savvy. Therefore, including the extended family might provide the dependable expert advice and guidance that teens need. Our participants also showed equal enthusiasm for providing help to other family members, especially to older adults who are less tech-savvy. Kropczynski et al. <cit.> found that older adults tend to be on the receiving end of tech support <cit.>, whereas younger adults are more likely to be tech support givers. Joint family oversight could thus support such tech caregiving mechanisms, including support for mobile privacy and security. Akter et al. <cit.> also reported that teens did not feel empowered to monitor their parents' apps and permissions and hence were reluctant to participate in joint family oversight. Teens' interest in providing help to others in the extended family, such as their grandparents, therefore shows the potential for increasing their engagement in the joint family oversight mechanism. § LIMITATIONS AND FUTURE RESEARCH We recognize several limitations of our study. First, we examined the perceived benefits and drawbacks of using the CO-oPS app with extended family members only from the parents' and teens' perspectives, as this was an emergent finding from our primary study that was worthy of further investigation. As such, it would be an important next step to explore extended family members' opinions in future work. Another potential limitation was that we presented the teens' uncle and aunt as the fictional extended family members in CO-oPS, which might have led participants to think more about the prospective problems that might arise from co-managing with these particular family members.
Interestingly, however, our participants felt that they would use the CO-oPS app mostly with their other relatives: teens' grandparents, siblings, and cousins. Lastly, because our study was lab-based, parents and teens could not evaluate the CO-oPS app in a realistic setting; thus, in future studies, we would want to deploy the CO-oPS app among groups of family members spanning diverse family relationships. § CONCLUSION Given the continued proliferation and usage of smartphones and third-party mobile apps, we believe community members can work side by side to co-manage their mobile privacy, safety, and security. Our study explored a joint family approach, highlighting parents' and teens' perceived benefits, concerns, and design considerations for co-managing their mobile privacy and security with other members of their extended families. Our work demonstrates the added benefits and challenges of broadening an oversight community beyond parents and teens. Generally, participants felt that such a collaborative approach with extended family would help them exchange privacy and security support with those they cared about. Yet, potential tensions and interpersonal conflicts would require additional controls on collaborative monitoring so that only trusted and close family members can review their mobile privacy and security behaviors. We will continue to build upon this work to examine how we can help people successfully co-manage mobile privacy, safety, and security within their families. We acknowledge the contributions of Nazmus Sakib Miazi, Nikko Osaka, Anoosh Hari, and Ricardo Mangandi in developing the CO-oPS application. We would also like to thank the parents and teens who participated in our study. This research was supported by the U.S. National Science Foundation under grants CNS-1844881, CNS-1814068, CNS-1814110, and CNS-1814439. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the U.S. National Science Foundation.